
An attempt at computing dFSCI for English language


In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design.

One of the objects was a Shakespeare sonnet.

My answer was the following:

A Shakespeare sonnet. Alan’s comments about that are out of order. I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in English (OK, ancient English).
Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

In the discussion, I admitted however that I had not really computed the target space in this case:

The only point is that I do not have a simple way to measure the target space for English language, so I have taken a shortcut by choosing a long enough sequence, so that I am well sure that the target space/search space ratio corresponds to more than 500 bits, as I have clearly explained in my post #400.
For proteins, I have methods to approximate a lower threshold for the target space. For language I have never tried, because it is not my field, but I am sure it can be done. We need a linguist (Piotr, where are you?).
That’s why I have chosen an over-generous length. Am I wrong? Well, just offer a false positive.
For language, it is easy to show that the functional complexity is bound to increase with the length of the sequence. That is IMO true also for proteins, but it is less intuitive.

That remains true. But I have reflected, and I thought that perhaps, even if I am not a linguist and not even a mathematician, I could try to quantify the target space better in this case, or at least to find a reasonable higher threshold for it.

So, here is the result of my reasoning. Again, I am neither a linguist nor a mathematician, and I will be happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize.

Let’s start from my functional definition: any text of 600 characters which has good meaning in English.

The search space for a random search where every character has the same probability, assuming an alphabet of 30 characters (letters, space, elementary punctuation), is 30^600, that is about 2^2944. IOWs, 2944 bits.
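
For concreteness, here is a minimal Python check of that figure (the 30-character alphabet and the 600-character length are just the assumptions stated above):

```python
from math import log2

ALPHABET = 30   # letters, space, elementary punctuation (the assumption above)
LENGTH = 600    # approximate length of a sonnet, in characters

print(LENGTH * log2(ALPHABET))   # -> about 2944.1 bits of search space
```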

OK.

Now, I make the following assumptions (more or less derived from a quick Internet search):

a) There are about 200,000 words in English

b) The average length of an English word is 5 characters.

I also make the easy assumption that a text which has good meaning in English is made of English words.

For a 600 character text, we can therefore assume an average number of words of 120 (600/5).

Now, we compute the possible combinations (with repetition) of 120 words from a pool of 200,000, that is C(200000 + 120 - 1, 120). The result, if I am right, is about 2^1453. IOWs, 1453 bits.

Now, obviously each combination of n words has n! permutations; therefore each of them has 120! different permutations, that is about 2^660. IOWs, 660 bits.

So, multiplying the total number of word combinations with repetitions by the total number of permutations for each combination, we have:

2^1453 * 2^660 = 2^2113

IOWs, 2113 bits.

What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200000 English words. Or at least, a good approximation of that number.

It’s a big number.
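
Both figures are easy to verify numerically. A minimal Python sketch, using the vocabulary size and word length from assumptions a) and b) above:

```python
from math import comb, factorial, log2

VOCAB = 200_000   # assumption a): number of English words
SLOTS = 120       # 600 characters / 5 characters per word (assumption b)

# combinations with repetition of SLOTS words from a pool of VOCAB: C(n + k - 1, k)
combos_bits = log2(comb(VOCAB + SLOTS - 1, SLOTS))   # -> about 1453 bits
perms_bits = log2(factorial(SLOTS))                  # -> about 660 bits

print(combos_bits + perms_bits)                      # -> about 2113 bits
```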

Now, the important concept: in that number are certainly included all the sequences of 600 characters which have good meaning in English. Indeed, it is difficult to imagine sequences that have good meaning in English and are not made of correct English words.

And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.

So, I believe that we can say that 2^2113 is a higher threshold for our target space of sequences of 600 characters which have a good meaning in English. And, certainly, a very generous higher threshold.

Well, if we take that number as a measure of our target space, what is the functional information in a sequence of 600 characters which has good meaning in English?

It’s easy: take -log2 of the ratio between target space and search space:

2^2113 / 2^2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here)

So, if we consider as a measure of our target space a number which is certainly an extremely overestimated higher threshold for the real value, still our dFSI is over 800 bits.

Let’s go back to my initial statement:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

By the way, another important result is that if I make the same computation for a 300 character string, the dFSI value is 416 bits. That is a very clear demonstration that, in language, dFSI is bound to increase with the length of the string.
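
The whole estimate can be wrapped in one function of the string length, which makes it easy to check both the 600-character and the 300-character figures (a sketch under the same assumptions; the small rounding difference from the numbers above is immaterial):

```python
from math import comb, factorial, log2

def dfsi_bits(n_chars: int, alphabet: int = 30, vocab: int = 200_000,
              word_len: int = 5) -> float:
    """Search-space bits minus the (over-generous) word-sequence target-space bits."""
    search_bits = n_chars * log2(alphabet)
    k = n_chars // word_len   # number of word slots
    target_bits = log2(comb(vocab + k - 1, k)) + log2(factorial(k))
    return search_bits - target_bits

print(dfsi_bits(600))   # -> about 831 bits
print(dfsi_bits(300))   # -> about 415 bits (rounded to 416 above)
```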

928 Responses to An attempt at computing dFSCI for English language

  1. Has anyone discussed/considered Roger Penrose’s criticism of algorithmic consciousness as presented in his works like “The Emperor’s New Mind”? There he uses the incompleteness theorems to argue that mental activity, like the ability of Shakespeare for example, exceeds what can be accounted for in terms of algorithmic search. He proposes a non-algorithmic quantum effect. I am not qualified to give more than an interested lay person’s perspective. I am reading William Dembski’s “Being as Communion” and also don’t see this proposal from Penrose and Hameroff being discussed.

  2. mullerpr:

    I am a big fan of Penrose’s argument, even if I don’t necessarily agree with his proposed explanatory model for consciousness.

    You may also be interested in this paper:

    http://www.blythinstitute.org/.....tlett1.pdf

    which explores similar concepts.

    In my opinion, consciousness is a primary reality, which has its laws and powers. Its fundamental ability to always be able to go to a “metalevel” in respect to its contents and representations, due to the transcendental nature of the “I”, is the true explanation for Turing’s theorem and its consequences, including Penrose’s argument.

    The same is true for design: it is a product of consciousness, and that’s the reason why it can easily generate dFSCI, while nothing else in the universe can.

    The workings of consciousness use the basic experiences of meaning (cognition), feeling (purpose) and free will. Design is the result of those experiences. dFSCI is the magic result of them.

  3. Thank you gpuccio, this is exactly the way I interpret the work of Penrose in this regard. I also see that more and more people consider consciousness from a non-materialistic, non-reductionist perspective. I actually read Thomas Nagel’s “Mind and Cosmos” before “Being as Communion”, and it was refreshing to see Dembski incorporating the proposed teleology from Nagel. I see this as a far more rational metaphysics than naturalism or materialism.

    I am aware that Penrose sees his approach as only non-reductionist and non-algorithmic, but still materialist… However, I think Penrose, Nagel and Dembski (and others) are independently closing in on a post-materialist and/or post-classical-mechanics explanation of reality. This looks like the stuff of a scientific revolution!

  4. mullerpr:

    “I am aware that Penrose sees his approach as only non-reductionist and non-algorithmic, but still materialist… However, I think Penrose, Nagel and Dembski (and others) are independently closing in on a post-materialist and/or post-classical-mechanics explanation of reality. This looks like the stuff of a scientific revolution!”

    It does! And it is. 🙂

    And ID theory has a very important role in that scenario.

  5. mullerpr:

    The quantum level is certainly fundamental to understand conscious processes. But I think that it works as an interface between consciousness and the brain, a la Eccles. That’s how conscious experiences and material events can exchange information without violating any physical law. That’s how we design and, very likely, how the biological designer designed biological things.

  6. This is just for fun, but uh, er, hm, how did you skip from those 30 character options right up to words?

    For a 600 character text, we can therefore assume an average number of words of 120

    Shouldn’t that be “120 groups of characters”? What I mean is, it seems like there are many, many more strings of characters that are not good ol’ words. Was this just one aspect of your kind conservatism in the math, or did I miss something?

  7. On the consciousness/brain interface I also agree, and I find it very interesting that some serious science projects aim at finding the physical structure that can instantiate a mind from the fundamental property of consciousness in nature…
    Allen Institute’s Christof Koch on Computer Consciousness | MIT Technology Review

    http://www.technologyreview.co.....conscious/

    This I also see as just another agreement that mind is not matter, and mind is the only known design capable entity.

  8. Tim:

    What do you mean?

    I am trying to compute the target space, that is the set of sequences that have good meaning in English. IOWs, sequences which are made of English words.

    If a sequence is made of other groupings of characters which are not English words, it will not have a good meaning in English and it will not be part of the target space.

    Did I miss something?

  9. gpuccio,

    You’re repeating your earlier mistakes.

    I already showed, in the other thread, that the dFSCI calculation is a complete waste of time:

    gpuccio,

    We can use your very own test procedure to show that dFSCI is useless.

    Procedure 1:

    1. Look at a comment longer than 600 characters.
    2. If you recognize it as meaningful English, conclude that it must be designed.
    3. Perform a pointless and irrelevant dFSCI calculation.
    4. Conclude that the comment was designed.

    Procedure 2:

    1. Look at a comment longer than 600 characters.
    2. If you recognize it as meaningful English, conclude that it must be designed.
    3. Conclude that the comment was designed.

    The two procedures give exactly the same results, yet the second one doesn’t even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing.

    Even your own test procedure shows that dFSCI is useless, gpuccio.

  10. I think the issue is the input info required just to be able to search for English words cannot be discounted… It should at least be a dictionary full of words to be added as input to the search algorithm.

    Did I miss something?

  11. Another comment from that thread worth reposting here:

    gpuccio,

    We’ve been over this many times, but the problem with your dFSCI calculations is that the number they produce is useless.

    The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection. No evolutionary biologist thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved. By neglecting selection, your dFSCI number is answering a question that no one is asking. It’s useless.

    There is a second aspect of dFSCI that is a boolean (true/false) variable, but it depends on knowing beforehand whether or not the structure in question could have evolved. You can’t use dFSCI to show that something couldn’t have evolved, because you already need to know that it couldn’t have evolved before you attribute dFSCI to it. It’s hopelessly circular.

    What a mess. The numerical part of dFSCI is useless because it neglects selection, and the boolean part is also useless because the argument that employs it is circular.

    dFSCI is a fiasco.

  12. keith s, you are not a very critical thinker are you?

    What about your objection supports your assertions? Can you highlight it, maybe? My search for an argument failed, but you seem to be convinced there is an argument. So, go for it… What would it be?

  13. Reposting another comment comparing the flaws of CSI, FSCO/I, and dFSCI:

    Learned Hand, to gpuccio:

    Dembski made P(T|H), in one form or another, part of the CSI calculation for what seem like very good reasons. And I think you defended his concept as simple, rigorous, and consistent. But nevertheless you, KF, and Dembski all seem to be taking different approaches and calculating different things.

    That’s right.

    Dembski’s problems are that 1) he can’t calculate P(T|H), because H encompasses “Darwinian and other material mechanisms”; and 2) his argument would be circular even if he could calculate it.

    KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it.

    Gpuccio avoids KF’s problem by explicitly leaving Darwinian mechanisms out of the numerical calculation. However, that makes his numerical dFSCI value useless, as I explained above. And gpuccio’s dFSCI has a boolean component that does depend on the probability that a sequence or structure can be explained by “Darwinian and other material mechanisms”, so his argument is circular, like Dembski’s.

    All three concepts are fatally flawed and cannot be used to detect design.

  14. keith s:

    Old stuff.

    Have you anything to say about this post?

    Have you anything to say about the computation?

  15. Reposting this one, also:

    gpuccio, to Learned Hand:

    I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin.

    gpuccio,

    That is true for Dembski’s CSI, but not your dFSCI. And as I pointed out above, Dembski’s CSI requires knowing the value of P(T|H), which he cannot calculate. And even if he could calculate it, his argument would be circular.

    Your “solution” makes the numerical value calculable, at the expense of rendering it irrelevant. That’s a pretty steep price to pay.

    There are indeed different approaches to a formal definition of CSI and of how to compute it,

    Different and incommensurable.

    a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space.

    Which is already a problem, because evolution does not seek out predefined targets. It takes what it stumbles upon, regardless of the “specification”, as long as fitness isn’t compromised.

    b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function.

    c) I define a subset of FSI: those objects exhibiting digital information.

    d) I define dFSI the -log2 of the ratio of the target space / the search space.

    This is why the numerical value of dFSCI is irrelevant. Evolution isn’t searching for that specific target, and even if it were, it doesn’t work by random mutation without selection. By omitting selection, you’ve made the dFSCI value useless.

    e) I categorize the value of dFSI according to an appropriate threshold (for the system and object I am evaluating, see later). If the dFSI is higher than the threshold, I say that the object exhibits dFSCI (see later for the evaluation of necessity algorithms)

    To infer design for an object, the procedure is as follows:

    a) I observe an object, which has its origin in a system and in a certain time span.

    b) I observe that the configuration of the object can be read as a digital sequence.

    c) If I can imagine that the object with its sequence can be used to implement a function, I define that function explicitly, and give a method to objectively evaluate its presence or absence in any sequence of the same type.

    d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object.

    e) Once I have defined a function which is implemented by the object, I define the search space (usually all the possible sequences of the same length).

    f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function.

    h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance. My procedure is an empiric procedure. If an algorithmic explanation is available, that’s fine. If no one is available, I go on with my procedure.

    Which immediately makes the judgment subjective and dependent on your state of knowledge at the time. So much for objectivity.

    i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation).

    Again, this is useless because nobody thinks that complicated structures or sequences come into being by pure random variation. It’s a numerical straw man.

    l) If the functional complexity of the sequence I observe is higher than the threshold (IOWs, if the sequence exhibits dFSCI), and if I am aware of no explicit algorithm available in the system which can explain the sequence, then I infer a design origin for the object. IOWs, I infer that the specific configuration which implements that function originated from a conscious representation and a conscious intentional output of information from a designer to the object.

    In other words, you assume design if gpuccio is not aware of an explicit algorithm capable of producing the sequence. This is the worst kind of Designer of the Gaps reasoning. It boils down to this: “If gpuccio isn’t aware of a non-design explanation, it must be designed!”

  16. keith s:

    It may seem that I have paid you to increase the number of the comments in my OP. 🙂

    Good job!

  17. How would natural selection translate into a search algorithm with a non-random search capability of selecting benefit? I suppose survival or “more” successful replication also has a “say” in this so-called “almost stochastic” system of evolving flagella. That looks like a very information-rich search scenario to me. The information from the combined “environment & survival” system fascinates me most… Just how much and what kind of information must be available in that system? (I suspect keith s doesn’t see it as problematic, but at least Jerry Fodor sees it)

    http://www.amazon.com/What-Dar.....0374288798

    Did I miss something?

  18. gpuccio:

    Old stuff.

    Devastating stuff. Why should my criticisms change when your dFSCI concept hasn’t?

    Have you anything to say about this post?

    Yes. It repeats the errors that I point out in the comments I just reposted.

    Have you anything to say about the computation?

    Yes. The computation is useless, for reasons that I explain in the comments I just reposted.

  19. mullerpr:

    “I think the issue is the input info required just to be able to search for English words cannot be discounted… It should at least be a dictionary full of words to be added as input to the search algorithm.”

    You are perfectly right. That’s what Dembski and Marks call “added information”. The best is always Dawkins, with his magic algorithm which can find a phrase that it already knows!

    And if they had the whole English dictionary in the algorithm, they could still only easily find the subset of good words; the task of finding the subset of the subset, passages with good meaning, would remain insurmountable.

    And if they had vast catalogues of well formed sentences, they could only find those sentences which they have, or similar to them. Still, a 600 character passage of original meaning would be out of range.

    That’s why no algorithm can generate original language: algorithms have no idea of what meaning is, they can only recycle passively the meanings that have been “frozen” in them.

    That’s why dFSCI is a sure marker of design. Unfortunately for keith! 🙂

    (keith, I am still waiting for a false positive. You can use this thread, so my comments will increase even more…)

    It may seem that I have paid you to increase the number of the comments in my OP.

    I often wonder whether the hostility and inanity of most of the anti-ID people is due to their being double agents who produce incoherent, irrelevant comments to make the pro-ID people look good. Or whether they are mindless and egged on by someone who is a double agent.

  21. keith s:

    “Yes. The computation is useless, for reasons that I explain in the comments I just reposted.”

    Good. My only interest here is that the computation is correct. 🙂

  22. jerry:

    Now you found out!

    If you are interested, keith is not very expensive… 🙂

  23. mullerpr:

    How would natural selection translate into a search algorithm with a non-random search capability of selecting benefit?

    That’s as silly as asking “How does an unintelligent sieve know how to sort particles non-randomly by size?”

    The organisms with the beneficial mutations are the ones that do best at surviving and reproducing.

  24. keith s:

    And as I pointed out above, Dembski’s CSI requires knowing the value of P(T|H), which he cannot calculate.

    That is incorrect as CSI does not appear in that paper. Also it is up to you and yours to provide “H” and you have failed to do so. Stop blaming us for your failures.

    The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection.

    What is there to select if nothing works until it is all together? Why can’t keith s show that the addition of natural selection would change the calculation?

    AGAIN, CSI and dFSCI exist REGARDLESS of how they arose. The point of using them as intelligent design indicators is that every time we have observed them and known the cause, it has ALWAYS been via intelligent design. We have NEVER observed nature producing CSI or dFSCI.

    There isn’t anything in ID that prevents nature from producing CSI and there isn’t anything in the equation that neglects natural selection. keith s is all fluff.

  25. keith s:

    The organisms with the beneficial mutations are the ones that do best at surviving and reproducing.

    That is too vague to be of any use.

  26. Mullerpr

    Keith S is the village dirt worshipper; he believes that dirt not only made itself but magically became alive all by itself… matter, in Keith’s opinion, can create CSI and can build anything using unguided processes: highly complicated engineering marvels just poof into existence, and “nothing” can do it a trillion times better than a designer…

    You’ve really missed nothing…

  27. mullerpr:

    NS can do almost nothing. Don’t believe the neo-Darwinian propaganda. They have nothing.

    At the biochemical level, where an enzyme is needed, or a wonderful biological machine like ATP synthase, NS is powerless. I have challenged anyone here to offer even the start of an explanation for just two subunits of ATP synthase, alpha and beta.

    Look here, point 3:

    http://www.uncommondescent.com.....on-part-1/

    Together, the two chains are 553 + 529 = 1082 AAs long.

    That is a search space of 4676 bits, greater than the Shakespeare sonnet.

    Together, they present 378 perfectly conserved amino acid positions from LUCA to humans, which point to a functional complexity of at least 1633 bits, probably greater than the Shakespeare sonnet (we cannot say for certain, because we have only lower thresholds of complexity, 831 bits for the sonnet, 1633 for the molecule, but the molecule seems to win!).

    Interesting, isn’t it?
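
    A quick numerical check of those figures, for anyone interested (a minimal sketch; it assumes the standard 20 amino-acid alphabet and truncates to whole bits):

    ```python
    from math import log2

    AA_ALPHABET = 20              # standard amino acids
    chain_length = 553 + 529      # alpha + beta subunits, as cited above
    conserved = 378               # perfectly conserved positions, as cited above

    print(int(chain_length * log2(AA_ALPHABET)))   # -> 4676 bits of search space
    print(int(conserved * log2(AA_ALPHABET)))      # -> 1633 bits, the lower bound
    ```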

  28. Any comments on the computation itself?

  29. True story about them not having anything….

    Still waiting for Keith S to explain how unguided evolution built multiple stability control mechanisms in cells…

    Nothing yet

  30. Gpuccio, I am no biologist, but reading as much as I can of James Shapiro’s Evolution: A View from the 21st Century has made it abundantly clear that genetic variation is far more complex and system-driven than ever before realised.

    It seems as if the only gaps being filled are the ones caused by Darwinian ignorance. So sad to see their treasured dogma creating an explanatory vacuum in their minds… It must be painful not to be able to move forward in science.

  31. gpuccio,

    Any comments on the computation itself?

    A correct computation of an irrelevant number is still irrelevant, so it doesn’t matter whether the computation is correct.

    Evolution includes selection, and your number fails to take selection into account.

  32. gpuccio:

    NS can do almost nothing. Don’t believe the neo-Darwinian propaganda. They have nothing.

    Baghdad Bob:

    There are no American infidels in Baghdad. Never!

  33. keith s, are you kidding me?
    You said:
    “That’s as silly as asking “How does an unintelligent sieve know how to sort particles non-randomly by size?”

    Who made the sieve that can size particles? Do you know many sieve-like things in nature? How does size distribution become information?

    You really don’t think your thoughts through do you, keith?

  34. What’s the point?

    ID critics can’t even manage to admit that their own posts here at UD are intelligently designed. I think perhaps it’s time for us to consider seriously the idea that they aren’t.

  35. Hi gpuccio

    I am afraid I can’t comment on the calculation as my maths is not good enough, but I do wonder about the point of it. I had always thought dFSCI and its variants were a tool to detect design. But I think I remember you saying that is not the case, but that dFSCI can be identified in all strings that we know are designed. So my question is: where do you go from there? Let’s say you are right and you have discovered that this “thing” is present in all passages of recognisable text. What do we then use this finding for? What can we achieve with it? (Given that we can’t use it to analyse a passage of unrecognisable text (let alone a flagellum) to determine whether it was designed or not.)

  36. gpuccio

    OT:

    Sorry to post this OT link in your new OP, but I thought you would like to check it out – see two consecutive posts in this link:

    http://www.uncommondescent.com.....ent-527182

  37. 5for:

    From the other thread:

    Me_Think at #644:

    “gpuccio explained that dFSCI doesn’t detect design, only confirms if a design is real design or apparent design.”

    I don’t understand what you mean.

    dFSCI is essential to distinguish between true design and apparent design, therefore it is an essential part of scientific design detection.

    If you are not able to distinguish between true design and apparent design, you are not doing design detection; you are only recognizing the appearance of design, which is not a scientific procedure because it has a lot of false positives and a lot of false negatives. So, mere recognition of the appearance of design is not scientific design detection.

    On the contrary, dFSCI eliminates the false positives, and design detection becomes a scientific reality. Therefore, dFSCI is an essential part of scientific design detection.

    Surely you can understand such a simple concept, can’t you?

    And from another post:

    Learned Hand:

    I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin.

    This is true, simple and beautiful. It is the only objective example of something which can only derive from a conscious intentional cognitive process.

    There are indeed different approaches to a formal definition of CSI and of how to compute it, and of how to interpret the simple fact that it is a mark of design. I have tried to detail my personal approach, mainly by answering the many objections of my kind interlocutors. And yes, there are slight differences between my approach and, for example, Dembski’s, especially after the F. My approach is essentially a completely pragmatic formulation of the EF.

    In brief.

    a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space.

    b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function.

    c) I define a subset of FSI: those objects exhibiting digital information.

    d) I define dFSI the -log2 of the ratio of the target space / the search space.

    e) I categorize the value of dFSI according to an appropriate threshold (for the system and object I am evaluating, see later). If the dFSI is higher than the threshold, I say that the object exhibits dFSCI (see later for the evaluation of necessity algorithms)

    To infer design for an object, the procedure is as follows:

    a) I observe an object, which has its origin in a system and in a certain time span.

    b) I observe that the configuration of the object can be read as a digital sequence.

    c) If I can imagine that the object with its sequence can be used to implement a function, I define that function explicitly, and give a method to objectively evaluate its presence or absence in any sequence of the same type.

    d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object.

    e) Once I have defined a function which is implemented by the object, I define the search space (usually all the possible sequences of the same length).

    f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function.

    h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance. My procedure is an empiric procedure. If an algorithmic explanation is available, that’s fine. If no one is available, I go on with my procedure.

    i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation).

    l) If the functional complexity of the sequence I observe is higher than the threshold (IOWs, if the sequence exhibits dFSCI), and if I am aware of no explicit algorithm available in the system which can explain the sequence, then I infer a design origin for the object. IOWs, I infer that the specific configuration which implements that function originated from a conscious representation and a conscious intentional output of information from a designer to the object.

    m) Why? This is the important point. This is not a logical deduction. The procedure is empirical. It can be applied as it has been described. The simple fact is that, if applied to any object whose origin is independently known (IOWs, we can know if it was designed or not, so we use it to test the procedure and see if the inference will be correct) it has 100% specificity and low sensitivity. IOWs, there are no false positives.

    IOWs, there is no object in the universe (of which we can know the origin independently) for which we would infer design by this procedure and be wrong.
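
    In code, the final inference step reduces to a simple Boolean test (a sketch only; the function name and signature are illustrative, not a formal part of the procedure, and the inputs are the quantities produced by steps f, h and i above):

    ```python
    def exhibits_dfsci(dfsi_bits: float, threshold_bits: float,
                       known_algorithm_available: bool) -> bool:
        # Design is inferred only when the functional information exceeds the
        # threshold chosen for the system AND no explicit algorithm available
        # in the system is known to explain the sequence.
        return dfsi_bits > threshold_bits and not known_algorithm_available

    # e.g. the sonnet estimate above: exhibits_dfsci(831, 500, False) -> True
    ```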

    Now, I will do a quick test. There are 560 posts in this thread. While I know independently that they are designed things, for a lot of reasons, I state here that any post here longer than 600 characters, and with good meaning in English, is designed. And I challenge you to offer any list of characters longer than 600, as many as you like, where you can mix two types of sequences: some are true posts in good English, with a clear meaning, taken from any blog you like. Others will be random lists of characters, generated by a true random character generator software.

    Well, hear me! I will recognize all the true designed posts, and I will never make a falsely positive design inference for any of the other lists.

    Now, you can try any trick. You can add posts in languages that I don’t know. You can add encryption of true posts that I will not recognize. Whatever you like. I will not recognize their meaning, and I will not infer design. They will be false negatives. You know, my procedure has low sensitivity.

    However, I will infer design for all the posts which have good meaning in English, and I will be right. And I will never infer design for a sequence which is the result of a random character generator.

    What about algorithms? Well, you can use any algorithm you like, but without adding any information about what has good meaning in English. IOWs, you cannot use the Weasel algorithm, where the outcome is already in the system. You cannot use an English dictionary, least of all a syntax correction software. Again, that would be recycling functional information, not generating it. But you can use an algorithm which generates sequences according to the Fibonacci series, if you like. Or an algorithm which takes a random character and generates lists with 600 same characters. Whatever you like. Because I am not using order as a form of specification. I am using meaning. And meaning cannot be generated by necessity algorithms.

    So, if I see a sequence of 600 A, I will not infer design for it. But for a Shakespeare sonnet I will.

    This is a challenge. My procedure works. It works not because it is a logical theorem. Not because I have hidden some keithian circularity in it (why should a circular procedure work, at all?). It works because we can empirically verify that it works.

    IOWs, there could be sequences which are not designed, and which are not obvious results of an algorithm, and which have high functional information. There could be. It is not logically impossible.

    But none of those sequences is known. They simply don’t exist. In the known universe, of all the objects of which we know the origin, only designed objects will be inferred as designed by the application of my procedure. Again, falsify this statement if you can. Offer one false positive. One.

    Except for… Except, obviously, for biological objects. They are the only known objects in the universe which exhibit dFSCI, tons of it, and of which we don’t know the origin.

    But that is exactly the point. We don’t know their origin. But they exhibit dFSCI. In tons.

    So, I infer design for them (or at least, for those which certainly exhibit dFSCI).

    Is any algorithm known explicitly which could explain the functional information, say, in ATP synthase?

    No. There is nothing like that. There is the RV + NS. But it cannot explain that. Not explicitly. Only dogma supports that kind of explanation.

    The simple fact is: both complex language and complex function never derive from simple necessity algorithms. You cannot write a Shakespeare sonnet by a simple mathematical formula. You cannot find the sequence of ATP synthase by a simple algorithm. Maybe we could do it by a very complex algorithmic search, which includes all our knowledge of biochemistry, present and future, and supreme computational resources. We are still very distant from that achievement. And the procedure would be infinitely more complex than the outcome, and it would require constant conscious cognition (design).

    Well, I have not been so brief, after all.

    Now, if there are parts of my reasoning which are not clear enough, just ask. I am here.

    Or, if you just want to falsify my empirical procedure, offer a false positive. I am here.

    More likely, you can simply join keith in the group of the denialists. But at least, you will know more now of what you are denying.

    I apologize for answering by quoting answers to others, but really I cannot follow a crowd of people who ask the same things. My main purpose here was to verify the computation with the help, or the criticism, of all.

  38. Another comment worth reposting:

    Learned Hand,

    We’ve tumbled into a world where Logic is not spoken.

    KF and gpuccio claim that FSCO/I and dFSCI are useful. Gpuccio suggested a test procedure to prove this.

    Yet both KF and gpuccio admit that you don’t even need to do the calculation. It reveals absolutely nothing that you didn’t already know.

    Why would anyone bother?

    Gpuccio, can you come up with a test procedure in which dFSCI actually does something useful, for a change?

    It’s pretty clear why you and KF don’t submit papers on this stuff. Even an ID-friendly journal would probably reject it, unless they were truly desperate.

  39. keith:

    You are really beyond comprehension.

    You repost a post where you say:

    “Yet both KF and gpuccio admit that you don’t even need to do the calculation.”

    as a comment to an OP where I have done the calculation?

    I will never understand you!

  40. Dionisio:

    Thank you, as always. 🙂

  41. keith:

    Why do you both repost and link to the original?

    Is that functional redundancy? A secret aspiration to robustness? An attempt to reach an atemporal singularity?

  42. 5for,

    Here is my response to the second part of gpuccio’s #37:

    gpuccio, to Learned Hand:

    I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin.

    gpuccio,

    That is true for Dembski’s CSI, but not your dFSCI. And as I pointed out above, Dembski’s CSI requires knowing the value of P(T|H), which he cannot calculate. And even if he could calculate it, his argument would be circular.

    Your “solution” makes the numerical value calculable, at the expense of rendering it irrelevant. That’s a pretty steep price to pay.

    There are indeed different approaches to a formal definition of CSI and of how to compute it,

    Different and incommensurable.

    a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space.

    Which is already a problem, because evolution does not seek out predefined targets. It takes what it stumbles upon, regardless of the “specification”, as long as fitness isn’t compromised.

    b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function.

    c) I define a subset of FSI: those objects exhibiting digital information.

    d) I define dFSI the -log2 of the ratio of the target space / the search space.

    This is why the numerical value of dFSCI is irrelevant. Evolution isn’t searching for that specific target, and even if it were, it doesn’t work by random mutation without selection. By omitting selection, you’ve made the dFSCI value useless.

    e) I categorize the value of dFSI according to an appropriate threshold (for the system and object I am evaluating, see later). If the dFSI is higher than the threshold, I say that the object exhibits dFSCI (see later for the evaluation of necessity algorithms)

    To infer design for an object, the procedure is as follows:

    a) I observe an object, which has its origin in a system and in a certain time span.

    b) I observe that the configuration of the object can be read as a digital sequence.

    c) If I can imagine that the object with its sequence can be used to implement a function, I define that function explicitly, and give a method to objectively evaluate its presence or absence in any sequence of the same type.

    d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object.

    e) Once I have defined a function which is implemented by the object, I define the search space (usually all the possible sequences of the same length).

    f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function.

    h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance. My procedure is an empiric procedure. If an algorithmic explanation is available, that’s fine. If no one is available, I go on with my procedure.

    Which immediately makes the judgment subjective and dependent on your state of knowledge at the time. So much for objectivity.

    i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation).

    Again, this is useless because nobody thinks that complicated structures or sequences come into being by pure random variation. It’s a numerical straw man.

    l) If the functional complexity of the sequence I observe is higher than the threshold (IOWs, if the sequence exhibits dFSCI), and if I am aware of no explicit algorithm available in the system which can explain the sequence, then I infer a design origin for the object. IOWs, I infer that the specific configuration which implements that function originated from a conscious representation and a conscious intentional output of information from a designer to the object.

    In other words, you assume design if gpuccio is not aware of an explicit algorithm capable of producing the sequence. This is the worst kind of Designer of the Gaps reasoning. It boils down to this: “If gpuccio isn’t aware of a non-design explanation, it must be designed!”

  43. gpuccio,

    You repost a post where you say:

    “Yet both KF and gpuccio admit that you don’t even need to do the calculation.”

    as a comment to an OP where I have done the calculation?

    Of course. That’s my point. As I said above:

    A correct computation of an irrelevant number is still irrelevant, so it doesn’t matter whether the computation is correct.

    Evolution includes selection, and your number fails to take selection into account.

  44. gpuccio,

    Why do you both repost and link to the original?

    So that readers can see the comment in its original context, if they desire.

  45. 20 Jerry

    I often wonder whether the hostility and inanity of most of the anti-ID people is due to their being double agents who produce incoherent, irrelevant comments to make the pro-ID people look good. Or whether they are mindless and egged on by someone who is a double agent.

    Sometimes I’ve thought of that too. The irrational nature of the anti-ID attacks and the clueless commentaries of the ‘n-D e’ folks make me think those guys are paid double agents just pretending. Who knows? Maybe it’s true? It would be disappointing to discover they use this tricky tactic in this blog. That’s why I try hard to avoid falling into the traps of their senseless arguments, but sometimes I can’t resist the temptation to get involved in the discussions too. My bad. Fortunately, my comments are often completely ignored by most commenters, hence I don’t last long in those discussion threads.

    🙂

  46. In these and other posts and comments on his blog, Joe explains how to measure CSI. kairosfocus, Barry, and other IDists, gaze upon the brilliant words of your fellow traveler and ilk (just two of kairosfocus’s favorite attack terms when he constantly lumps, slanders, and falsely accuses “evomats”, atheists, agnostics, scientists, alleged “enablers”, anti-ID blogs that he calls “fever swamps”, etc., etc., etc.):

    http://intelligentreasoning.bl.....ified.html

    http://intelligentreasoning.bl.....epost.html

    There’s more here: http://intelligentreasoning.blogspot.com/

  47. Mullerpr@33: “Who made the sieve that can size particles? Do you know many sieve-like things in nature? How does size distribution become information?”

    Have you ever walked on a beach? That is only possible because of a non-designed sieve-like thing.

  48. Reality:

    Was that a comment on my computation? If it is, it’s very subtle.

  49. 20 jerry

    GP wrote that at least one of them is not a very expensive double agent. I wonder how much the blog pays them?

    I could use a few bucks now and then… maybe this is one of the ‘make easy money online’ ads I’ve seen out there?

    Perhaps if I practice writing nonsense and asking senseless questions I could pretend to be one of those guys and get hired by this blog as another anti-ID double agent?

    Got to find out how to submit my application. Do they require a CV or résumé too? Probably no photo ID or any other ID required, because they hire anti-ID pretenders. 🙂

  50. Keith s,
    1) Despite your admiration, natural selection serves as a subtractive force in a search – it reduces the number of spaces searched. It doesn’t directly affect either the target space, or the search space, numerically – it simply reduces the number of tries (think of it as rolling 10 dice, and then removing all the 2’s and 3’s – you’ve reduced your ability to hit the target if the target requires 2’s and 3’s). Natural selection makes it harder for evolution to get a good result, not easier. One reason for the ready acceptance of neutral theory is to improve the odds hurt by NS.

    2) Your step should read “conclude that it must have function”, not “conclude that it must be designed.” When you fix this glaring error in your “logic”, it is obvious you have completely misstated the issue. The process is detect function/specificity, calculate complexity, determine design – not detect design, calculate complexity, determine design. It is certainly possible to detect function (e.g. computer generates “sky is blue”) without design (search was random). Your logic fails.

    gpuccio,
    I think there’s an issue with this:

    2^2113 / 2^ 2944 = 2^831. IOWs, 831 bits.

    From a strictly mathematical sense, your ratio is inverted.

  51. Friends:

    I am honored by the many comments, but still I would like to outline a few points in the OP which could be of interest, if someone wants to consider them (keith is exonerated; I don’t want him to waste time on irrelevant things, when he has to work so hard at reposting here).

    So, if the computation here is correct, a few interesting things ensue:

    1) It is possible to compute the target space, and therefore dFSCI, for specific search spaces by some reasonable, indirect method. Of course, each space should be analyzed with appropriate methods.

    2) Nobody seems to claim that he knows some simple algorithm which can write a passage of 600 characters which has good meaning in English. Where are all those objections about how difficult it is to exclude necessity, and about how that generates circularity, and about how that is bound to generate many false positives?

    The balance at present:

    a) Algorithms proposed to explain Shakespeare’s sonnet (or any other passage of the same length in good English): none.

    b) False positives proposed: none.

    c) True positives found: a lot. For example, all the posts in the last thread that were longer than 600 characters (there were a few).

    3) We have a clear example that functional complexity, at least in the language space, is bound to increase hugely with the increase in length of the string. This is IMO an important result, very intuitive, but now we have a mathematical verification. Moreover, while the above reasoning is about language, I believe that it is possible in principle to demonstrate it also for other functional spaces, like software and proteins.

    Any comments? Maybe there is some room left in the intervals between one of keith’s reposts and the following. 🙂

  52. #46 Reality

    It would be appreciated if “off topic” commentaries are explicitly labeled as OT (for example see post 36).

    Thus the readers can skip the post when they see the label ‘OT’ at the beginning of the comment.

    BTW, are you out of touch with the meaning of your pseudonym? 🙂

    Oops! Just realized I forgot to mark posts 45 and 49 as OT.

    My fault. Do as I say, not as I do. 🙂

  53. Dionisio:

    “Got to find how to submit my application.”

    Encrypted, of course! 🙂

  54. drc466:

    “gpuccio,

    I think there’s an issue with this:

    2^2113 / 2^ 2944 = 2^831. IOWs, 831 bits.

    From a strictly mathematical sense, your ratio is inverted.”

    Thank you! That is a stupid error.

    The ratio is correct (it’s the ratio of the target space to the search space), but the result is wrong: it should be 2^-831. Then, the -log2 becomes 831 bits of functional complexity.

    Thank you really! That’s exactly what I needed. I will immediately correct the OP.

    If you find any other error, please tell me.

  55. gpuccio @51 –

    1) It is possible to compute the target space, and therefore dFSCI, for specific search spaces by some reasonable, indirect method. Of course, each space should be analyzed with appropriate methods.

    True. But how would you define the search space for an organism? For example, what’s the search space for (say) a strain of the ‘flu virus?

  56. Let me ask something. If all English speakers died out and then Chinese scientists discovered English texts, could they calculate their dFSCI? Could they tell the meaningful from the gibberish?

    Also, what is meant by information in your calculation? The information content in this sentence is not found separately in each word but in their associations. So I could write “red happens glory fishing diamond wrangler” and although each word has meaning, the phrase itself has none. Can that be calculated or determined somehow? Can an objective number be placed on it? 12 units of meaning?

  57. gpuccio,

    Great OP. I find it fascinating.

    It is very similar to something I’ve been kicking around for comparing graphical representations of designed phenomena versus data resulting from a combination of random and algorithmic processes.

    I too would very much like to see the evidence of false positives.

    Could a critic please link to an algorithm that yields a positive number of bits using this calculation?

    If not, could said critic provide evidence that such an algorithm is at least possible in theory?

    Once we have cleared that low hurdle we can begin the discussion of whether any of this is useful or at all relevant to biology.

    one thing at a time.

    Peace

  58. Bob O’H:

    “True. But how would you define the search space for an organism. For example, what’s the search space for (say) a strain of the ‘flu virus?”

    Of course, it’s simpler to compute dFSCI for smaller items. Usually I apply it to proteins. See the example of ATP synthase.

    We could apply the concept to a whole genome, like that of a virus. The search space is not a big problem, because it can be defined as all possible nucleotide sequences of that length (4^n). But any computation of the target space will depend on the function we define for the object, and the target space can be very complex to analyze for big functional objects like a whole viral genome.

    For a protein, it is easier to define an appropriate function. Usually I prefer to stick to the “local” function, that is the biochemical activity. That is certainly the best solution for enzymes.
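
    The genome search space mentioned above is easy to express (a minimal sketch of the 4^n figure; the example genome length is hypothetical):

    ```python
    from math import log2

    def genome_search_space_bits(n_nucleotides: int) -> float:
        # all nucleotide sequences of this length: 4^n, i.e. 2 bits per position
        return n_nucleotides * log2(4)

    print(genome_search_space_bits(30_000))   # hypothetical viral genome -> 60000.0
    ```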

  59. Even Richard Dawkins believes in calculating design. His minimally designed phrase was “METHINKS IT IS LIKE A WEASEL.”

    Not quite a sonnet.

  60. Collin @ 56. I take it you understand NOTHING about ID theory. Nothing. Would you at least cop to that before I put out the effort to answer your oh so misguided questions?

  61. gpuccio, Joe commented in this thread, and since Joe claims to know all about CSI I thought that everyone here could learn all about it by looking at his brilliant explanations on his blog. Besides, linking to other sites or bringing up what has been said by others in another thread is a common action by IDists here so I don’t see why there should be any problem with my doing the same.

    Regarding your “computation”, does your “computation” have anything to do with measuring, calculating, or computing CSI in anything other than English text which is already known to be designed? For example, can and will you please measure, calculate, or compute CSI in an elephant, a cancer cell, and a galaxy cluster? Thanks in advance.

  62. Centrestream,

    Sand on the beach… Is that really going to be presented as an analogue for Natural Selection?

    I like physical necessity when it comes to things like orbital paths for planets, chemical bonds in minerals, mechanical action, etc. But the patterns created by necessity are by definition bad at carrying new information. If there is no degree of freedom, there is no information-carrying capacity.

    The clear distinction between life and the uniformity of physical processes convinced Hubert Yockey that the definition of life is informational, in contrast to non-informational physical processes… The first biological information, he concluded, is an axiomatic concept not explained by natural processes.

    Information Theory, Evolution, and The Origin of Life
    http://www.amazon.com/gp/aw/d/.....ot_redir=1

  63.

    What gpuccio is trying to do here is demonstrate an objective Turing test.

    What the critic’s algorithm needs to do is fool us into believing that it is intelligent.

    very interesting!!!!!!

    How about a simpler example to help those of us who struggle with the big numbers?

    If the “me thinks it’s a weasel” program were shown not to have smuggled in its information, how many bits would it have produced?

    peace

  64. Collin:

    Please, take the time to review my procedure, reposted by me at #37.

    Design detection by dFSCI is a procedure with 100% specificity and low sensitivity. It has no false positives, and many false negatives.

    The main reason for false negatives is that the observer cannot see the function and define it.

    So, in your example, if the text is in a language I don’t know and I don’t understand its meaning, I cannot define a function as “having good meaning in this language”. So, I will not infer design, and that will be a false negative.

    False positives, to my best knowledge, don’t exist. Unless someone here proposes one. So, if we infer design, we can be rather certain of our inference.

    Regarding information, I give no special meaning to the word: only what I have explicitly defined. Please, see my OP about that:

    http://www.uncommondescent.com.....n-defined/

    The relevant part:

    So, the general definitions:

    c) Specification. Given a well defined set of objects (the search space), we call “specification”, in relation to that set, any explicit objective rule that can divide the set in two non overlapping subsets: the “specified” subset (target space) and the “non specified” subset. IOWs, a specification is any well defined rule which generates a binary partition in a well defined set of objects.

    d) Functional Specification. It is a special form of specification (in the sense defined above), where the rule that specifies is of the following type: “The specified subset in this well defined set of objects includes all the objects in the set which can implement the following, well defined function…” . IOWs, a functional specification is any well defined rule which generates a binary partition in a well defined set of objects using a function defined as in a) and verifying if the functionality, defined as in b), is present in each object of the set.

    It should be clear that functional specification is a definite subset of specification. Other properties, different from function, can in principle be used to specify. But for our purposes we will stick to functional specification, as defined here.

    e) The ratio Target space/Search space expresses the probability of getting an object from the search space by one random search attempt, in a system where each object has the same probability of being found by a random search (that is, a system with a uniform probability of finding those objects).

    f) The Functionally Specified Information (FSI) in bits is simply –log2 of that number. Please, note that I imply no specific meaning of the word “information” here. We could call it any other way. What I mean is exactly what I have defined, and nothing more.

    IOWs, FSI is only -log2 of the probability of finding the target space. It is a measure of the functional bits, the number of bits which are absolutely necessary to implement the function. More intuitively, it’s the quantity of information necessary to implement the defined function.
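
    In code, definition f) is a one-liner (a minimal Python sketch; the function name is mine, and we work with log2 sizes directly because the raw counts are astronomically large):

        # FSI = -log2(target_space / search_space)
        #     = log2(search_space) - log2(target_space)
        def fsi_bits(log2_target, log2_search):
            return log2_search - log2_target

        # With the numbers of this OP: target <= 2^2113, search = 2^2944
        print(fsi_bits(2113, 2944))  # 831 bits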

    If you read the whole OP linked above, you may understand better my definitions.

    You say:

    “The information content in this sentence is not found separately in each word but in their associations. So I could write “red happens glory fishing diamond wrangler” and although each word has meaning, the phrase itself has none. Can that be calculated or determined somehow? Can an objective number be placed on it? 12 units of meaning?”

    No. If you follow with attention my reasoning in the OP of this thread, you will see that my functional definition is: any sequence of 600 characters which has good meaning in English.

    Therefore, for a sequence to be specified (to be part of the target space), the whole sequence must have good meaning in English.

    But, you may say that what I have computed as target space is the total number of combinations and permutations of English words in 600 characters. That’s true. But I have done that only because that way I have a higher threshold for the functional space (and therefore a lower threshold for the functional complexity). Why? Because the set of all sequences which have good meaning in English is certainly a small subset of the set of all sequences made of English words, and is included in it.

    That’s why I say:

    Now, the important concept: in that number are certainly included all the sequences of 600 characters which have good meaning in English. Indeed, it is difficult to imagine sequences that have good meaning in English and are not made of correct English words.

    And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.

    So, I believe that we can say that 2^2113 is a higher threshold for our target space of sequences of 600 characters which have a good meaning in English. And, certainly, a very generous higher threshold.
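
    For readers who want to check the 2^2113 figure, here is a short Python verification of the arithmetic (a sketch under the OP’s stated assumptions: a 30-character alphabet, 200,000 words, average word length 5):

        import math

        ALPHABET = 30            # letters, space, elementary punctuation
        WORDS = 200_000          # assumed English vocabulary
        TEXT_LEN = 600           # characters
        K = TEXT_LEN // 5        # ~120 words at 5 characters per word

        search_bits = TEXT_LEN * math.log2(ALPHABET)        # ~2944 bits

        # Combinations with repetition of 120 words from 200,000...
        comb_bits = math.log2(math.comb(WORDS + K - 1, K))  # ~1453 bits
        # ...times 120! orderings of each combination
        perm_bits = math.lgamma(K + 1) / math.log(2)        # ~660 bits

        target_bits = comb_bits + perm_bits                 # ~2113 bits
        print(round(search_bits), round(target_bits), round(search_bits - target_bits))
        # 2944 2113 831 -> at least ~831 bits of functional complexity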

    Clear?

  65. Mung: “Collin @ 56. I take it you understand NOTHING about ID theory. Nothing. Would you at least cop to that before I put out the effort to answer your oh so misguided questions?”

    A little less condescension and a little more civil discourse would be appropriate. Now that you have been corrected, could you please explain why Collin’s question is misguided? If it is the answer that I expect you to give, it will be based on a misguided understanding of evolution. Please, enlighten us.

  66. Mung, what “ID theory” are you referring to? I don’t think that Collin’s questions are misguided. They are good, relevant questions that deserve a good, relevant response.

  67. Reality:

    I have just done it for ATP synthase. Look at post #27 here.

    You may perhaps understand that the specificity of the procedure must be tested with objects of which we can assess independently the origin, and then be applied to objects whose origin is controversial. So, this discussion about language is important.

    You may perhaps understand that an elephant, a cancer cell, and a galaxy cluster are not digital sequences. So, I prefer to apply the procedure to proteins.

    Do you agree that it is a relevant application?

  68. gpuccio, you said: “Design detection by dFSCI is a procedure with 100% specificity and low sensitivity. It has no false positives, and many false negatives.”

    If that’s true it’s only because dFSCI is a useless term that is used by IDists to make it look as though scientific methods are being used to detect design, even though the alleged design detection pertains only to things that are already known to be designed.

    On another note, you said: “Design detection by dFSCI is a procedure…”. Design detection by doing what with dFSCI?

    According to Joe, CSI=dFSCI=FSC=FSCO/I. Do you agree with him?

  69. fifthmonarchyman:

    I appreciate your insightful comment!

    About “me thinks it’s a weasel”, it depends. If you define the target as that specific phrase, its length is 23 characters, the search space is about 113 bits, and as there is only one sequence which satisfies the definition, the functional space is 1 and the functional complexity is -log2 of 1/2^113, again 113 bits. Not too much, not too little. For many systems and time spans, it would be enough to infer design. After all, 10^34 is a big number.

    But if you define the function differently, for example as any sequence of that length which has good meaning in English, things change. Applying the method I have used, which is probably less precise for short sequences, the functional complexity is about 25 bits. IOWs, the probability of getting a positive result at random is about 3 in 100,000,000. Quite in the range of many random systems.
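
    A quick check of those numbers, on my reading of the arithmetic (a sketch: 23 characters rounds to about 5 words of average length 5):

        import math

        ALPHABET, WORDS = 30, 200_000
        n = 23                                    # "me thinks it's a weasel"
        search_bits = n * math.log2(ALPHABET)     # ~113 bits

        # Function = that exact phrase: a target space of one sequence
        print(round(search_bits - math.log2(1)))  # 113 bits

        # Function = any meaningful English of that length: ~5 words,
        # bounded above by 200,000^5 ordered word choices
        target_bits = 5 * math.log2(WORDS)        # ~88 bits
        print(round(search_bits - target_bits))   # 25 bits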

  70. mullerpr: Do you know many sieve like things in nature?

    Non-living nature is full of natural sieves. If not, then the Earth would be homogeneous, which it is not. Gold and salt are found concentrated in some places, water in others. Indeed, there are natural water pumps to replenish the headwaters of rivers so that the running water can continue to shape and sort the rocks. The movement of sun and moon and wind and surf make the sand on the beach.

    gpuccio: Nobody seems to claim that he knows some simple algorithm which can write a passage of 600 characters which has good meaning in English.

    If you had an oracle which could return relative meaning, and if you consider “the king” to have more meaning than “king”, the former being more specific, then an evolutionary algorithm should be able to create long sequences of meaning. Perhaps you could have the snippets read to Elizabethan audiences, and rate them by applause.

    gpuccio: An attempt at computing dFSCI for English language… I have no idea.

    The density of meaningful sonnets is an interesting question, but let’s grant that Shakespeare’s sonnets were the result of an intelligent mind.

    gpuccio: 2^2113 / 2^2944 = 2^-831

    That’s fine, but it can be shown that, given a suitable oracle, words can evolve from letters, and sentences from words. Indeed, if you reward the little genomes based on their iambic character, they’ll evolve into iambs.

    But just start with words and the 200,000 word dictionary you so kindly provided for our oracle. As the algorithm can generate sequences of words, that means your calculation becomes 2^2113 / 2^2113 = 1. We’ve already crossed a distance of 2^-831.
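
    To make the point concrete, here is a toy hill-climber (a sketch of the idea only, not Zachriel’s actual program; the tiny DICTIONARY stands in for the 200,000-word oracle):

        import random

        # The "oracle": any token found here is rewarded. A real run would load
        # a full word list; this tiny stub keeps the sketch self-contained.
        DICTIONARY = {"me", "thinks", "it", "is", "like", "a", "the", "weasel"}
        CHARS = "abcdefghijklmnopqrstuvwxyz "

        def fitness(text):
            # Total length of tokens the oracle recognizes as words.
            return sum(len(t) for t in text.split() if t in DICTIONARY)

        def mutate(text):
            i = random.randrange(len(text))
            return text[:i] + random.choice(CHARS) + text[i + 1:]

        random.seed(0)
        s = "".join(random.choice(CHARS) for _ in range(23))
        for _ in range(100_000):
            t = mutate(s)
            if fitness(t) >= fitness(s):
                s = t
        print(repr(s), fitness(s))  # dictionary words accumulate readily

    All of the selective work here is done by the dictionary, which is exactly the question about the oracle’s own complexity taken up below.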

  71.

    Another reason why it is best to start with something like an English phrase rather than biology is that logically it should be much easier to produce a false positive for a short sequence of letters than for a protein sequence.

    Again I would love to see the algorithm that can create a false positive here.

    Since we are dealing with text I see no reason such an algorithm could not be put together on a laptop with no special software.

    Come on, critics, give it a go.

    peace

  72. Reality:

    Please, read post #37 here for the procedure.

    Please, read post #661 here:

    http://www.uncommondescent.com.....mutations/

    for the acronyms.

  73. a) There are about 200,000 words in English

    Nowhere nearly enough. OED has more than that and Webster’s 2nd has more than twice as many. And that’s without including a similar number of place and personal names, all of which are valid in English text.

  74. Reality: “According to Joe, CSI=dFSCI=FSC=FSCO/I. Do you agree with him?”

    My CSI=dFSCI=FSC=FSCO/I has detected a sock. You slick devil you.

  75. Zachriel:

    Nice to hear from you.

    I agree with you: with the appropriate oracle, you can do anything.

    And so?

    The 200,000-word dictionary, for example, is rather complex as an oracle.

    What an algorithmic oracle can never do is to generate new complex meaning because it understands that meaning. Algorithmic oracles can only recycle frozen meaning.

    Conscious oracles, instead, understand meaning. It’s quite another matter.

    Please, look at this interesting paper:

    Using Turing Oracles in Cognitive Models of Problem-Solving

    http://www.blythinstitute.org/.....tlett1.pdf

  76. Roy:

    Don’t be fastidious. I took the number from the Internet.

    OK, I have redone the computation for 500,000 words. Is that enough for you?

    The dFSCI is now 673 bits. You can check yourself.

    And do you realize how much I am underestimating the dFSCI when I take the total number of sequences made of English words as target space, instead of taking the total number of sequences which have good meaning in English?

    So, don’t be fastidious.

    gpuccio: The 200,000-word dictionary, for example, is rather complex as an oracle.

    About 10^7 bits.
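
    (A rough sanity check of that order of magnitude, assuming ~5 characters per word at log2(30) bits per character:)

        import math
        # 200,000 words x 5 characters x ~4.9 bits per character
        print(200_000 * 5 * math.log2(30))  # ~4.9e6 bits, order 10^7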

    gpuccio: I agree with you: with the appropriate oracle, you can do anything.

    There has to be some reasonable continuity in relative reward or it won’t work.

    gpuccio: Algorithmic oracles can only recycle frozen meaning.

    If you define it so it can’t exist, then sure.

  78. Zachriel:

    You have always been elegant in your skirmishes. That’s why I like you! 🙂

  79. gpuccio: Algorithmic oracles can only recycle frozen meaning.

    If you can’t objectively judge the meaning of a phrase or sonnet, then you’re fairly well stuck. However, we can certainly evolve sequences of words as words are somewhat frozen by convention. Grammar, as well. This gives us strings of words, which would seemingly have more meaning than random letters.

  80. Zachriel:

    I agree with that. I remember your software.

  81.

    gpuccio,

    you said,

    if you define the function differently, for example any sequence of that length which has good meaning in English, things change.

    I say,

    exactly!!!!!!

    Think of this measure as having two axes. The X axis is the length of the sequence and the Y axis is the “meaning threshold” I’m evaluating.

    The lower on the Y axis you are, the longer the string needs to be for me to infer design.

    For someone familiar with this debate “me thinks it’s a weasel” is loaded with meaning.

    For an average Joe it might take a whole sonnet to pass the threshold.

    On the other hand, if I were looking at a string of text in Chinese, it might take a string the length of a whole play to pass the test, because I would be looking for mere arbitrary structure and grammar as opposed to English words.

    But even in that case I would be able to give the sequence a real objective value and compare it to strings that were the result of a combination of algorithmic and random processes.

    peace

  82. gpuccio, I want to think about your response and will get back to you later.

  83. centrestream:

    A little less condescension and a little more civil discourse would be appropriate.

    Civil discourse requires honesty.

  84. Reality:

    I don’t think that Collin’s questions are misguided. They are good, relevant questions that deserve a good, relevant response.

    Why do you think they are good questions?

    Why do you think they are relevant?

  85. gpuccio,

    You failed to provide a way to measure “meaning”, whatever that means.

  86. Puccini, thanks. It’s clearer.

  87. Collin:

    Puccini, thanks. It’s clearer.

    Puccini? Is that your auto-correct talking?

  88. I guess so 🙂

  89. drc466:

    1) Despite your admiration, natural selection serves as a subtractive force in a search – it reduces the number of spaces searched.

    Yes, and that’s a good thing. Maximizing the amount of space searched is not the “goal”. Searches have a cost.

    It doesn’t directly affect either the target space, or the search space, numerically – it simply reduces the number of tries…

    Evolution does not seek out specific targets. It isn’t “trying” to find the flagellum, or binocular vision, or opposable thumbs. If it stumbles on something good, whatever that happens to be, it keeps it. If it stumbles on something bad, whatever that happens to be, it tosses it.

    (The above neglects drift, of course. With drift, beneficial mutations can sometimes be lost and deleterious mutations can sometimes be fixed.)

    When you define a target space in terms of a specific function, as gpuccio does, you are making a huge mistake, because evolution is not seeking that specific target. It is seeking anything that improves fitness.

    Gpuccio compounds the error by taking the ratio of the target space to the entire search space. That makes the dFSCI number useless for anything other than a purely random search.

    Evolution is not a purely random search. It includes selection. Why waste time calculating a number that neglects selection?

    2) “If you recognize it as meaningful English, conclude that it must be designed have function.” When you fix this glaring error in your “logic”, it is obvious you have completely misstated the issue. The process is detect function/specificity, calculate complexity, determine design – not detect design, calculate complexity, determine design.

    No, the result of the calculation simply tells us that the sequence in question could not have come about by a purely random search. We knew that already, so the calculation is pointless.

    All of the work gets done by the other, boolean component of dFSCI — not the numerical value.

    And the boolean component of dFSCI boils down to what I described earlier:

    In other words, you assume design if gpuccio is not aware of an explicit algorithm capable of producing the sequence. This is the worst kind of Designer of the Gaps reasoning. It boils down to this: “If gpuccio isn’t aware of a non-design explanation, it must be designed!”

    The calculation and the result — the number of bits of dFSCI — are pure window dressing. They are designed to look mathy and sciencey, but they have no actual value and can be completely dispensed with.

  90.

    We get it, keith s: you think this is all a waste of time.

    Understood, we read you loud and clear.

    Now why not humor us and point us to an operation that combines random and algorithmic processes that is capable of giving a false positive in gpuccio’s Turing test.

    It does not have to be an evolutionary algorithm. It does not even have to include a random component; any algorithm will do.

    I promise, once you do this small thing, we will get to the nitty-gritty of explaining why we think this is so important.

    peace

  91. “I have just done it for ATP synthase”

    I don’t think you have. Ignoring other significant issues (modelling evolution from a precursor as a random, goal-oriented search), you’ve simply no idea what fraction of sequence space gives a functional ATP synthase. You guess by aligning three sequences.

    1) Nice cheat on the Archaeal sequence–using the one with maximum identity to the others. It is thought to be acquired through horizontal gene transfer. True archaeal ATP synthases have far less identity, so knock it off already with these silly 50% or whatever identity across all life #s.

    2) Evolution hits on solutions, and gets stuck in local optima. Rubisco is possibly the worst enzyme ever, but there it is, roughly the same in all plants. Designers (human) have already worked around it. That a sequence is conserved in evolution in NO way indicates it is the only solution in sequence space. Just a contingent solution that has persisted.

    3) Despite this, some ATP synthase lineages have diversified. Plug an ATP synthase from Apicomplexa into your alignment. What is that? It doesn’t align at all, save some topologies and one key arginine? So how many bits is that??? Hmm…

    http://www.ncbi.nlm.nih.gov/pu.....t=Abstract
    http://www.ncbi.nlm.nih.gov/pm.....MC2881411/
    http://www.plosbiology.org/art.....io.1000418
    http://www.nature.com/nature/j.....13776.html

  92.

    keith s said,

    In other words, you assume design if gpuccio is not aware of an explicit algorithm capable of producing the sequence.

    I say,

    I’m not sure if gpuccio is willing to go this far, but I would say that there are certain sequences that algorithms are mathematically incapable of producing.

    That means any possible algorithm.

    Would you disagree with this claim?

    peace

  93. FMM:

    Now why not humor us and point us to an operation that combines random and algorithmic processes that is capable of giving a false positive in gpuccio’s Turing test.

    Because what you are calling “gpuccio’s Turing test” isn’t a test of dFSCI at all.

    Here’s what the GTT boils down to:

    1. Present a 600-character text to gpuccio.
    2. If gpuccio recognizes it as meaningful English, then conclude that the text was designed.

    The dFSCI calculation isn’t required. It accomplishes nothing.

    I keep asking gpuccio for an example in which dFSCI actually does something useful, but he can’t come up with one.

  94.

    keith s,

    Again we understand you think this is all a waste of time

    How about this: produce an algorithm capable of producing a 600-character English text independently, without smuggling information through the back door.

    Call it whatever you want. Feel free to disregard the calculation.

    Do you think such an algorithm is even possible? What would convince you that it is not?

    peace

  95. FMM,

    This thread is about dFSCI. I’m not interested in your proposed digression.

    I would like to see an example in which dFSCI actually serves a useful purpose.

    Can you think of one? Gpuccio seems unable to.

  96.

    Well I guess I will rest my case then.

    I can think of all kinds of useful purposes.

    For example, I’m working on a method to evaluate the strength of forecasting models at my place of employment.

    The more “CSI” found in the actual data, the weaker the model will be.

    Peace

  97. FMM:

    Well I guess I will rest my case then.

    What case? Did you make an argument?

    I can think of all kinds of useful purposes [for the calculation of dFSCI]?

    Please share some of them! Gpuccio hasn’t been able to come up with any, and I’m sure he’d be grateful.

  98. Sorry — that should be a period, not a question mark, at the end of the quote.

  99. KF:

    We don’t actually need to quantify in order to recognise, but we can quantify, and the quantification helps us see how hard it is, given the atomic and temporal resources of the observed cosmos, to get beyond a very sparse search of the large config spaces implied by the possible arrangements of parts, vs the tight configurational constraints implied by the needs of interactive, specific functional organisation. KF

    GP:

    http://www.uncommondescent.com.....ent-527189

  100. The question is: do I need to calculate dFSCI to see if the sonnet is designed? The answer is: no. I can see the sonnet is designed without the need to calculate, so I am not sure what is being achieved here.

  101. Me_Think:

    The question is: do I need to calculate dFSCI to see if the sonnet is designed? The answer is: no. I can see the sonnet is designed without the need to calculate, so I am not sure what is being achieved here.

    Exactly. The calculation is completely unnecessary.

  102. keith s

    Why do you want to see a calculation?

    Is that important to you? Why?

    If an example is given, would you ask for another?

    If ten examples are provided, would you demand eleven?

    Is it possible, like someone suggested today, that you were hired by this blog to write what you write, in order to provoke certain folks to keep heated arguments, hence increase the number of posts in the discussion threads and increase the traffic in the blog?

    🙂

  103. Zachriel,
    Natural processes flowing from the uniformity of classical mechanics … Is that really going to be presented as an analogue for Natural Selection? I don’t think you saw the critical questions I asked.

    I like physical necessity when it comes to things like orbital paths for planets, chemical bonds in minerals, mechanical action etc. But the patterns created by necessity are by definition bad at carrying new information. If there is no degree of freedom, there is no information-carrying capacity.

    The clear distinction between life and the uniformity of physical processes convinced Hubert Yockey that the definition of life is informational, in contrast to non-informational physical processes… The first biological information, he concluded, is an axiomatic concept not explained by natural processes.
    Information Theory, Evolution, and The Origin of Life
    http://www.amazon.com/gp/aw/d/.....ot_redir=1

    P.S. I would like you to discuss any sieve design with a mineral processing engineer, and he/she will tell you how symmetry in the behaviour of nature does not discriminate the way his/her processing plant does.

  104. #100 Me_Think

    Does this link answer your question?:

    http://www.uncommondescent.com.....ent-527381

    Now, can you answer the questions in this link?:

    http://www.uncommondescent.com.....ent-527389

    Thank you.

  105. #103 mullerpr

    Interesting commentary. Thank you.

    BTW, I could not open the link you provided.

  106. keith s

    Exactly. The calculation is completely unnecessary.

    The purpose of a dFSCI calculation is not to convince anyone in the scientific community of its design detection worth.

    The purpose of a dFSCI calculation is merely for gpuccio to convince himself he was specially created by his loving God.

  107. #96 fifthmonarchyman

    For example I’m working on a method to evaluate the strength of forecasting models at my place of employment.

    That sounds interesting.

  108. #106 Adapa

    This is for you too:

    http://www.uncommondescent.com.....ent-527392

  109. keith s:

    gpuccio,

    We can use your very own test procedure to show that dFSCI is useless.

    Procedure 1:

    1. Look at a comment longer than 600 characters.
    2. If you recognize it as meaningful English, conclude that it must be designed.
    3. Perform a pointless and irrelevant dFSCI calculation.
    4. Conclude that the comment was designed.

    Procedure 2:

    1. Look at a comment longer than 600 characters.
    2. If you recognize it as meaningful English, conclude that it must be designed.
    3. Conclude that the comment was designed.

    The two procedures give exactly the same results, yet the second one doesn’t even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing.

    Even your own test procedure shows that dFSCI is useless, gpuccio.

    Aren’t you missing something?

    If you omit step #3 of Procedure 1 in Procedure 2, then step #3 in Procedure 2 is completely meaningless.

    The whole point of gpuccio’s “procedure” is to compare the recognition of “design” that is naturally made with the use of a particular language, and the values that are generated using dFSCI. Shouldn’t that be clear to you?

    Gpuccio’s dFSCI isn’t useless, your Procedure 2 is useless.

  110. Keith S

    Exactly. The calculation is completely unnecessary.

    But I must protest here Keith! Did you recognise a design and accept it? Design all around us then but not in biological systems? Why would that be?

    I’ll tell you why: you have to deny design in biology because, if you accept it, you have to accept that you have been created by a designer. I believe that you find that idea repugnant, and I even know why, and so do you!

  111. Dionisio asked:

    “Why do you want to see a calculation?’

    Because you IDists claim that you can calculate CSI-dFSCI-FSCO/I.

    “Is that important to you? Why?”

    To see if you can, and laugh at you when you can’t.

    “If an example is given, would you ask for another?”

    Yes.

    “If ten examples are provided, would you demand eleven?”

    Provide ten and then we’ll see.

    “Is it possible, like someone suggested today, that you were hired by this blog to write what you write, in order to provoke certain folks to keep heated arguments, hence increase the number of posts in the discussion threads and increase the traffic in the blog?”

    It’s possible but extremely unlikely. So unlikely that it’s safe to say that what you implied is incredibly childish and trollish. How old are you, 6, 7?

  112. fifthmonarchyman at #81:

    Absolutely correct! 🙂

  113. keith s:

    The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection. No evolutionary biologist thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved. By neglecting selection, your dFSCI number is answering a question that no one is asking. It’s useless.

    Why are you substituting a question regarding “irreducible complexity” for one that involves the random generation of DNA strings?

    gpuccio’s argument is not about IC.

    Yes, NS does act on what comes about randomly, and thus, there is a non-random component to the process. Nevertheless, the only thing NS does is to “eliminate” that which cannot either ‘live’ or ‘compete.’ NS doesn’t ‘form’ the DNA string, it either accepts or eliminates.

    The “sonnet” that gpuccio is using for his example represents a “protein” that is found in nature, encoded in extant DNA. There are only so many known proteins and protein families. If each DNA string that is generated—you’re not positing that DNA is generated non-randomly, are you?—is generated ‘randomly,’ then the proteins and protein families we know of are the “survivors” of NS—as in “survival of the fittest.”

    This means that the “sonnet” represents one of a number of acceptable forms of English words pieced together in a string of ‘letters’ that runs 600 letters long. It is like a protein family. The entire collection of such “combinations” represents the entirety of all such “protein families” found in nature, and, thus, presumably culled by NS from the “search space” of strings of length 600 letters.

    Your invocation of NS does nothing to change his calculations, nor his logic.

  114. keith s:

    Evolution does not seek out specific targets. It isn’t “trying” to find the flagellum, or binocular vision, or opposable thumbs. If it stumbles on something good, whatever that happens to be, it keeps it. If it stumbles on something bad, whatever that happens to be, it tosses it.

    Yes, NS “stumbles.” What is given to it comes about “randomly”; but all NS does, and can do, is either ‘eliminate’ or ‘not eliminate.’ When the “search space” is enormous, an ‘enormous’ number of ‘eliminations’ must take place. It is simply impossible for ‘nature’ to provide this enormity of possibilities. Hence, NS is rendered, except in minor ways, “useless.” The “minor ways” where NS is “useful” we call “microevolution.”

    But this is a digression, since gpuccio is simply trying to demonstrate that dFSCI calculations can avoid “false positives.”

  115. keith s:

    Again, the aim of this thread is not to re-discuss the whole issue of dFSCI and design detection, but only to propose a computation of dFSCI in language.

    I have discussed in great detail the “any possible function” argument here:

    http://www.uncommondescent.com.....mutations/

    Post #400.

    As I have already said, I don’t like repetition.

    I have discussed the role of eliminating necessity in that same thread, for example, at posts #599 and #604.

    As I have already said, I don’t like repetition.

    I have been discussing the lack of explanatory power of the RV + NS myth for years, in very great detail. You can find some thoughts on the difference between Natural Selection and Intelligent selection at post #524 of the above referenced thread, and a lot of other detailed stuff in posts of mine practically everywhere at UD.

    As I have already said, I don’t like repetition.

    But just to humor you a little, a very very brief summary:

    Negative NS is a powerful mechanism, and it works essentially against the RV + NS algorithm.

    Positive NS is almost non-existent, limited to a few irrelevant microevolutionary scenarios, and can never help generate new complex functions, because complex functions cannot be deconstructed into naturally selectable simpler steps, neither in the general case (which is required for the algorithm to work) nor in any single real example (which would at least be a start). Moreover, if positive NS had had some role in generating the biological functional information, we should see tons of traces of naturally selectable functional intermediates in the proteome. We don’t.

    Finally, genetic drift is completely irrelevant to the probabilistic computation, and in no way helps to lower the probabilistic barriers.

  116. Me_Think at #100:

    “The answer is: no. I can see the sonnet is designed without the need to calculate”

    How?

  117. keith s #101:

    “The calculation is completely unnecessary.”

    Why?

    Guys, please clarify how you can reliably infer design for the sonnet without any calculation.

  118. Adapa:

    “The purpose of a dFSCI calculation is merely for gpuccio to convince himself he was specially created by his loving God.”

    Really? What an argument. I am overwhelmed.

  119. PaV at #110:

    Absolutely correct! Thank you.

  120. Reality at #112:

    Is what you see in this OP a calculation?

  121. PaV:

    Thank you for your contributions. It’s beautiful to have you here! 🙂

  122. gpuccio,

    You get exactly the same answer whether or not you do the calculation, in 100% of the cases. Why waste time on a calculation that adds no value whatsoever?

    I repeat:

    gpuccio,

    We can use your very own test procedure to show that dFSCI is useless.

    Procedure 1:

    1. Look at a comment longer than 600 characters.
    2. If you recognize it as meaningful English, conclude that it must be designed.
    3. Perform a pointless and irrelevant dFSCI calculation.
    4. Conclude that the comment was designed.

    Procedure 2:

    1. Look at a comment longer than 600 characters.
    2. If you recognize it as meaningful English, conclude that it must be designed.
    3. Conclude that the comment was designed.

    The two procedures give exactly the same results, yet the second one doesn’t even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing.

    Even your own test procedure shows that dFSCI is useless, gpuccio.

  123. REC at #91:

    My argument about those two sequences is about their conservation in a complex molecule. You can scarcely deny that those specific sequences are necessary, with that high level of conservation, to the working of ATP synthase in its common form, and especially the form which utilizes H+ gradients.

    The Apicomplexa paper you link describes a very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution. In no way is it in contradiction with the functional specification of the sequences I examined in the traditional ATP synthase complex. I paste here the abstract of that interesting paper, for all to read:

    “Highly Divergent Mitochondrial ATP Synthase Complexes in Tetrahymena thermophila

    Abstract

    The F-type ATP synthase complex is a rotary nano-motor driven by proton motive force to synthesize ATP. Its F1 sector catalyzes ATP synthesis, whereas the Fo sector conducts the protons and provides a stator for the rotary action of the complex. Components of both F1 and Fo sectors are highly conserved across prokaryotes and eukaryotes. Therefore, it was a surprise that genes encoding the a and b subunits as well as other components of the Fo sector were undetectable in the sequenced genomes of a variety of apicomplexan parasites. While the parasitic existence of these organisms could explain the apparent incomplete nature of ATP synthase in Apicomplexa, genes for these essential components were absent even in Tetrahymena thermophila, a free-living ciliate belonging to a sister clade of Apicomplexa, which demonstrates robust oxidative phosphorylation. This observation raises the possibility that the entire clade of Alveolata may have invented novel means to operate ATP synthase complexes. To assess this remarkable possibility, we have carried out an investigation of the ATP synthase from T. thermophila. Blue native polyacrylamide gel electrophoresis (BN-PAGE) revealed the ATP synthase to be present as a large complex. Structural study based on single particle electron microscopy analysis suggested the complex to be a dimer with several unique structures including an unusually large domain on the intermembrane side of the ATP synthase and novel domains flanking the c subunit rings. The two monomers were in a parallel configuration rather than the angled configuration previously observed in other organisms. Proteomic analyses of well-resolved ATP synthase complexes from 2-D BN/BN-PAGE identified orthologs of seven canonical ATP synthase subunits, and at least 13 novel proteins that constitute subunits apparently limited to the ciliate lineage. A mitochondrially encoded protein, Ymf66, with predicted eight transmembrane domains could be a substitute for the subunit a of the Fo sector. The absence of genes encoding orthologs of the novel subunits even in apicomplexans suggests that the Tetrahymena ATP synthase, despite core similarities, is a unique enzyme exhibiting dramatic differences compared to the conventional complexes found in metazoan, fungal, and plant mitochondria, as well as in prokaryotes. These findings have significant implications for the origins and evolution of a central player in bioenergetics.

    Author Summary

    Synthesis of ATP, the currency of the cellular energy economy, is carried out by a rotary nano-motor, the ATP synthase complex, which uses proton flow to drive the rotation of protein subunits so as to produce ATP. There are two main components in mitochondrial F-type ATP synthase complexes, each made up of a number of different proteins: F1 has the catalytic sites for ATP synthesis, and Fo forms channels for proton movement and provides a bearing and stator to contain the rotary action of the motor. The two parts of the complex have to interact with each other, and critical protein subunits of the enzyme are conserved from bacteria to higher eukaryotes. We were surprised that a group of unicellular organisms called alveolates (including ciliates, apicomplexa, and dinoflagellates) seemed to lack two critical proteins of the Fo component. We have isolated intact ATP synthase complexes from the ciliate Tetrahymena thermophila and examined their structure by electron microscopy and their protein composition by mass spectrometry. We found that the ATP synthase complex of this organism is quite different, both in its overall structure and in many of the associated protein subunits, from the ATP synthase in other organisms. At least 13 novel proteins are present within this complex that have no orthologs in any organism outside of the ciliates. Our results suggest significant divergence of a critical bioenergetic player within the alveolate group.”

  124. PaV,

    If you omit step #3 of Procedure 1 in Procedure 2, then step #3 in Procedure 2 is completely meaningless.

    Exactly! I think you’re close to understanding this!

    Steps 3 and 4 are useless in procedure 1, and step 3 is useless in procedure 2.

    All of the useful work is done by steps 1 and 2:

    1. Look at a comment longer than 600 characters.
    2. If you recognize it as meaningful English, conclude that it must be designed.

    The calculation adds nothing.

    Now, could you please point this out to gpuccio before he embarrasses himself further? He won’t accept it from me, but he might from you.

    Gpuccio’s dFSCI isn’t useless, your Procedure 2 is useless.

    Procedure 1 gives exactly the same answers as Procedure 2. You say Procedure 2 is useless. Therefore, Procedure 1 is also useless.

    Excellent job, PaV. You’re a real asset to the ID team!

  125. keith s:

    As explained, your procedure 2 is the same procedure, and implies the calculation.

    Why do you speak of 600 characters? (a definite complexity threshold)

    Why do you speak of “meaningful in English”? (a definite functional specification)

    You are simply giving my procedure in its final form, without the logical explanations. My compliments!

  126. gpuccio,

    In his #110, PaV says that procedure 2 is useless:

    Gpuccio’s dFSCI isn’t useless, your Procedure 2 is useless.

    You agree wholeheartedly:

    PaV at #110:

    Absolutely correct! Thank you.

    You then tell me that procedure 1 and procedure 2 are the same:

    As explained, your procedure 2 is the same procedure, and implies the calculation.

    You and PaV agree that procedure 2 is useless. You tell me that Procedure 1 is the same as Procedure 2. Therefore, Procedure 1 is useless, according to you.

    Oops.

  127. Ok

    keith s is confused, but we can’t be certain; even he said so…

  128. The Chinese letter B is written as ‘tt’. If dFSCI is calculated for this letter, wouldn’t it be less than 500 bits? So is it designed or not?
    A splatter left on a wall by a stone falling into a water puddle by gravity, or a splatter on a wall by a stone dropped by a person into a puddle, would (I guess) have pretty much the same dFSCI. How will you distinguish between the two?
    A man-made crop circle and a similar natural crop circle would present the same problem.

  129. keith s:

    You are really trying your worst.

    The meaning is really obvious, and you are not stupid. What should I think?

    The meaning is:

    Procedure 2 is useless as a separate procedure, because it is the same as procedure 1.

    The real useless thing here is your “argument”.

  130. Me_Think:

    Thank you, you are making my argument.

    You cannot distinguish between designed things and non-designed things, unless the object exhibits functional complexity.

    Why? Because natural mechanisms, through randomness or necessity, can generate configurations that are functional, but only with low functional complexity.

    That’s why the computation of dFSCI is necessary to reliably infer design.

    Could you please explain that to keith?

  131. Perhaps CSI could be applied to the Voynich manuscript to determine if it’s designed or not. You would be doing the whole world a favour.

  132. Dionisio, the link was just to the Amazon page for Yockey’s book.

    Information Theory, Evolution, and The Origin of Life
    http://www.amazon.com/gp/aw/d/.....ot_redir=1

  133. #133 mullerpr

    Thank you.

  134. Has KairosFocus been banned from this thread?

  135. Graham2:

    I suppose that there is enough CSI in the Voynich manuscript to easily infer design for it. Even the illustrations would be enough.

    Decrypting the meaning, if there is a meaning, is quite another matter. Obviously, we cannot infer design from the meaning, if we are not sure that there is a meaning. If our inference depended only on the possible meaning (which is not the case for that object), we would not infer design unless and until a meaning is found. In the worst case, that would simply be a false negative. As said many times.

  136. #112 Reality

    D: “Why do you want to see a calculation?”

    Because you IDists claim that you can calculate CSI-dFSCI-FSCO/I.

    D: “Is that important to you? Why?”

    To see if you can, and laugh at you when you can’t.

    D: “If an example is given, would you ask for another?”

    Yes.

    D: “If ten examples are provided, would you demand eleven?”

    Provide ten and then we’ll see.

    Thank you for answering my questions.
    Now every reader of this blog can see that you have revealed, very clearly, your own motives for being here.
    Very probably your comrades and fellow travelers would have answered exactly as you did. Which is exactly what I (and probably others) suspected.

  137.

    Hats off to you, gpuccio, this is a great thread.

    I think a good next step would be to actually use your calculation on the Voynich manuscript.

    I agree that there is enough CSI there to infer design; it would be cool, however, to objectively compare the actual amount in the object with that in the sonnet.

    To do that we would need to move further down the Y axis and look at the arbitrary structure and grammar instead of “good English”. That might be a little more difficult, but I believe it’s still doable.

    Peace

  138. #137 follow-up

    Discussions between people with irreconcilable worldview positions turn into senseless arguments that lead nowhere.

    However, apparently they provide some entertainment, like gladiators and lions provided to the public in the Roman coliseum many years ago. That’s why they have clowns in the circus. Perhaps that increases attendance, traffic and ad revenues.

    There’s also a strong argument for allowing this for the sake of the onlookers/lurkers visiting this blog and also to sharpen the ID arguments.

    I don’t quite agree with some of these arguments, but respect the opinions of others.

    🙂

  139.

    Dionisio said

    That sounds interesting.

    I say

    Thank you, I think it’s way cool too. Right now I come at my calculation in a different way than gpuccio does.

    By graphically comparing an actual data string with a scrambled set of the same data. Then I try to quantify the differences between the two strings.

    You can find the paper that was my inspiration here

    https://www.cs.duke.edu/~conitzer/turingtradeAAMAS09demo.pdf

    peace

  140. fifthmonarchyman:

    “To do that we would need to move further down the Y axis and look at the arbitrary structure and grammar instead of “good English”. That might be a little more difficult, but I believe it’s still doable.”

    Yes, I believe it’s doable. It is not my personal priority, however.

    And thank you for the kind words.

  141.

    I linked the wrong paper. Here is the one I meant

    http://arxiv.org/pdf/1002.4592.pdf

  142. fifthmonarchyman: For someone familiar with this debate “me thinks it’s a weasel” is loaded with meaning. For an average Joe it might take a whole sonnet to pass the threshold. On the other hand, if I were looking at a string of text in Chinese, it might take a string the length of a whole play to pass the test, because I would be looking for mere arbitrary structure and grammar as opposed to English words.

    In other words, it’s subjective.

    fifthmonarchyman: But even in that case I would be able to give the sequence a real objective value and compare it to strings that were the result of combination of algorithmic and random processes

    If different people give entirely different answers, then it’s not objective.

    fifthmonarchyman: produce an algorithm capable of producing a 600-character English text independently with out smuggling information through the back door.

    Evolutionary algorithms require an interface to an environment of some sort. Turns out that Shakespeare also incorporated information from his cultural environment. For instance, the William Shakespeare algorithm included an extensive dictionary, grammar rules, stock phrases, scansion, personality types, history, and so on.

    mullerpr: Natural processes flowing from the uniformity of classical mechanics … Is that really going to be presented as an analogue for Natural Selection?

    You didn’t ask for an analogue for natural selection, but for examples of natural sieves.

  143. Guys:

    As a shameful form of self-promotion, I will try to draw again the attention to the OP and the computation in it.

    To do that, I will repost here what I said in post #51:

    So, if the computation here is correct, a few interesting things ensue:

    1) It is possible to compute the target space, and therefore dFSCI, for specific search spaces by some reasonable, indirect method. Of course, each space should be analyzed with appropriate methods.

    2) Nobody seems to claim that he knows some simple algorithm which can write a passage of 600 characters which has good meaning in English. Where are all those objections about how difficult it is to exclude necessity, and about how that generates circularity, and about how that is bound to generate many false positives?

    The balance at present:

    a) Algorithms proposed to explain Shakespeare’s sonnet (or any other passage of the same length in good English): none.

    b) False positives proposed: none.

    c) True positives found: a lot. For example, all the posts in the last thread that were longer than 600 characters (there were a few).

    3) We have a clear example that functional complexity, at least in the language space, is bound to increase hugely with the increase in length of the string. This is IMO an important result, very intuitive, but now we have a mathematical verification. Moreover, while the above reasoning is about language, I believe that it is possible in principle to demonstrate it also for other functional spaces, like software and proteins.
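
    Point 3 can be checked directly with the same arithmetic as the OP (a Python sketch under the OP’s assumptions; the function name is mine):

        import math

        def dfsci_lower_bound_bits(n_chars, words=200_000):
            # OP-style lower bound for "meaningful English" of n_chars characters
            k = max(1, n_chars // 5)            # approximate word count
            search = n_chars * math.log2(30)
            target = (math.log2(math.comb(words + k - 1, k))
                      + math.lgamma(k + 1) / math.log(2))
            return search - target

        for n in (100, 300, 600, 1200):
            print(n, round(dfsci_lower_bound_bits(n)))
        # The bound grows essentially linearly with length; 600 -> ~831 bits.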

    I would like to spend a few more words on point 1.

    The essence of point 1 is that the computation of a target space can be done by indirect methods, but that we must eagerly look for the best method to do that in each case.

    To those who criticize the approach of Durston and my personal approach to the computation of the target space for functional proteins, I just say: OK, propose your approach. Maybe it will be better. But there is no reason to deny that an interesting problem exists, that we must look for the best solutions, and that the problem has important implications for the problem of the origin of biological information.

    CSI denialism has no real place in science.

  144. fifthmonarchyman @138,

    I think a good next step would be to actually use your calculation on the Voynich manuscript.
    I agree that there is enough CSI there to infer design; it would be cool, however, to objectively compare the actual amount in the object with that in the sonnet.

    I think that is too ambitious and not within the realm of dFSCI/CSI, because you don’t have a standard dictionary database of the Voynich script, no idea of the alphabet probabilities, and no way of checking the result. You need to be an Egyptian hieroglyphs expert to even start deciphering a single word.

    To do that we would need to move further down the Y axis

    What do you mean by moving down the Y axis?

  145.

    hey Zac

    You said,

    If different people give entirely different answers, then it’s not objective.

    I say,

    In one sense I agree with you.

    By objective I mean that my standard is exactly the same for different objects.

    Your standard might be lower or higher on the Y axis than mine, but you should be consistent with yourself when it comes to the X axis.

    Hope that makes sense

    peace

  146.

    Me_Think said,

    What do you mean by moving down the Y axis?

    I say,

    check out comment 81

    peace

  147. gpuccio at #124

    My argument about those two sequences is about their conservation in a complex molecule. You can scarcely deny that those specific sequences are necessary, with that high level of conservation, to the working of ATP synthase in its common form, and especially the form which utilizes H+ gradients.

    The Apicomplexa paper you link describes a very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution. In no way is it in contradiction with the functional specification of the sequences I examined in the traditional ATP synthase complex.

    Another beautiful example of the Texas Sharp-Shooter. You were quite satisfied with your specification of the “ATP synthase”, a nice tight cluster of bullet holes in the wall. Then REC points out a separate cluster of bullet holes, the Alveolata ATP synthase. Immediately you re-define your “ATP synthase” as “ATP synthase in its common form” or “the traditional ATP synthase”, and get out some fresh paint for the recently observed bullet holes, which represent “a very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution.”

  148. GP, busy — doubly so today — but spotted this; in the Intro-summary IOSE, the Voynich Manuscript is featured. Part of the problem is a confusion that design recognition is a universal decoding process, which obviously is highly dubious on computation theory. Just the drawings, as well as the context of being a codex, are enough to show design per manifest FSCO/I. By whom, why, with what possible decoding of the apparent text, are other questions well beyond the core issue of the design inference. Gone, KF

  149. DNA_Jock:

    You know what I think of your “painting” argument.

    With ATP synthase, the problem is different. I chose the alpha and beta subunits of ATP synthase as a good easy example of very high dFSCI, which they are, because they are long sequences with very high conservation and a very clear function in the context of a bigger multi-sequence molecule. As you well know, it is not an isolated example of high functional conservation. I have mentioned also histone H3, which is shorter but even more conserved.

    I have always said clearly that those two sequences are only part of a more complex molecule. The Alveolata ATP synthase is another complex molecule, which uses other sequences. In no way does that mean that the specific sequence of the alpha and beta chains of the common form of ATP synthase is not essential to the functioning of the molecule as it is. If that were not the case, why would those AA positions have been conserved in spite of all possible neutral variation?

    I am not redefining anything. I have always reasoned about the molecular assembly of ATP synthase “in common form”. For the working of that molecular assembly, those two chains are essentially conserved and necessary.

    You must clarify your position: are you denying that it is possible to measure functional complexity, in language as in proteins? Or are you just suggesting a better way to do it?

    If you are in the denialist position, I invite you to explain what is wrong in my reasoning about the Shakespeare sonnet, and then to provide a false positive, or just explain how it is that a wrong reasoning works so well.

  150. KF:

    Thank you! 🙂

  151. Zachriel:

    “Evolutionary algorithms require an interface to an environment of some sort. Turns out that Shakespeare also incorporated information from his cultural environment. For instance, the William Shakespeare algorithm included an extensive dictionary, grammar rules, stock phrases, scansion, personality types, history, and so on.”

    And William Shakespeare included a fine consciousness and sensibility, and much more, which was well beyond the information available to his “algorithm”.

    I must say that a phrase like “the William Shakespeare algorithm”, used by an intelligent person like you (and, I am sure, purposefully), has a strange effect on me. Not really good.

  152. F/N 2: GP thanks, I snatch another quick pause.

    The confinement to English text alone already builds in a whole apparatus of rules, conventions, and structures that are FSCO/I rich, so the estimations you do will be quite conservative, a severe underestimate. I tend to think physically, and so I think in terms of a string register with the possibility of something like zener noise filling it; that defines the point that any of the 128 ASCII codes can appear, and whether or not that is flat random, it is not constrained by the physics at work.

    Thus, this leads to the situation where the real space of possibilities for a register of n seven-bit characters is 128^n. Just 72 ASCII characters would exhaust the resources of the solar system, and 143 those of the observed cosmos, in generating anything more than a very sparse, vanishingly sparse, sample, one we only have reason to expect will snapshot the bulk, not special zones such as text in Elizabethan English.
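
    (Where the 72 and 143 come from, assuming 7 bits per ASCII character against the ~500-bit and ~1000-bit thresholds used in these discussions:)

        # 7 bits per ASCII character:
        print(72 * 7, 143 * 7)  # 504 1001, just past the 500- and 1000-bit thresholds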

    However, the message is still the same: text in such patterns reflects special, separately identifiable characteristics that we have no good reason to expect blind search, whether scattershot or random walk, will ever reasonably produce. Design routinely produces such, though Shakespeare is anything but routine.

    KF

  153. KF:

    I am well aware that mine is “a severe under estimate”, but it seems good enough to disturb our interlocutors a little! 🙂

    And I agree, absolutely, that Shakespeare is “super design”. As are many exceptional proteins whose biochemical efficiency is overwhelming.

    I think we agree that a design inference does not necessarily imply optimal design. But, when we observe optimal design, it’s simple fairness to recognize it.

    Our heartfelt gratitude, then, to Shakespeare and to all the great designers in this world.

  154. #140 fifthmonarchyman

    Very interesting indeed. Thank you for the link to the PDF document that inspired you to work on that project.

    🙂

  155. Zachriel:

    Evolutionary algorithms require an interface to an environment of some sort.

Evolutionary algorithms are examples of intelligently designed evolution. They don’t have anything to do with unguided evolution.

  156. DNA jock- REC cannot explain any ATP synthase. Unguided evolution is incapable of producing them.

  157. Adapa:

    The purpose of a dFSCI calculation is not to convince anyone in the scientific community of its design detection worth.

    This alleged scientific community doesn’t have any methodology that comes close to being as good as CSI and dFSCI. That means their complaints are just whining.

  158. fifthmonarchyman @ 147

    me think said,
    What do you mean by moving down the Y-Axes?
I say, check out comment 81

    What you should do is check the entropy.

  159. GP @ 154

    I am well aware that mine is “a severe under estimate”, but it seems good enough to disturb our interlocutors a little! 🙂
    And I agree, absolutely, that Shakespeare is “super design”.

    How is Shakespeare a ‘Super Design’ ?

  160. gpuccio,

    Your #150:

    DNA_Jock[:]
    You know what I think of your “painting” argument.

    I know you don’t like it. From your attempts to refute it, it appears that you don’t understand it.

    With ATP synthase, the problem is different. I chose the alpha and beta subunits of ATP synthase as a good easy example of very high dFSCI, which they are, because they are long sequences with very high conservation and a very clear function in the context of a bigger multi-sequence molecule. As you well know, it is not an isolated example of high functional conservation. I have mentioned also histone H3, which is shorter but even more conserved.
I have always said clearly that those two sequences are only part of a more complex molecule. The Alveolata ATP synthase is another complex molecule, which uses other sequences. In no way does that mean that the specific sequence of the alpha and beta chains of the common form of ATP synthase is not essential to the functioning of the molecule as it is. If that were not the case, why would those AA positions have been conserved in spite of all possible neutral variation?

    The degree of sequence conservation tells us how tight the peak is at the local optimum. It is rather uninformative about the history of the biochemical activity.

    I am not redefining anything. I have always reasoned about the molecular assembly of ATP synthase “in common form”. For the working of that molecular assembly, those two chains are essentially conserved and necessary.

    Here you show your failure to comprehend the objection. I will try to explain one more time.
    The bullet holes have been in the wall since before any humans existed.
    Along comes John Walker: “Look at this bullet hole I found”. Others find a tight grouping of bullet holes around this one.
    Along comes gpuccio, paints a circle around the bullet holes and calls his circle “the functional specification for ATP synthase”. Does some calculations.
    Along comes Praveen Nina and others, and points to a bullet hole that is a long, long way away from Walker’s tight grouping, but still falls within gpuccio’s original specification “ATP synthase”. His calculations are destroyed.
    In light of Nina et al, gpuccio does two things: 1) He re-draws his circle so that it now excludes Alveolata, and renames the Walker circle “the traditional ATP synthase” 2) he draws a brand-spanking-new circle around the Alveolata bullet hole(s) because it is a “very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution”.
    Mischief managed.
    How can you not see that all of your specifications are post-hoc?

    You must clarify your position: are you denying that it is possible to measure functional complexity, in language as in proteins? Or are you just suggesting a better way to do it?

    I am not denying that it is possible, in either case. I think it is rather difficult, in both cases. I note in passing that the two cases are rather different, hence my lack of interest in the original topic of this thread.

If you are in the denialist position, I invite you to explain what is wrong in my reasoning about the Shakespeare sonnet, and then to provide a false positive, or just explain how it is that a wrong reasoning works so well.

    I am happy to stipulate that “design” can be detected. The problem with all formulations of ID to date is that “design” can be generated by processes of trial-and-error, irrespective of whether any intelligent intervention occurred.
    There is a strange irony to the fact that one of your objections to Keefe & Szostak is that they chose ATP binding. You chose “ATP synthase”, rather than “APP synthase” or an infinite number of other biochemical activities, because it exists. They, at least, have an excuse.

  161. DNA jock:

    The problem with all formulations of ID to date is that “design” can be generated by processes of trial-and-error, irrespective of whether any intelligent intervention occurred.

    Easier said than demonstrated, of course.

  162. Me_Think:

    “How is Shakespeare a ‘Super Design’ ?”

    It was just a personal appreciation from the heart for the quality of his poetry!

  163. DNA_Jock:

    “I know you don’t like it. From your attempts to refute it, it appears that you don’t understand it.”

    Your opinion. I could simply counter that you don’t understand ID.

    “The degree of sequence conservation tells us how tight the peak is at the local optimum. It is rather uninformative about the history of the biochemical activity.”

    And what is informative about that history? Just to understand.

    “The bullet holes have been in the wall since before any humans existed.”

    As were the biochemical activities. Or am I missing something?

    “Along comes John Walker: “Look at this bullet hole I found”. Others find a tight grouping of bullet holes around this one.
    Along comes gpuccio, paints a circle around the bullet holes and calls his circle “the functional specification for ATP synthase”. Does some calculations.
    Along comes Praveen Nina and others, and points to a bullet hole that is a long, long way away from Walker’s tight grouping, but still falls within gpuccio’s original specification “ATP synthase”. His calculations are destroyed.”

Absolutely not. What I find is a complex multi-chain protein which works in a very brilliant way to generate ATP from a proton gradient. What I find is that this protein, in its rotor part, requires a strong conservation of the sequence of two chains. What I find is that the protein is functional and conserved.

My calculations are not destroyed. To infer design, what I need is to find specific information linked to a function. I can redefine the function if necessary, but the concept is that any high level of specific information linked to any explicitly defined function is a mark of design. You seem not to understand that, but it is exactly the reason why I can infer design for the Shakespeare sonnet whether I define the function as being a sonnet in English, or more generically as being a passage in English. In both cases, the linked information is extremely high, even if not the same.

    You seem to forget that our purpose in measuring dFSCI is simply to detect design. I detect design in the sonnet, and I am right. You cannot give a false positive, because my definition of a context which guarantees a correct design inference is right.

    The same is true for ATP synthase. Nobody can deny the high level of specified information which is necessary for the protein to work in that form. As I have said many times, if many other sequences could be enough for the protein to work, neutral variation would have found many of them. It hasn’t.

    The Alveolata protein is another machine, made with different components. Its complexity is probably comparable to the complexity of the traditional protein, but it is another molecule. That’s why it uses other chains, which are different from the chains in the traditional molecule.

    So, let’s say that we have two very different cars, say a small Ford and a Ferrari. They have different carburettors. You cannot mount the Ferrari carburettor in the Ford, and probably they look very different. So you say: “see, they have the same function, but they are very different. That proves that it is very easy to implement the function, any carburettor will do.”

No. The Ferrari carburettor is different and specific, just as the chains in the traditional ATP synthase are different and specific. How do we know that they are specific? Because they are extremely conserved.

So, all your arguments about painting and post-specification are simply wrong. You obfuscate, certainly in good faith, but you obfuscate just the same.

    How can you not see that a phrase like:

    “The bullet holes have been in the wall since before any humans existed.”

    is simply obfuscation?

    If you are saying that the proteins were there, but they did nothing, and started to work as soon as we looked at them, have the courage to say so. It would be a strange application of quantum mechanics to biology, but at least it would be consistent.

    No. The proteins were there, and they did exactly what they do now. The bullet holes and the targets were there since before any humans existed.

Your obfuscation is that you try to confound the methodological problems which legitimately arise when we try to scientifically describe both the bullet holes and the targets with the false argument that we are painting the targets from scratch. The only purpose of this attitude seems to be to discredit a perfectly valid scientific post specification as though it were a logical fallacy, as though any post specification were a fallacy. Which is simply not true.

    You say that I don’t like your argument. It’s true. I don’t like it, because it is wrong and unscientific.

  164. Keith s,
    The more you argue, the less I think of your comprehension skills. Which is why I always stop arguing with you eventually. One last try:

    You get exactly the same answer whether or not you do the calculation, in 100% of the cases

    Exactly…wrong. My “sky is blue” example should have been sufficient, but here’s a longer explanation:
Assume we are trying to detect design in English phrases. We have a computer that is generating a single random phrase, and a person writing a single meaningful sentence. Can we detect which produced the following (e.g. which is designed)?
1) “I” – English word (function); people will agree it could be the computer; dFSCI says unknown; may or may not be designed
2) “SKY IS BLUE” – English phrase (function); looks design-y; people will disagree whether a computer could have kicked it out; dFSCI says unknown; may or may not be designed
3) 600-character Shakespearean sonnet (function); looks design-y; some people will disagree whether a computer could have kicked it out (hard as that may be to believe); dFSCI says MUST BE DESIGNED; must be designed (human wrote it).

    You’re getting hung up because we’re discussing easily-recognizable “designed” objects (words, machines, etc.), where “common sense” leads almost everyone to agree on the answer. The whole point of trying to come up with a valid calculation is so that we can use it on functional things that aren’t human-made and therefore not easily recognizable – life being one of those.
    1) ATP-Synthase/PCD/Flagellum – has function, looks design-y
    2) People will disagree whether it was intelligently designed
    3) Perform dFSCI calculation
    4) Calculation shows that it must be designed
    5) People will disagree whether dFSCI is a valid calculation

Regardless of point 5, your objection that you get the same answer whether or not you perform the calculation (see points 2 and 3 of my example) is flat wrong, and your objection that the calculation is irrelevant is therefore also wrong.
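
In code form, the three-way verdict above looks like this (a minimal Python sketch; the bit values are placeholders for illustration, not measured figures):

UPB = 500  # the 500-bit universal probability bound used as threshold

def dfsci_verdict(functional_bits):
    # Below the threshold the procedure stays silent; above it, it asserts design.
    return "MUST BE DESIGNED" if functional_bits > UPB else "unknown"

print(dfsci_verdict(5))    # "I": unknown
print(dfsci_verdict(50))   # "SKY IS BLUE": unknown
print(dfsci_verdict(673))  # the sonnet: MUST BE DESIGNED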

  165. gpuccio,

    I have an objection that I hope you will find on-topic. The objection is that I’m not sure that English phrases (or any written form of communication), independent of the “information” they convey, are a valid test of the dFSCI.

My “false positive” example is a lottery drawing. Imagine for a moment that you have a lottery that consists of 50 numbers from 1-1000. Your Target Space is 1 (winning number), and your Search Space is approximately 2^500 (obviously, no lotto would do this because no one would ever win – but it could happen). The winning number conveys information – “What is the winning number to the Super-Stupid Lotto!” – and a dFSCI calculation is 500 bits. So dFSCI would say “Yes, designed”, when the winning number was just randomly produced.

    Is the flaw in my logic that, since a human had to “pick” the winning number, it is “designed”? I’m curious where this argument breaks down.

    (My objection would be that the information conveyed (winning lotto #) comes in at less than 500 bits, even though the method of conveying it (50 3-digit numbers) comes in at more. Unfortunately, that would seem to invalidate using the symbology as a valid test, and makes the true calculation difficult(impossible?)).

  166. drc466:

    I am not sure that I understand your example.

    “Imagine for a moment that you have a lottery that consists of 50 numbers from 1-1000.”

    What do you mean exactly? How is the lottery structured?

    “Your Target Space is 1 (winning number),”

    What do you mean? What is the object conveying the information? Or about which you are trying to make the design inference?

    “and your Search Space is approximately 2^500 (obviously, no lotto would do this because no one would ever win – but it could happen).”

    Do you mean that there are 2^500 tickets? And one is extracted? But you said “50 numbers from 1-1000”. Please, explain better.

    “The winning number conveys information – “What is the winning number to the Super-Stupid Lotto!”,”

    So, let’s say that your object is a paper with the winning number?

“and a dFSCI calculation is 500 bits.”

In what sense? That is true only if you define a random system as a method to predict (or guess) the winning number. Obtaining any pre-specified number out of 2^500 by a random search is indeed almost magic.

    So, let’s say that you have a random number generator which gives you a number in one attempt, and you say: this number tomorrow will win the lottery. And then it happens. Many would be suspicious…

    Perhaps I understand what you mean. In a miraculous pre-announcement of the winning number, the unexplained dFSCI is not in the number itself (which is a simple piece of information), but in the system which chooses it as the future winner. The dFSCI is in the system.

    So, the two hypotheses are:

    a) You and the system you use have been extremely lucky (but try to convince the judges)

    b) The system is designed (IOWs, you fixed the lottery so that you could announce in advance the winner).

    The design here is not in the number, but in the system. It is not the number itself, or its sequence, which brings the information.

    Is that what you meant?

  167. “The degree of sequence conservation tells us how tight the peak is at the local optimum. It is rather uninformative about the history of the biochemical activity.”

    And what is informative about that history? Just to understand.

    Bottom-up studies, such as Keefe and related work. Sadly, you have some strange ideological resistance to these studies, perhaps related to the results they provide.

    The same is true for ATP synthase. Nobody can deny the high level of specified information which is necessary for the protein to work in that form.

    The issue is with your inclusion of the word “specified” here.

    As I have said many times, if many other sequences could be enough for the protein to work, neutral variation would have found many of them. It hasn’t.

    NOOOOO. This is terribly wrong, and perhaps at the root of your inability to see the problem. Neutral variation will explore the width of that one local optimum. The extent of neutral variation says NOTHING about whether there are other local peaks, either nearby, or far away (Nina’s bullet holes), or whether there is an even higher peak in the region (see REC’s citations on Rubisco).

    How can you not see that a phrase like:
    “The bullet holes have been in the wall since before any humans existed.”
    is simply obfuscation?
    If you are saying that the proteins were there, but they did nothing, and started to work as soon as we looked at them, have the courage to say so.

    How can you not see that in this analogy the bullet hole represents the biochemical activity?

    No. The proteins were there, and they did exactly what they do now. The bullet holes and the targets were there since before any humans existed.

    Here we see the fallacy in its distilled form. Your first sentence is correct. The bullet holes have been there since before humans existed. The PAINT is the human artefact. And how you paint the circles depends on which bullet holes you have discovered to date. As you have inadvertently demonstrated on this thread. Yes ATP was getting synthesized before humans existed, but the specification “ATP synthase” was generated by humans AFTER the biochemical activity was delineated. And re-defined by you in the light of Nina’s work.

    the only purpose of this attitude seems to be to discredit a perfectly valid scientific post specification as though it were a logical fallacy, as though any post specification were a fallacy. Which is simply not true.

    Well, I have yet to see an IDist come up with a post-specification that wasn’t a fallacy. Let’s just say that you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.

  168. DNA_Jock:

    “NOOOOO. This is terribly wrong, and perhaps at the root of your inability to see the problem. Neutral variation will explore the width of that one local optimum. The extent of neutral variation says NOTHING about whether there are other local peaks, either nearby, or far away (Nina’s bullet holes), or whether there is an even higher peak in the region (see REC’s citations on Rubisco).”

Are you really saying that if I have what you call “a local optimum” which has functional specificity of 1600 bits, and there are a few other distant local optimums for the same function (distant, because they are never found by variation from our first local optimum in billions of years), something changes? How many “local optimums” for ATP synthases do you imagine exist? 2^1000? Is that what you are suggesting?

    “How can you not see that in this analogy the bullet hole represents the biochemical activity?”

    Strange. In my discourse, like in all the discourses about the fallacy, the bullet hole is the result of a random act, and the target gives meaning to it (see also Dembski).

    So, excuse me, but the bullet hole is some sequence we observe, and the biochemical activity is the target. IOWs, the bullet hole of variation has hit the target of the biochemical activity.

    The biochemical activity is a function. I can define it in different ways. No problem. The point is that if I need a lot of specific bits to implement that function, that function is complex.

    But there are other complex functions. And so?

As I have said, there are sonnets in many languages. Does that invalidate my design inference for a sonnet in English? If it is so, why can nobody provide a false positive for the target I painted on the sonnet?

    “Here we see the fallacy in its distilled form. Your first sentence is correct. The bullet holes have been there since before humans existed. The PAINT is the human artefact. And how you paint the circles depends on which bullet holes you have discovered to date. As you have inadvertently demonstrated on this thread. Yes ATP was getting synthesized before humans existed, but the specification “ATP synthase” was generated by humans AFTER the biochemical activity was delineated. And re-defined by you in the light of Nina’s work.”

    And all that is completely irrelevant. My point is that any specification which is complex enough is a marker of design. It’s you who do not understand my point.

    “Well, I have yet to see an IDist come up with a post-specification that wasn’t a fallacy. Let’s just say that you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.”

    What do you mean? My specification of the Shakespeare sonnet is a post-specification. If it is a fallacy, how is it that it works so well?

There is no way to make a function for a sequence “arbitrarily precise”, if I keep the functional specification independent from the sequence itself. IOWs, when I specify the sonnet as a passage with good meaning in English, I am not saying “any sonnet with the following characters in this order”. I am defining a partition which is independent from the specific character sequence. Even the reference to a language, English, has nothing to do with the specific characters in the sequence. Those same characters can be used in many other languages, or in any string without any meaning.

    The reference to meaning is a direct reference to a conscious experience, and the reference to English is a reference which is independent from the system of a random character generator. Therefore, my specification works.

    In the same way, the sequence of nucleotides in a protein coding gene, as transformed by RV, is completely independent from function and from the protein space. So, there is no way that I can narrow my definitions so that I can make any results of a random search more likely.

    As said many times, NS is another matter. As there is no algorithm which can explain a complex sonnet, there is no algorithm which can explain a complex function. But that is another part of the reasoning.

169. Amazing: to the materialist, nothing can do anything and everything…. it can search spaces, decide on the best solution, reverse engineer, problem-solve, build things, create CSI. Nothing is truly a miracle worker; it can do everything.

    All praise nothing!!!

  170. gpuccio

    As said many times, NS is another matter. As there is no algorithm which can explain a complex sonnet, there is no algorithm which can explain a complex function. But that is another part of the reasoning

    Amazing that you’ve never heard of fractals or the Mandelbrot set. There is even evidence that the early multicellular life forms in the Ediacaran grew with a fractal format.

    Fractal branching organizations of Ediacaran rangeomorph fronds reveal a lost Proterozoic body plan
    Cuthill, Morris
    PNAS September 9, 2014 vol. 111 no. 36

    Summary: Rangeomorph fronds characterize the late Ediacaran Period (575–541 Ma), representing some of the earliest large organisms. As such, they offer key insights into the early evolution of multicellular eukaryotes. However, their extraordinary branching morphology differs from all other organisms and has proved highly enigmatic. Here we provide a unified mathematical model of rangeomorph branching, allowing us to reconstruct 3D morphologies of 11 taxa and measure their functional properties. This reveals an adaptive radiation of fractal morphologies which maximized body surface area, consistent with diffusive nutrient uptake (osmotrophy). Rangeomorphs were adaptively optimal for the low-competition, high-nutrient conditions of Ediacaran oceans. With the Cambrian explosion in animal diversity (from 541 Ma), fundamental changes in ecological and geochemical conditions led to their extinction.

    Simple iterative processes that produce great complexity (and gobs of CSI / dFSCI / FIASCO). Whoda thunk? 🙂
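
To see the point about simple iterative rules concretely, here is a minimal Python sketch (an illustration, not taken from the paper cited above): the Mandelbrot map z -> z*z + c is a one-line rule, yet the boundary it traces is endlessly intricate.

def escapes(c, max_iter=30):
    # Iterate z -> z*z + c from z = 0; points that never escape are in the set.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # provably escapes to infinity
            return True
    return False

# Coarse ASCII rendering of the region [-2, 1] x [-1.2, 1.2]
for im in range(12, -13, -2):
    print("".join(" " if escapes(complex(re / 20, im / 10)) else "*"
                  for re in range(-40, 21)))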

  171. gpuccio: And William Shakespeare included a fine consciousness and sensibility, and much more, which was well beyond the information available to his “algorithm”.

    In any case, William Shakespeare had access to a huge amount of preexisting data, such as a mental dictionary, not to mention a huge amount of experience in Elizabethan society. But you want an algorithm to come up with sonnets without any input as to what sells theater tickets.

  172. gpuccio, what is your opinion that a greater degree of CSI must be present before an ever increasing amount of CSI/dFCSI can be produced?

  173. Gpuccio,

    “NOOOOO. This is terribly wrong, and perhaps at the root of your inability to see the problem. Neutral variation will explore the width of that one local optimum. The extent of neutral variation says NOTHING about whether there are other local peaks, either nearby, or far away (Nina’s bullet holes), or whether there is an even higher peak in the region (see REC’s citations on Rubisco).”

Are you really saying that if I have what you call “a local optimum” which has functional specificity of 1600 bits, and there are a few other distant local optimums for the same function (distant, because they are never found by variation from our first local optimum in billions of years), something changes? How many “local optimums” for ATP synthases do you imagine exist? 2^1000? Is that what you are suggesting?

    Well your definition of “distant” is wrong. Tough to say which local optima have never been found, when all we have to go on is the ones that survived. 2^1000 seems possible, but the number could be a lot higher.

    “How can you not see that in this analogy the bullet hole represents the biochemical activity?”

    Strange. In my discourse, like in all the discourses about the fallacy, the bullet hole is the result of a random act, and the target gives meaning to it (see also Dembski).

    And this discourse is no different. 🙂
    Is it beginning to dawn on you yet?

    So, excuse me, but the bullet hole is some sequence we observe, and the biochemical activity is the target. IOWs, the bullet hole of variation has hit the target of the biochemical activity.

    No. You seem unable to grasp the difference between the biochemical activity (which predates humans) and the specification, which is the human paintjob.

    My point is that any specification which is complex enough is a marker of design. It’s you who do not understand my point.

    Well I agree with you that any specification which is complex enough is a marker of intelligence. But you are trying to claim that an object that meets a sufficiently complex specification must be designed. When the specification is written post-hoc, that is just plain silly.

There is no way to make a function for a sequence “arbitrarily precise”, if I keep the functional specification independent from the sequence itself. IOWs, when I specify the sonnet as a passage with good meaning in English, I am not saying “any sonnet with the following characters in this order”

    I retained your bit about sonnets here, since it helps clarify your intended meaning. You are claiming that, so long as I stay away from specifying the protein sequence, there is no way for me to make the specification for ATP synthase arbitrarily precise. Here goes:
    ATP synthase having
    Km for Mg.ATP between 0.9e-4 and 1.1e-4
    Ki for ADP between 2.8e-4 and 3.1e-4
    Ks for Mg2+ having the following pH dependence:
    pH Ks
    7.2 1e-4
    7.3 0.9e-4
    7.4 0.6e-4
    7.5 0.4e-4
    7.6 0.2e-4
    These values at 25 C in 0.1M KCl.
At 0.11M KCl, the values should be……should I go on?
    Or I could add some stuff about the rate at which Mg2+ and ADP cause the inactivation of the enzyme
    Or temperature-dependence.
I haven’t even mentioned the kcat’s
The simple fact of the matter is that you, personally, have been caught re-writing your specification in order to retain the “specialness” of what you now term the “traditional ATP synthase”.

  174. Zachriel,

No one expects the algorithm to write a sonnet. That’s the point. A sonnet is an act of intelligence. No one expects an algorithm to randomly generate a sonnet any more than anyone expects a solar-powered muddy bog to randomly generate much more complicated life.

175. DNA_Jock:

    Now I have a little more time, and I can answer you better.

    Let’s try this way.

    You say:

    “How can you not see that in this analogy the bullet hole represents the biochemical activity?”

    And:

    “You can make the probability arbitrarily small by making the specification arbitrarily precise.”

    And you use those concepts, and others, to criticize any post-specification as a logical fallacy.

    Now, to remain concrete, let’s apply these concepts to my sonnet example.

    Let’s take a Shakespeare sonnet of about 600 characters, which I find somewhere, and which I don’t know in advance, and don’t know is Shakespeare’s. Let’s say that I don’t know anything about it, and that for what I know of its origin it could be a random string.

Now, the sonnet, being indeed Shakespeare’s, existed before my arrival. I am sorry to disappoint some of my readers, but I am not so old.

    Therefore, any consideration I can make on the sonnet is a post-specification.

    Now, I observe three things:

    a) The sequence has a good meaning in English, which I perfectly understand (and which I immediately like, but this is not relevant).

    b) It is, indeed, an English composition in rhymed verse.

    c) It is, indeed, a sonnet (specific verse structure).

    Now, I take each of these things as specification, in turn, and compute the dFSCI accordingly. So, we have three different post-specifications, and three different computations.

For the first specification, I obtain functional information of at least 673 bits (I am accepting Roy’s proposal), certainly vastly underestimated.

Now, I don’t want to delve into the target space of rhymed verse and of sonnets, so let’s just imagine the other two results. It will be enough for my reasoning. We have already ascertained that there can be ways to compute those numbers indirectly, at least as lower thresholds of complexity.

I think we can agree that the target space for b) is smaller than for a), and for c) it is smaller than for b).

    So, let’s say that b) has a lower threshold of complexity of 1000 bits, and c) of 1500. Just to discuss.

So, according to a general UPB of 500 bits, and being aware of no algorithm (especially non-designed) which can write sonnets any more than English text, I can safely infer design for the object, according to my procedure, with all three different analyses.

    OK.
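
For concreteness, the arithmetic just described can be written out as a short Python sketch (the 2944-bit search space and the 673-bit figure are from this thread; the 1000- and 1500-bit values are the illustrative guesses above, not measurements):

SEARCH_BITS = 2944  # log2(30**600): 600 characters from a 30-symbol alphabet
UPB = 500           # the universal probability bound used as threshold

# dFSCI in bits = -log2(target/search) = search bits minus target bits
for spec, functional_bits in [("a) good meaning in English", 673),
                              ("b) rhymed verse (assumed)", 1000),
                              ("c) sonnet form (assumed)", 1500)]:
    target_bits = SEARCH_BITS - functional_bits  # implied size of the target space
    verdict = "infer design" if functional_bits > UPB else "no inference"
    print(spec, "->", functional_bits, "bits:", verdict)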

    Now, your concepts. According to your views, none of the three specifications is valid. All of them are post-specifications.

Moreover, you say that the sonnet itself with its functionalities is the bullet hole. OK.

    So, when I arrive and say: “This is a passage with good meaning in English” I am painting an arbitrary target around the object. Is that your idea?

    There is more. When I say: this is an English composition in rhymed verse, according to your concepts, I am again painting an arbitrary target, only this time I am probably trying to “make the probability of the object arbitrarily small by making the specification arbitrarily precise”. A big fallacy, indeed.

    But I am not satisfied. So I pass to c). Again, I am painting an arbitrary target, and again I am trying to “make the probability of the object arbitrarily small by making the specification arbitrarily precise”. What a devious thinker I am!

    Now, we have a problem. Never satisfied, I still want to go on in “making the probability of the object arbitrarily small by making the specification arbitrarily precise”. But I cannot use the real bits in the object, because you have already warned me that, if I do that, I am doomed. And even I, the treacherous pseudo-scientist, know that there are limits that are best left alone.

    So, I am rather at an impasse. Without using the specifics of the sequence (what rhymes it contains, how many vowels, and so on), it becomes difficult. OK, I have probably one or two options left. I could define the verse (iambic pentameter?). Maybe something else. But how long can I make the probability arbitrarily small by making the specification arbitrarily precise?

    The point is, up to now I have only described in my specifications real properties of the sonnet. I have invented nothing. OK, I have used different levels of detail, but each one of them was correct. From now on, I should probably invent things that are not there.

    I don’t really feel that I am the “arbiter” of this situation!

    OK, maybe I will be satisfied with my triple and correct design inference. After all, you will criticize me anyway! 🙂

Ah, and I must really have been born lucky. Nobody has yet offered any false positive to my fallacious procedure completely based on post-specifications.

  176. Zachriel:

    “In any case, William Shakespeare had access to a huge amount of preexisting data, such as a mental dictionary, not to mention a huge amount of experience in Elizabethan society. But you want an algorithm to come up with sonnets without any input as to what sells theater tickets.”

    The only thing I want is to infer that original sonnets are generated by conscious beings, and not by algorithms.

  177. computerist:

    “gpuccio, what is your opinion that a greater degree of CSI must be present before an ever increasing amount of CSI/dFCSI can be produced?”

    I am not sure what you mean. Can you explain better? Thank you.

  178. gpuccio,

    Yeah, I’m not entirely sure my example makes sense, so explaining it is a bit difficult. Let me try again.

    So, taking our hypothetical lotto of 50 numbers from 1 to 1000, we use a random # generator to generate the single winning number:
    001 050 888 273 652 … 763 299 055 (50 total #’s)
    Now admittedly this is no Shakespearean sonnet, but it does have meaning, or function, or whatever – it is now the winning number to a lottery.
    To determine whether this Lottorean sonnet was designed, we calculate the target space and search space.
    Target Space: 1 (there are no other numbers/sequences that will win)
Search Space: 1000^50, which is 10^150, or approximately 2^500.
    Calculation: 2^0 / 2^500 = 2^-500, or 500 bits.
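
As a quick check of that figure (a minimal Python sketch; the exact value comes out a shade under 2^500):

import math

# 50 independent draws, each from 1..1000, give 1000**50 possibilities.
search_bits = 50 * math.log2(1000)  # log2(1000**50)
print(round(search_bits, 1))  # 498.3, i.e. a search space of about 2^498

# With a target space of exactly 1 winning sequence, the ratio
# 2^0 / 2^498 corresponds to roughly 498 bits.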

    So the question becomes: is this a false positive because the 50 #’s were randomly chosen and not “designed”, per se – or is this a valid positive because the Lotto was designed, and it is only the Lotto that gives meaning to the number sequence – or is this just a really bad example that fails either way?

    I’m thinking that the 2nd answer is correct, but I’m struggling with the rationalization somewhat. Hoping you or someone can help. If it still doesn’t make sense, don’t worry about it, it’s not hugely important.

  179. gpuccio: The only thing I want is to infer that original sonnets are generated by conscious beings, and not by algorithms.

    You haven’t inferred it, but intuited it, while providing substantially different conditions for Shakespeare and an algorithm in terms of background information.

  180. Moreover, if positive NS had had some role in generating the biological functional information, we should see tons of traces of naturally selectable functional intermediates in the proteome. We don’t.

    H’uh? Why would you expect this, and why in the proteome in particular?

  181. I should add, there is abundant evidence for the role of positive natural selection in protein evolution. Ka/Ks (=dN/dS) ratios being the classic example, but there are many more methods to detect such.
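
For readers unfamiliar with the method, here is a minimal sketch of the counting version of the Ka/Ks idea (the counts below are hypothetical, and the sketch omits the multiple-hit corrections that real estimators, e.g. likelihood methods in PAML, apply):

def ka_ks(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    # Substitutions per site in each class; no correction applied here.
    ka = nonsyn_subs / nonsyn_sites
    ks = syn_subs / syn_sites
    return ka / ks

omega = ka_ks(nonsyn_subs=30, nonsyn_sites=600, syn_subs=10, syn_sites=200)
print(omega)  # 1.0 here; omega > 1 suggests positive selection,
              # omega < 1 purifying selection, omega near 1 neutrality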

182. gpuccio,

    As I commented at 161, “I note in passing that the two cases [i.e. proteins and sonnets] are rather different, hence my lack of interest in the original topic of this thread.”
But you seem like a nice, if misguided, guy, so I’ll play along.

    It is your choice of specifications that is arbitrary.
Why on earth did you not test to see if it was a limerick? Or a composition by an XX-year-old student from state YY? (500 alternative specifications right there). There are millions of different specifications against which you could test your sonnet. You chose “sonnet”, after the fact, because it looked like a sonnet. This is fine if all you are interested in is whether it is a sonnet or not. Of course, in THAT case, the math becomes superfluous. I believe this has been pointed out to you. But if you wish to calculate the probability of some other process leading to text that meets your specification, then the choice of specification matters. As you demonstrate in your response above. A > B >> C. For statistical purposes, ALL post-hoc specifications are suspect. That is why the FDA and EMA, for example, do not allow them.

For some strange reason it amuses me that your specifications do not nest. Not that it matters one iota (since I don’t buy the analogy anyway), but Jabberwocky fails A, but meets B. Maybe it’s just the fuzziness of specification A that cracks me up: “a good meaning in English”. Say what?

183.

    Adapa said,

    Amazing that you’ve never heard of fractals or the Mandelbrot set. There is even evidence that the early multicellular life forms in the Ediacaran grew with a fractal format.

    I say,

    Funny you should bring up fractals. I have spent a lot of time thinking about fractals and how they relate to CSI.

Here is a paper that argues that true fractals cannot even exist in nature. Check it out

    http://www.academia.edu/703089....._of_nature

    Peace

184.

    Zac said,

    You haven’t inferred it, but intuited it, while providing substantially different conditions for Shakespeare and an algorithm in terms of background information.

    I say,

    This is a good point. Exactly how much of a sonnet is new CSI and how much is borrowed from the background is a great question.

However I’m pretty sure that most folks would say that at least a small amount of Shakespeare’s work was original, as opposed to borrowed from his environment.

    Do you agree with this statement?

    I think there is a way in principle to determine if there is anything original in the particular sequences I’m messing around with.

I isolate the sequence completely from its context and look at it as just a series of numerical values.

If an algorithm can duplicate the pattern by any means whatsoever, as long as it is independent of the source string, then I discount the originality of the string.

    It seems to be working so far

    Peace

  185. keiths, regarding gpuccio’s English language test procedure:

    You get exactly the same answer whether or not you do the calculation, in 100% of the cases. Why waste time on a calculation that adds no value whatsoever?

    drc466:

    Exactly…wrong. My “sky is blue” example should have been sufficient, but here’s a longer explanation:

    Your “sky is blue” example flunks step 1 of the procedure:

    1. Look at a comment longer than 600 characters.

    Next!

  186. DNA_Jock:

    “ALL post-hoc specifications are suspect.”

    Except when they work perfectly.

Design detection is based on the identification of extremely small target spaces. That’s what makes the specification empirically perfectly valid, exactly the same reason why the second law of thermodynamics works, and you never observe ordered states in a gas configuration.

    IOWs, we are not trying to sell a drug at the 0.05 threshold of alpha error. I am afraid that you are completely missing the point.

    And it’s not that I test the sonnet for being a sonnet. I observe that it is a sonnet, and I test how likely it is to have a sonnet of that length in a random system.

The example of the limerick is the same as saying that I should consider also the probability of Chinese poems. As I have explained, at those levels of improbability those considerations are simply irrelevant.

    IOWs, where is the false positive?

    My statement has always been simple: the procedure works empirically, as it is, with 100% specificity. In spite of all your attempts to prove differently.

    Then, if your point is simply to say that the space of proteins is different from the space of language, that is another discussion, which we have already done and that we certainly will do again. But it has nothing to do with logical fallacies, painting targets, and making the probability arbitrarily small. IOWs with the methodology. IOWs, with all the wrong arguments that you have attempted against the general procedure.

  187. fifthmonarchyman:

    Very interesting. Keep us updated.

    I think that algorithms are extremely limited in power. They cannot generate anything really original, because they have no awareness of either meaning or purpose.

    Their great power is simply computational. In that, they can operate miracles. But computation is a deductive activity and, even if supported by external inputs, can never understand anything which was not coded in its programs, or conceive of any original function which is not implicit in its premises.

    Penrose’s argument is very powerful on those points. And this paper is very interesting too:

    http://www.blythinstitute.org/.....tlett1.pdf

  188. wd400:

    “H’uh? Why would you expect this, and why in the proteome in particular?”

    Because each intermediate which is positively selected expands in a population. How can you explain that thousands or millions of expanded functional intermediates have left no trace in the proteome?

  189. keith s

    Your “sky is blue” example flunks step 1 of the procedure:

    1. Look at a comment longer than 600 characters.

    Next!

    Ah. So your logic only holds for more than 600 characters, then. So, you’re dismissing an entire process based on the specific circumstance of “length GE 600”. Would you like to then also provide us the correct logic for 599 characters? 437 characters? 53 characters? If gpuccio had chosen a different, shorter length, would you have had the same objection? Or do you have something specifically against the number “600”? Are you admitting that gpuccio’s calculation would be a valid exercise for length = 10 (“sky is blue”)?

  190. drc466:

    No. It is not a false positive. It is not a positive at all.

    In all cases where we use the sequence to specify itself, no post-specification is valid. I have just discussed these things with DNA_Jock.

    The sequence in this case bears no functional information: it is simply extracted. After the extraction, that sequence becomes “the ticket which wins the lottery”. But any random sequence extracted would have become that. So, the sequence has no functional specificity.

I will try to be clearer with another example. Let’s say that I generate a long random sequence, and after that I set it as my safe’s password. Again, we have no functional information here in the origin of the sequence. Any random sequence can be used as a password, so the probability of generating a random sequence which we can afterwards use as a password is 1.

    The functional specification must be given independently, not using the real bits of the sequence after it has been generated.

This is the same error which was made by Mark Frank, when he tried to offer a false positive. Of course, I would never make a design inference for a random number which has been generated randomly and afterwards used to specify a function which had no relationship with its sequence before.

191. I guess you don’t mean the proteome as it’s usually used, to mean the set of proteins in a cell, organism or species. But maybe all the proteins that exist?

In any case, you wouldn’t expect to see intermediates if there were a set of paths from A -> B -> … -> X, because intermediates will be replaced by more favoured variants. The branching nature of the evolutionary process creates gaps in extant species/proteins/genes.

  192. Guys, I am really happy. When I wrote this OP, my main worry was if my computation was correct. After almost 200 posts, it seems that nobody has found any real error in it (except for an absolutely due correction of a material error). That’s good.

    At least, next time someone makes the old criticism: “you have never really calculated dFSCI”, I can link this thread. 🙂

  193. wd400:

    Yes, I meant the general proteome.

    I expected your answer. It is the same that someone (maybe Joe Felsenstein) gave me time ago, at TSZ I think: “they have been eaten!”

I find this answer completely unsatisfying (and this is a euphemism). The point is: no trace at all? I can accept gaps, but not such a universality of gaps.

Remember, I am speaking of the basic functional structures, folds, superfamilies, families. In a world where the alpha and beta chains of ATP synthase remain so conserved for billions of years, and still so many intelligent neo-darwinists doubt their specific functionality, it’s strange to believe that thousands of necessary functional intermediates, each of which contributed to the process of NS by being positively expanded, IOWs being at least for some time the winner, left no trace at all in natural history.

194.

    Hey gpuccio,

    I have been devouring the paper you linked since you shared it with Zac. It is very interesting.

    I agree with your conclusions about the limitations of algorithms. I think Penrose’s argument has yet to peculate down to Darwinists.

    In fact I’m not sure the majority of critics have fully grasped that Darwinian evolution is simply an algorithm and is fully subject to any and all the mathematical limitations thereof.

    I will again share my tee shirt equation

    CSI=NCF

    in plain English

    Complex specified information is not computable.

    How cool is that?

    peace

  195. gpuccio:
    At least, next time someone makes the old criticism: “you have never really calculated dFSCI”, I can link this thread. 🙂

    But then you have the problem of explaining why the number you calculated isn’t completely useless. 🙁

  196. FMM:

    I think Penrose’s argument has yet to peculate down to Darwinists.

    I think you meant ‘percolate’.

  197. gpuccio @ 176

So, according to a general UPB of 500 bits, and being aware of no algorithm (especially non-designed) which can write sonnets any more than English text, I can safely infer design for the object,

If you also have to check for algorithms which can write sonnets, why bother with dFSCI calculations? You could see if you are aware of algorithms which can produce sonnets or whatever you are examining, and if there are none, you can infer design. Why calculate dFSCI?

  198. gpuccio:

    In all cases where we use the sequence to specify itself, no post-specification is valid.

    Sounds like a new rule that you didn’t include in your original procedure. It’s a longstanding bad habit of yours to keep changing your argument in the middle of discussion without acknowledging that you are changing it.

    Also, drc466 need not use the sequence to specify itself. He can prespecify the target as “the winning numbers for this lottery, whatever they turn out to be.” You know the size of the target, and you know the size of the search space. The ratio is tiny. You’ll get a false positive.

  199. fifthmonarchyman @ 195

    In fact I’m not sure the majority of critics have fully grasped that Darwinian evolution is simply an algorithm and is fully subject to any and all the mathematical limitations thereof.

I don’t think Nature cares if a person says it is restricted from doing something because he has a flawed algorithm that restricts it from doing something.

200.

    Keiths,

    Spell checking is something that algorithms are quite good at so my poor spelling is actually evidence that I’m not an algorithm 😉

    Me think said

    If you also have to check for algorithms which can write sonnets, why bother with dFSCI calculations?

    I say,

When we are checking for algorithms we are testing the claim that CSI/dFSCI is not computable

    If it is not computable in the case of sonnets we can be assured it is not computable in the case of other things like proteins

    get it

    peace

  201. gpuccio, I just don’t understand what you’re trying to prove. All I see is you claiming that some English text that is obviously designed or already known to be designed is designed. How does that demonstrate that IDists can calculate, measure, or compute (Which is the correct term?) CSI, dFSCI, FSCO/I, or FIASCO, and can verify the intelligent design in or of things that are not obviously designed and not known to be designed? And how does what you’re doing establish that CSI, and dFSCI, and FSCO/I are anything other than superficial labels?

    In regard to English text, what can you tell me about the text below? Is it a sonnet, or what? Does it have meaning? Does it have good meaning? If it has meaning or good meaning, what is it? Was it generated by a conscious being, or by an algorithm? How much CSI, and dFSCI, and FSCO/I does it have? Show your work.

    O me, and in the mountain tops with white
    After you want to render more than the zero
    Counterfeit: o thou media love and bullets
    She keeps thee, nor out the very dungeon

    Their end. O you were but though in the dead,
    Even there is best is the marriage of thee
    Brass eternal numbers visual trust of ships
    Masonry, at the perfumed left. Pity

    The other place with vilest worms, or wealth
    Brings. When my love looks be vile world outside
    Newspaper. And this sin they left me first last
    Created; that the vulgar paper tomorrow blooms

    More rich in a several plot, either by guile
    Addition me, have some good thoughts today
    Other give the ear confounds him, deliver’d
    From hands to be well gently bill, and wilt

    Is’t but what need’st thou art as a devil
    To your poem life, being both moon will be dark
    Thy beauty’s rose looks fair imperfect shade,
    ‘you, thou belied with cut from limits far behind

    Look strange shadows doth live. Why didst thou before
    Was true your self cut out the orient when sick
    As newspaper taught of this madding fever!
    Love’s picture then in happy are but never blue

    No leisure gave eyes against original lie
    Far a greater the injuries that which dies
    Wit, since sweets dost deceive and where is bent
    My mind can be so, as soon to dote. If.

    Which, will be thy noon: ah! Let makes up remembrance
    What silent thought itself so, for every one
    Eye an adjunct pleasure unit inconstant
    Stay makes summer’s distillation left me in tears

    Lambs might think the rich in his thoughts
    Might think my sovereign, even so gazed upon
    On a form and bring forth quickly in night
    Her account I not from this title is ending

    My bewailed guilt should example where cast
    Beauty’s brow; and by unions married to frogs
    Kiss the vulgar paper to speak, and wail
    Thee, and hang on her wish sensibility green

  202. FMM,

    Spell checking is not the problem. ‘Peculate’ is a word, it’s just not the right word.

203.

    Me think said,

I don’t think Nature cares if a person says it is restricted from doing something because he has a flawed algorithm that restricts it from doing something.

    I say,

    It’s not about a particular algorithm. It is about the inherent limitations of all algorithms.

There are some things that algorithms simply cannot do by definition. Surely you understand this.

    peace

204. Who doubts the “specific functionality” of ATPase? Of course proteins are confined to a small portion of the space of all sequences; I can’t imagine an evolutionary biologist who would disagree.

As for the rest: this is the croco-duck mistake exported to proteins.

  205. drc466:

    Ah. So your logic only holds for more than 600 characters, then.

    That was gpuccio’s stipulation, not mine. Take it up with him if you don’t like it.

    Are you admitting that gpuccio’s calculation would be a valid exercise for length = 10 (“sky is blue”)?

    No, gpuccio’s calculation is useless for any length, because it answers the wrong question: “Could this sequence have arisen by pure random variation?”

    The correct question is “Could this sequence have been produced by random variation plus selection, or some other ‘material mechanism’?”

    (I’m giving gpuccio a pass on the fact that there is always an algorithm that can produce any finite sequence, regardless of what it is. He’s having enough trouble defending dFSCI as it is.)

  206. I should qualify that. The second question isn’t really the correct question either, because it assumes a specific target, but it was the question that Dembski was trying to answer with CSI.

    Gpuccio has taken a fatally flawed concept — CSI — and made it even worse.

  207. fifthmonarchyman @ 201,

When we are checking for algorithms we are testing the claim that CSI/dFSCI is not computable

    If it is not computable in the case of sonnets we can be assured it is not computable in the case of other things like proteins

How does checking an algorithm’s availability help you decide whether sonnets or proteins are amenable to CSI/dFSCI computation?

208. “seems that nobody has found any real error in it”

    I think the summary of errors is as follows (besides that a random search of all sequence space is not the evolutionary hypothesis).

1) Conservation does not correlate with the percentage of sequence space that is functional (see the Rubisco example: all plants, poor enzyme, human design circumventing local optima). ID simply invokes this contra empirical data.

2) You specify the specification (sequence conservation) that you state correlates with function, while considering functional specification…what a cluster.

When presented with alternatives in sequence space (and not just any way of making ATP synthase: a proton-transporting, membrane-bound rotary synthase) of little homology, you declare them an independent design! Isn’t the point what percent of sequence space is functional, and would be found in a search?

    3) Granting your own methodology, you cheat at it. You selected three related sequences for an ATP synthase subunit and aligned them, then declared shared residues necessary.

    I repeated the process with all 23949 F1-alpha ATP synthase sequences. No F1-alternates. No V-ATPases or N- or other odd ones that can perform the same function.

    100% conserved residues: 0

    So using your method, no CSI???? hmmm…..

    maybe the database is off….few oddballs.

    98% conserved…..12 residues. (and there are some clear substitutions in otherwise aligned sequences).

So maybe next time, try more than .01% of known sequences in defining function/conservation in sequence space.

    Try it yourself:

    http://www.ebi.ac.uk/interpro/.....?start=580

    http://mobyle.pasteur.fr/
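
For anyone who wants to repeat the tally, here is a minimal Python sketch of the column-conservation count described above, assuming the sequences have already been aligned to equal length (file handling omitted; the names are placeholders):

from collections import Counter

def conserved_columns(seqs, threshold=0.98):
    # Count alignment columns where a single residue (not a gap)
    # reaches the given frequency threshold.
    n = len(seqs)
    count = 0
    for column in zip(*seqs):
        residue, freq = Counter(column).most_common(1)[0]
        if residue != "-" and freq / n >= threshold:
            count += 1
    return count

# seqs = list of aligned, equal-length sequences read from the databases above
# print(conserved_columns(seqs, 1.00), conserved_columns(seqs, 0.98))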

209.

    Me_Think says

How does checking an algorithm’s availability help you decide whether sonnets or proteins are amenable to CSI/dFSCI computation?

    I say,

    We are looking for false positives.

The harder it is to produce a false positive in an easy test like a “good English” text string, the more confident we can be that false positives are beyond the reach of algorithms in more difficult cases.

    peace

210.

    keith’s said,

    I’m giving gpuccio a pass on the fact that there is always an algorithm that can produce any finite sequence, regardless of what it is.

    I say,

    check it out

    http://en.wikipedia.org/wiki/Computable_number

    and

    http://arxiv.org/abs/1405.0126

    peace

211.

    REC

    Surely you realize the title of this thread is

    An attempt at computing dFSCI for English language

    and not

    An attempt at computing dFSCI for ATP synthase sequences

    peace

  212. “An attempt at computing dFSCI for English language”

    Yes, yes….we’re all concerned with objectively demonstrating Shakespeare has intelligence and nothing else.

  213. FMM,

    An arbitrary finite sequence e[0], e[1], e[2], …, e[n] can be printed by this obvious algorithm:

    # prints an arbitrary finite sequence e[0], ..., e[n]
    for i in range(n + 1):
        print(e[i])

    Gpuccio was sloppy in not excluding this sort of algorithm, but as I said, I’m giving him a pass. He’s got bigger problems than that to deal with.

  214. This has been an interesting post. GP computes dFSCI for the English language and his critics cry out, “Yes, but what is it good for?”

    I am eagerly awaiting his next post, which will likely explain what FSCI is good for, at which time his critics will cry out, “Yes, but can you compute it?”

    Darwinists are fun–maybe a little whacked out–but fun.

  215. “GP computes dFSCI for the English language”

    He offered the previously “calculated” example of ATP synthase.

    …and how does the fias/co of the English language go…it is specified in the dictionary, and makes sense to us…so intelligence?

    See above. Is the dFSCI/o of ATP synthase=0?

  216. StephenB,

    I’m sure it feels good to pretend that ID critics are “whacked out”, but doesn’t it create some cognitive dissonance for you, since in reality the critics don’t conform to your caricature?

    I am quite clear on what can and cannot be calculated, and what the problems are with each of CSI, FSCO/I, and dFSCI:

    Dembski’s problems are that 1) he can’t calculate P(T|H), because H encompasses “Darwinian and other material mechanisms”; and 2) his argument would be circular even if he could calculate it.

    KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it.

    Gpuccio avoids KF’s problem by explicitly leaving Darwinian mechanisms out of the numerical calculation. However, that makes his numerical dFSCI value useless, as I explained above. And gpuccio’s dFSCI has a boolean component that does depend on the probability that a sequence or structure can be explained by “Darwinian and other material mechanisms”, so his argument is circular, like Dembski’s.

    All three concepts are fatally flawed and cannot be used to detect design.

  217.

    keith s, do you honestly think that (print out) = (produce), or are you just trying to blow smoke for the hell of it?

    Never mind I know the answer.

    I agree with StephenB Darwinists can be fun.

    peace

  218. FMM,

    Apparently you are unfamiliar with the concept of Kolmogorov complexity.

    What gpuccio should have done in his procedure, but failed to do, was to limit the Kolmogorov complexity of the algorithms considered.

  219. StephenB, fifthmonarchyman
    I could calculate the entropy of sonnets and claim the derived value proves sonnets are designed because I don't see any sonnet algorithms in nature. How is that different from the dFSCI calculation? gp calculates AND checks that there are no natural algorithms, and then concludes sonnets are designed. Where is the need to calculate anything at all?

    F/N: Collins English Dict: >> sonnet (ˈsɒnɪt) prosody n 1. (Poetry) a verse form of Italian origin consisting of 14 lines in iambic pentameter with rhymes arranged according to a fixed scheme, usually divided either into octave and sestet or, in the English form, into three quatrains and a couplet >> KF

  221. KF
    GP has calculated the dFSCI of a Shakespeare sonnet, so 'sonnets' in the context of this thread means Shakespeare sonnets

  222. KS:

    For record — at this stage, with all due respect but in hope of waking you up — I no longer expect you to be responsive to mere facts or reasoning, much as I soon came to see with committed Marxists, based on their behaviour, back in student days. In that context, I find the self-stultifying circles in your retorts to self-evident first principles of right reason to be diagnostic and of concern.

    Now, on your attempted talking points of deflection and dismissal of the FSCO/I metric model I developed from Dembski's (as opposed to demonstrating it as a theorem in the Geometric QED sense), in the context of discussions involving VJT, Paul Giem and myself in response to P May wearing the persona Mathgrrl:

    Chi_500 = I*S – 500, functionally specific bits beyond the Solar System threshold of complexity
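
    (As a minimal sketch of the model just stated, treating S as a 0/1 dummy variable for functional specificity and using illustrative names of my own, the metric computes like so:)

        # Sketch only: Chi_500 as an information-beyond-a-threshold metric.
        # info_bits is the information measure I; S is 1 if functionally
        # specific, else 0; positive Chi_500 means beyond the threshold.
        def chi_500(info_bits, functionally_specific):
            S = 1 if functionally_specific else 0
            return info_bits * S - 500

        # e.g. chi_500(1000, True) -> 500, comfortably beyond the threshold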

    Just for reference, in Sect A of my always linked note you will see a different metric model that goes directly to FSCI values by using info values and two multiplied dummy variables, one for specificity and one for complexity beyond a relevant threshold. That too does the same job, but does not underscore the point that the Dembski model is an info beyond a threshold model. Which replies implicitly to a raft of dismissive critiques.

    Perhaps you are unaware of my Electronics background, which is famous for the many models of Transistor and Amplifier action that can do the same job from diverse perspectives. And my favourite is to take an h parameter model and simplify, to where we have hie driving a dependent perfect current source with an internal load and an external one, both shunted to signal ground through the power supply. Weird and mystifying at first, but very effective until parasitic capacitances have to come into play, whereupon go for a simplified hybrid pi, until you reach points where wires need to be modelled, and you need to turn everything into waveguides. At which point, go get yourself some heavy duty computational simulations.

    (Of course, nowadays, we have SPICE fever, with 40+ variable transistor models to cloud the issue. If that sounds like the problems with Economics, you betcha! For me, if it is useful to take Solow, modify with a Human Capital model and spot linking relationships that speak to real world policy challenges, that has done its day's work. As in, tech multiplies labour but depends on lagged investments in human capital to bring a work force to a point of being responsive to the tech, in an era where the 9th grade edu that drove the Asian Miracle is not good enough anymore. Then, we see the investment challenge faced by the would-be investor, Hayek's long tail of the temporal/phase structure of investment, malinvestment (perhaps policy induced), instability amplification and roots in a community envt. Which hath in it the natural, socio-cultural and economic. Thence, interface sectors, on natural resources & hazards and their management, brains as natural resource thus health-edu-welfare issues, and culture of governance vs government institutions and policy making, all supporting the requisite pool of effective talent. No need to create a vast body of elaborate pretended Geometric proofs on sets of perfect axioms; reasonable, empirically relevant and supported is good enough for back-of-the-envelope Gov't work, what really rules the world. I trust you can catch the philosophy of modelling just outlined. Models were made for man, and not man for models. And don't fool yourselves that just because you can come up with dismissive objections you can go back to your favourite un-examined models that sit comfortably with your preferred worldview. In reality we all are going to be tickling a dragon's tail in any case, and should know enough to do so with fear and trembling. And yes, the echo of Feynman's phrase is intended.)

    The proper judgement of a model is its effectiveness, which is in the end an inductive logic exercise.

    And so models can be mixed, matched and worked with.

    Take the Dembski 2005 metric model, carry out the logging operation on its three components, apply the associative rule and see that we have two constants that may be summed to form a threshold value.

    Note the standard metric of information, as a log metric.

    Then, note that on reasonable analysis, subsystems of the cosmos may be viewed as dynamic stochastic processes that carry out in effect real world Monte Carlo runs that will explore realistic (as opposed to far-fetched) possibilities . . . think about Gibbs’ ensemble of similar systems. It is reasonable to derive a metric model of functionally specific info beyond a threshold, and test it against the base of observable cases. Similarly, to analyse using config space concepts and sampling, by randomness [broadly considered] including dynamic-stochastic processes including in effect random walks with drift (cf. a body of air being blown along, with the air molecules still having a stochastic distribution of molecular velocities and a defined temperature).

    Notice the utter sparseness of the possible sampling; whether by scattershot or by random walks from arbitrary initial conditions makes but little difference.

    Compare to 10^57 atoms of our solar system considered as observers of trays of 500 coins each. Flip-observe 10^14 times per second for 10^17 s, observe the comparison of a straw to a cubical haystack comparably thick as our galaxy as sample to possibilities. The samples by flipping can be set up to move short Hamming distance random walk hops as you please, it makes no material difference. The point is, by its nature functionally specific, complex organisation and associated information (FSCO/I) sharply constrains effective configs relative to clumped at random or scattered at random possibilities, and is maximally implausible to be found on a blind watchmaker search.

    Also, the great Darwinist hope of feedback improvement from increasing success presumes starting on an island of function, i.e. it begs the material question.

    Where, too, FSCO/I is quite recognisable and observable antecedent to any metric models, as happened historically, with Orgel and Wicken. It surrounds us in a world of technology. Consistently, it is observed to be caused by design, by intelligently directed configuration. Trillions of cases in point.

    Per induction and the vera causa principle, the best current explanation of FSCO/I, whether Shakespeare's Sonnets or posts in this thread or ABU 6500 3c Mag reels (there is a whole family of related reels in an island of function above and beyond the effect of good old tolerances), or source or object computer code, is design. Going beyond, it explains cases where we did not and cannot observe the actual deep past cause, also of D/RNA and the Ribosome system of protein synthesis that uses mRNA as a control tape.

    Which is where the root of objections lies.

    We all routinely recognise FSCO/I and infer design as cause, cf posts in this thread where we generally have no independent, before the fact basis to know they are not lucky noise on the net. After all noise can logically possibly mimic anything. (See the selective hyperskepticism/ hypercredulity problem your argument faces?)

    Just, when the same vera causa inductive logic and like causes like uniformity reasoning cuts across the dominant, lab coat clad evolutionary materialism and its fellow travellers, with their implausibilities that must be taken without question (or else you are “anti-Science” or you are no true Scotsman . . . ), all the hyperskepticism you wish to see gets trotted out. Because as Mom used to say to a very young KF, a man convinced against his will is of the same opinion still.

    I say this, to ask you to pause and think again.

    KF

  223. MT: Someone posted above what is obviously not a Sonnet. KF

  224. Me_Think:

    “If you also have to check for algorithms which can write sonnets, why bother with dFSCI calculations? You could see if you are aware of algorithms which can produce sonnet or whatever you are examining and if there are none, you can infer design. Why calculate dFSCI?”

    Because the non-design explanation of an observed functional configuration can be based on random variation, or on necessity algorithms, or both. In any case, the RV part, either alone or in the context of a mixed algorithm, must be analyzed by dFSCI or any similar instrument, because it is based on probability.

    That should be very easy to understand. I am really amazed at the insistence with which you and others worry about my “bothering”. I understand it’s for my sake, but please, relax! 🙂

  225. keith s:

    “Sounds like a new rule that you didn’t include in your original procedure. It’s a longstanding bad habit of yours to keep changing your argument in the middle of discussion without acknowledging that you are changing it.

    Also, drc466 need not use the sequence to specify itself. He can prespecify the target as “the winning numbers for this lottery, whatever they turn out to be.” You know the size of the target, and you know the size of the search space. The ratio is tiny. You’ll get a false positive.”

    🙂

    You really try all that you can, don’t you?

    I have discussed pre-specification for years. Just check. It's not important to me, because I never use it in any useful context. I have always said that it is perfectly legitimate to use a sequence to specify itself, but only as a pre-specification. It expresses the probability of finding that specific sequence again, and the target space is 1. But the sequence bears no functional information, except for the fact that it is in your hands and you can look at its bits. Even a child would understand that.

    Obviously, "the winning numbers for this lottery, whatever they turn out to be", is a correct specification. It can be used as a post-specification. And it has zero functional complexity: any number is in the target space, because all numbers, if extracted, will be the winning number. Being extracted is in no way connected to the information in the sequence, unless the lottery is fixed. In that case, indeed, a pre-specification of an extremely improbable result is a clear sign, one that any judge would accept, that the lottery is fixed. Unless you believe in lottery precognition (which would have its advantages).

    False positive? Bah!

    Do you even think for a moment before posting?

  226. Me_Think:

    “I don’t think Nature cares if a person says it is restricted to do something because he has a flawed algorithm that restricts it from doing something.”

    Now that we know that you are the Oracle for Nature, why bother doing science? Stay available, please.

  227. Reality at #202:

    Thanks for your contribution, which allows me to clarify a couple of important points.

    You say:

    “All I see is you claiming that some English text that is obviously designed or already known to be designed is designed. ”

    Ah! But this is exactly the point.

    a) "That is obviously designed" is correct, but the correct scientific question is: why is that obvious? And is that "obvious" reliable? My procedure answers that question, and identifies design with 100% specificity.

    b) "or already known to be designed" is simply wrong. My procedure does not depend in any way on independent knowledge that the object is designed: we use independent knowledge as a confirmation in testing the procedure, as the "gold standard" to build the 2-by-2 table for the computation of specificity and sensitivity (or any other derived parameter).
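
    To be concrete, this is all the 2-by-2 bookkeeping amounts to (a minimal sketch; the names are mine):

        # tp: designed objects inferred as designed; fn: designed but not inferred
        # tn: non-designed objects not inferred; fp: non-designed but inferred
        def sensitivity_specificity(tp, fp, tn, fn):
            sensitivity = tp / (tp + fn)   # how many designed objects we catch
            specificity = tn / (tn + fp)   # how many non-designed we correctly clear
            return sensitivity, specificity

    A claim of 100% specificity is simply the claim that fp = 0 on all tested cases, whatever the number of false negatives.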

    Please, see also this from my post #37:

    “Me_Think at #644:

    “gpuccio explained that dFSCI doesn’t detect design, only confirms if a design is real design or apparent design.”

    I don’t understand what you mean.

    dFSCI is essential to distinguish between true design and apparent design, therefore it is an essential part of scientific design detection.

    If you are not able to distinguish between true design and apparent design, you are not detecting design; you are only recognizing the appearance of design, which is not a scientific procedure, because it has a lot of false positives and a lot of false negatives. So, mere recognition of the appearance of design is not scientific design detection.

    On the contrary, dFSCI eliminates the false positives, and design detection becomes a scientific reality. Therefore, dFSCI is an essential part of scientific design detection.”

    Regarding your poetry, it is rather simple.

    The piece obviously has no good meaning in English. Therefore, we cannot use that specification for it.

    It is equally obvious that it is made of correct English words. So, it is certainly part of the subset of strings which are made of English words. That is exactly the subset for which I have computed functional information in my OP. As the result was (in the Roy-amended form, for 500,000 English words) 673 bits, we can safely exclude a random origin.

    So, the question is: can this result be the outcome of an algorithm?

    The answer is: yes, but not by any natural algorithm, and not by an algorithm simpler than the observed result. IOWs, the only possible source is a designed algorithm which is more complex than the observed sequence. Therefore the Kolmogorov Complexity of the string cannot be lowered by any algorithm.

    How can I say that? It's easy. Any algorithm which builds sentences made of correct English words must use as an oracle at least a dictionary of English words. Which, in itself, is more complex than the poem you presented.
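
    To make the point concrete, here is a toy sketch (purely illustrative, not a real generator): even the simplest such algorithm cannot run without carrying its dictionary oracle, which alone dwarfs the 600-character string it outputs.

        import random

        # A generator of "strings made of English words" needs a dictionary
        # oracle; a ~200,000-entry word list is far more complex than the
        # ~600-character text it produces.
        def word_salad(dictionary, n_words=120, seed=0):
            rng = random.Random(seed)
            return " ".join(rng.choice(dictionary) for _ in range(n_words))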

    Moreover, we can certainly make a further specification (DNA_Jock: is that again painting new targets which were not there before? Is that making the probability arbitrarily small?). Why? Because the poem has an acceptable structure in non-rhymed verses. That would certainly increase the functional complexity of the string, but the algorithm would also be more complex (maybe with some advantage for the algorithm here, because after all the verse length is very easy to check algorithmically). However, the algorithm is always much more complex than the observed result, if only because of the oracle it needs.

    So, the conclusion is easy: the poem is certainly designed, either directly or through a designed algorithm.

    Many of these objections arise from the simple fact that you always ignore one of the basic points of my procedure, indeed the first step of it. See my post #15:

    “a) I observe an object, which has its origin in a system and in a certain time span.”

    So, in the end, the question about the algorithms can be formulated as follows:

    “Are we aware of any explicit algorithm which can explain the functional configuration we observe, and which could be available in the system and the time span?”

    So, if your system includes a complex designed algorithm to generate strings made of English words via a dictionary oracle, then the result we observe can be explained without any further input of functional information by a conscious designer.

    I hope that is clear.

  228. keith s:

    “(I’m giving gpuccio a pass on the fact that there is always an algorithm that can produce any finite sequence, regardless of what it is. He’s having enough trouble defending dFSCI as it is.)”

    Very generous, but not necessary. Please look at my answer to Reality at #228.

  229. keith s:

    “Gpuccio has taken a fatally flawed concept — CSI — and made it even worse.”

    So, I am creative after all! 🙂

  230. kairosfocus #223,

    Nowhere in that logorrheic mess do you address the actual issue I raised earlier:

    KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it.

    Please, no more thousand-word tap dances. Address the issue.

  231. REC:

    In that post I was referring to the fact that no errors had been found in the computation in the OP. I am well aware of your biological arguments.

    Just as a first comment to what you say: are you really suggesting that there is little conservation in that family?

    Sure, if you align 23949 you will have more variance. But then you must use at least the Durston method, with correct methodology, to detect the level of functional conservation.

    With three sequences I was making the simple argument that those chains are highly constrained. Are you saying that it is not true? Have you compared that result with other similar results, even with three chains, for other proteins which are much less conserved, or completely unrelated?

    So I ask again: are you saying that those chains are not highly conserved in that family?

  232. keith s:

    “Gpuccio was sloppy in not excluding this sort of algorithm, but as I said, I’m giving him a pass. He’s got bigger problems than that to deal with.”

    Let me understand: are you saying that an algorithm can print a sequence it already knows?

    Amazing. This is even better than Methinks it’s like a weasel.

    If you meant other things, please explain.

  233. keith s:

    “What gpuccio should have done in his procedure, but failed to do, was to limit the Kolomogorov complexity of the algorithms considered.”

    I have always included a discussion of the Kolmogorov complexity in my detailed discussions about dFSCI. You can check.

    I have included a brief discussion in my answer to Reality at #228 (believe it or not, I had not yet read your post about that; I am going in order).

    So, please relate to that.

  234. gpuccio, to Me_Think:

    Because the non-design explanation of an observed functional configuration can be based on random variation, or on necessity algorithms, or both. In any case, the RV part, either alone or in the context of a mixed algorithm, must be analyzed by dFSCI or any similar instrument, because it is based on probability.

    As you well know, the “RV part” is the only part that factors into the number of bits of dFSCI. You neglect selection, which makes your number useless. KF has the same problem — see my comment above.

    What’s worse, the “RV part” is a standard calculation that was understood by mathematicians long before you were born.

    Thus, your contribution was nothing more than inventing an acronym for an old and well-known probability calculation.

    And you wonder why scientists laugh at ID?

  235. Me_think:

    “GP has calculated the dFSCI of a Shakespeare sonnet, so ‘sonnets’ in the context of this thread means Shakespeare sonnets”

    No. Wrong. I have taken a Shakespeare sonnet as an example, just to give a good face to the concept, but I have never specified the sonnet as “written by Shakespeare”. That would be foolish.

    Look at the OP:

    ” I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in english (OK, ancient english).”

    So, being of Shakespeare has never been an issue. Even in my more restricted specifications, I referred to being in rhymed verse and then to being a sonnet in English (for which KF’s definition is perfect).

  236. keith s:

    “As you well know, the “RV part” is the only part that factors into the number of bits of dFSCI. You neglect selection, which makes your number useless. KF has the same problem — see my comment above.”

    I don’t neglect selection. I discuss it separately, on its own merits. And in detail.

    “What’s worse, the “RV part” is a standard calculation that was understood by mathematicians long before you were born.”

    I don’t pretend that I have invented new mathematical methods. I have applied the following:

    a) Calculation of the number of combinations with repetition for n, k.

    b) Calculation of the number of permutations of a sequence.

    c) Simple algebraic operations, known to all

    to a specific context and to specific ideas.
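
    For concreteness, the whole arithmetic of the OP can be checked in a few lines (a sketch assuming the OP's figures: 200,000 words, 120 words per text, a 30-character alphabet):

        from math import comb, factorial, log2

        n_words, k, alphabet, length = 200_000, 120, 30, 600

        search_space = length * log2(alphabet)     # ~2944 bits
        combos = log2(comb(n_words + k - 1, k))    # combinations with repetition, ~1453 bits
        perms = log2(factorial(k))                 # permutations of 120 words, ~660 bits

        # an upper bound for the target space gives a lower bound for dFSCI:
        print(round(search_space - combos - perms), "bits")   # ~831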

    “Thus, your contribution was nothing more than inventing an acronym for an old and well-known probability calculation.”

    I am perfectly fine with that. Maybe I have also tried to discuss some points with some precision. But nothing really original.

    “And you wonder why scientists laugh at ID?”

    Yes. But I accept that others can have a sense of humor different from mine.

  237. gpuccio,

    I have discussed pre-specification for years. Just check.

    And:

    I have always included a discussion of the Kolmogorov complexity in my detailed discussions about dFSCI. You can check.

    Why are you asking people to track down your comments all over the Internet? These things matter, so include them in the description of your procedure.

    Show some discipline, write up a complete description of your procedure (like a scientist would), and keep it somewhere handy so that you can paste it into discussions like these.

    Instead, you’re posting half-assed descriptions that don’t make sense, and when someone points out an error, you say “Oh, I’ve covered that elsewhere. You can check.”

    Show some consideration for your readers. If it matters to your procedure, cover it in the procedure description.

  238. keiths:

    Thus, your contribution was nothing more than inventing an acronym for an old and well-known probability calculation.

    gpuccio:

    I am perfectly fine with that.

  239. keith s:

    This is a blog. My OPs, which are relatively recent, are an attempt at further systematizing my arguments. I have not yet written an OP on the computation of dFSCI; I am still at its definition. It will come.

    However, as you can see, I am ready to discuss all aspects when prompted. If you accuse me of not being able to discuss everything each time systematically, well, I am certainly culpable for that.

    And I maintain what I have said: I am perfectly fine with that acknowledgement of my small original contribution. What counts are the ideas, not the people who express them. May I quote Stephen King?

    “It is the tale, not he who tells it” (Different Seasons)

  240. keith s:

    The correct question is “Could this sequence have been produced by random variation plus selection, or some other ‘material mechanism’?”

    And keith’s position cannot answer that question and he thinks that is a poor reflection on ID. Also natural selection doesn’t come into play until there is a product that can be “seen” by nature.

    That is the problem-> unguided evolution can’t even muster testable hypotheses.

  241. Dembski’s problems are that 1) he can’t calculate P(T|H), because H encompasses “Darwinian and other material mechanisms”;

    What a joke! Evolutionists can’t provide the probabilities and they think that is our problem?! Evolutionists can’t muster a methodology and they think that is our problem?!

    Amazing

  242. KS,

    You have unfortunately confirmed my concern. I will just note a few points for onlookers:

    1 –> Personalities via loaded language only serve to hamper ability to understand; this problem and other similar problems have dogged your responses to design thought for years, consistently yielding strawman caricatures that you have knocked over.

    2 –> You will kindly note, I have consistently called attention to the full tree of life which, as the Smithsonian highlights, has OOL at its root. This points to the island of function phenomenon, and that FSCO/I includes that connected with the von Neumann Self Replicator in the cell along with gated encapsulation, protein assembly, code use, integrated metabolism etc.

    3 –> Thus, to the need to first explain reproduction from cellular level up before embedding in claimed mechanisms capable of originating body plans. Starting with the first one of consequence, the living cell.

    4 –> So also, to the pivotal concern of design theory, to get TO islands of function and how to effectively do so: (a) sparse Blind Watchmaker search vs (b) intelligently directed configuration. Of these, only b has actually been observed as capable of causing FSCO/I.

    5 –> Once we have ourselves such, there is no problem in a first life form diversifying incrementally and filling niches in an island of function. chance variation and differential reproductive success and culling [what the misnomer “natural selection” describes . . . nature cannot actually make choices] leading to descent with incremental modifications are fine in such a context. Most of the time, probably, such differential success will only stabilise varieties already existing.

    6 –> The onward problem is to move from such an original body plan to major multicellular body plans by blind watchmaker mechanisms, because of the island of function effect of multi-part interactive organisation to achieve relevant function and the consequent sharp constraint on possible configs relative to possible clumped or scattered arrangements of the same parts. Multiplied by sparseness of possible search, the needle in haystack exploration challenge, whether by scattershot or dynamic-stochastic walk with significant randomness.

    7 –> That is, once you hit the sea of non-function, you have no handy oracle to guide you on blind watchmaker approaches and you have a non-computable result on resource inadequacy. Body plan origin and more specifically, origin of required FSCO/I by blind Watchmaker mechanisms have no good analytic or observed experience grounds.

    8 –> Origin of FSCO/I by intelligently directed configuration aka design is a routine matter, and we have in hand first steps of bio-engineering of life forms. Just yesterday I was looking at a policy document on genetic manipulation of foods.

    9 –> So, accusations of dodging NS on your part are a strawman tactic.

    10 –> Likewise, I outlined how models are developed and validated, underscoring that the Chi_500 model:

    Chi_500 = I*S – 500 functionally specific bits beyond the sol system limit

    . . . is such a model, developed in light of the Dembski 2005 metric model for CSI, and exploiting the fact that logs may be reduced, yielding info metrics in the case of log probabilities. The actual validation is success in recognising cases of design, whilst consistently not generating false positives. False negatives are no problem, it is not intended to spot any and all cases of design . . . the universal decoder wild goose chase.

    11 –> I know, you and TSZ generally wish to fixate on debating log [p(T|h)] — note the consistent omission in your discussions that we are looking at a log-probability metric, i.e. an informational one (and relevant probabilistic hyps as opposed to any and every one that can be dreamed of or suggested or hyperskeptically demanded would be laughed out of court in any t/comms discussion as irrelevant) — in the Dembski expression. I simply point out by referring to real world dynamic-stochastic cases, that abstract probabilities may often be empirically irrelevant, as there are limits to observability in a sol system of 10^57 atoms and 10^17 s, or the observed cosmos extension.

    12 –> As has been repeatedly pointed out and dismissed or ignored, a search of a config space of cardinality W will be a subset and the Blind Watchmaker Search for a Golden Search (S4GS, a riff on Dembski’s S4S) . . . and remember search resource sparseness constraints all along . . . will have to address the power set of cardinality 2^W. And that can cascade on, getting exponentially worse.

    13 –> So, as has been repeatedly pointed out and ignored, the sensible discussion is of reasonably random searches in the original space, with dynamic-stochastic patterns and sparseness, in the face of deeply isolated islands of function. Such searches are maximally unlikely to succeed. On average, they will perform about as well as . . . flat random searches of the space, which with maximal likelihood, will fail. No surprise, to one who ponders what is going on.

    14 –> Where, such gives us a reasonable first estimate of the probability value at stake, if we want to go down that road. P(T) = T/W, starting with either scattershot search or arbitrary initial point dynamic-stochastic walks not reasonably correlated to the structure of the space. No S4GS need apply, in short.

    15 –> This can then reckon with the relevant facts that in a computer memory register there is no constraint on bit chains, we can have 00 01 10 or 11. In D/RNA we can have any of ACGT/U following any other. Confining ourselves to the usual, correctly handed AA’s, any of the 20 may follow any other of the 20.

    16 –> So, reasonably flat distributions are generally reasonable; and if we go on to later patterns not driven by chaining but shaped by the after-the-fact need (given the DNA code) to be a folding, functioning protein in a cell context, variations in frequency and flexibility of AAs in the chain can be and are factored in in more sophisticated metrics that exploit the avg info per symbol measure, H = – SUM p_i log p_i. This was discussed with you and other objectors only a few days ago here at UD.

    17 –> Once we start with say a first organism with say 100 AAs per protein avg [Cy-C as model], and at least 100 proteins, we see coding for 10,000 AAs and associated regulatory stuff and execution machinery as requisites. Self replication requires correlations between codes and other units. At even 1 bit or a modest fraction thereof per AA, the material point remains, the cell is well past FSCO/I thresholds and is designed.

    18 –> Just the digitally coded FSCI — dFSCI — in the genome is well beyond the threshold. The FSCO/I in the cell is only reasonably explainable on design. The bases coding just for 10,000 AAs would number 30,000 [which is probably an order of magnitude too low.]

    19 –> And, to go on to novel body plans, reasonable genomes run to 10–100+ million bases, dozens of times over. Not credible on Blind Watchmaker sparse search.

    So, while it is fashionable to impose the ideologically loaded demands of lab coat clad evolutionary materialism and/or fellow travellers even written into question-begging radical redefinitions of science and its methods, the message is plain. Absent question begging the reasonable conclusion is that the world of life is chock full of strong, inductively well warranted, signs of design, with FSCO/I and its subset dFSCI, at their heart.

    KF

  243. kairosfocus said: “Personalities via loaded language only serve to hamper ability to understand…”

    kairosfocus, FOR RECORD, rarely, if ever, have I encountered a person who is as hypocritical and sinister as you are. Your language is thoroughly “loaded” with “personalities”. You constantly accuse Keith S and everyone else who disagrees with you or even just questions you of being evil, radical Marxists, liars, and a long list of other despicable things. Your insulting, sanctimonious, malicious, libelous accusations are FALSE and YOU are in dire need of CORRECTION. Sixty of the best with Mr. Leathers would be a good start in that correction.

  244. gpuccio, are you going to answer my questions about the text I posted above?

  245. Reality- Biological information, as defined by Crick, exists. Your position cannot account for it. And we understand that bothers you.

  246. Gpuccio @ 187

    “ALL post-hoc specifications are suspect.”

    Except when they work perfectly.

    Oh dear. Post-hoc specifications are suspect because they work perfectly.
    You are shooting yourself in the foot here.

    IOWs, we are not trying to sell a drug at the 0.05 threshold of alpha error. I am afraid that you are completely missing the point.

    Actually, the analogy is spot on. You are applying Fisherian testing to your data. You have at least three problems. What you and Dembski are doing is “formulating” (and I use the word loosely) a null hypothesis, examining a data set, and asking “what is the probability of getting a result THIS extreme (or more extreme) under my null?” If the probability is below an appropriate threshold, then the null is rejected.
    Problems A and B are related. A) You have not adequately described your null, the so-called “Chance hypothesis”. B) Some of you (e.g. Winston Ewert) are performing multiple tests, considering various “chance hypotheses” sequentially, rather than as a whole. I’ve made fun of this previously. The take-home is that, in order to perform the test and arrive at a p value, you need to be able to describe the expected distribution of your metric under the global “Chance Hypothesis”, which includes the effects of iterative selection. One can debate whether this is possible or not, but it is abundantly clear that no-one has even tried.
    You are indulging in Problem C: you are adjusting your metric after you have seen the data. This is the post-hoc specification. It renders the results of your calculations quite useless. By way of illustration, if you give me a sufficiently rich real-world data set for two groups of patients, X and Y, I can demonstrate that X is better than Y. AND I can demonstrate that Y is better than X, so long as I am allowed to mess with the way “better” is measured. Hence the FDA & EMA’s insistence on pre-specified statistical tests.
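
    By way of a toy illustration of Problem C (invented data, pure noise; a sketch, not anyone's actual analysis), choosing the metric after seeing the data lets you reach whichever conclusion you prefer:

        import random

        rng = random.Random(1)
        # Two patient groups, five candidate endpoints, no real difference.
        X = [[rng.gauss(0, 1) for _ in range(50)] for _ in range(5)]
        Y = [[rng.gauss(0, 1) for _ in range(50)] for _ in range(5)]

        mean = lambda v: sum(v) / len(v)
        diffs = [mean(x) - mean(y) for x, y in zip(X, Y)]

        # Pick the endpoint post hoc and either conclusion follows:
        print("X beats Y on endpoint", diffs.index(max(diffs)))
        print("Y beats X on endpoint", diffs.index(min(diffs)))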

    No, four problems! Amongst your prob…
    I’ll come in again.
    There’s also a subtle issue around the decision to do a test. If potentially random text is flowing across your desk and you are sitting quietly thinking, “Not sonnety, not sonnety, not sonnety, OOOH! Maybe sonnety, I will test this one!” then you have to be able to model the filtering process, or you’re screwed.

    The example of the limerick is the same as saying that I should consider also the probability of Chinese poems. As I have explained, at those levels of improbability those considerations are simply irrelevant.

    “Yes”, and “Sez you”, respectively

    My statement has always been simple: the procedure works empirically, as it is, with 100% specificity. In spite of all your attempts to prove differently.

    I have never made any attempt to prove that your procedure does not work “empirically”. With appropriate post-hoc specifications, it should work every time. On anything.

    Then, if your point is simply to say that the space of proteins is different from the space of language, that is another discussion, which we have already done and that we certainly will do again. But it has nothing to do with logical fallacies, painting targets, and making the probability arbitrarily small. IOWs with the methodology. IOWs, with all the wrong arguments that you have attempted against the general procedure.

    Well I do think they are different, but you asked a specific question at 193 “Is my math wrong?”, so I’ll humor you once more.
    Two errors:
    1) you are equating the ratio of the size of the target space and the size of the total space with a probability. This assumes, incorrectly, that all members are equiprobable.
    2) Since you are allowing repetition in your 120 words, then about one text in 1,700 will have word duplication. You need to adjust your n! term when this happens. Unlike error (1), this one is “not material”
    You need to fix error 1 before you can claim to have calculated dFSCI. Good luck.
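
    And error 1 in miniature (invented numbers; a sketch):

        # |target|/|space| is a probability only under a uniform distribution.
        # Alphabet {a, b}, texts of length 2, target = {"aa"}:
        ratio = 1 / 4        # |target| / |space|
        p_a = 0.9            # suppose 'a' is far likelier than 'b'
        p_target = p_a ** 2  # P("aa") = 0.81, nowhere near 0.25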

    Thank you REC @ 209 for running the alignment on a decent number of ATPases. 12 residues are 98% conserved. I suspect he might have been better off going with his Histone H3 example, but H3 doesn’t look complicated. Re your reply to him at 232. I won’t speak for REC, but I am happy to stipulate that extant, traditional ATP synthase is fairly highly constrained; you could return the favor by recognizing that this constraint informs us about the region immediately surrounding the local optimum, nothing more. Rather, I think the problem is with your cherry-picking of 3 sequences out of 23,949 for your alignment, which smacks of carelessness. Why not use the full data set?
    P.S. I did enjoy kf’s treatise at 223 on how NOT to build an amplifier. Gripping stuff.

  247. Biological specification refers to function. We don’t care what you call it because we understand that your position cannot account for it regardless.

    And if you don’t like our null we happily await your numbers. We have been waiting for over 100 years…

  248. Reality at #245:

    “gpuccio, are you going to answer my questions about the text I posted above?”

    I believed that my post #228 was an answer.

  249. fifthmonarchyman: Exactly how much of a sonnet is new CSI and how much is borrowed from the background is a great question.

    Common ground! Shakespeare exhibits a huge amount of background knowledge of what we call the human condition.

    fifthmonarchyman: However I’m pretty sure that most folks would say that at least a small amount of Shakespeare’s work was original as opposed to borrowed from his environment.

    We would say a great deal was original to Shakespeare.

    fifthmonarchyman: If an algorithm can duplicate the pattern by any means whatsoever as long as it is independent of the source string then I discount the originality of the string.

    A random sequence is original by that definition, and even harder to duplicate.

    gpuccio: I infer design simply because this is a piece of language with perfect meaning in english

    Seems rather parochial and subjective.

  250. DNA_Jock:

    Good thoughts, as usual. But I have to disagree on many things.

    “Oh dear. Post-hoc specifications are suspect because they work perfectly.”

    No. Follow me. I apply the specification “having a good meaning in English”. And I make the computation to exclude possible random results. This is a procedure, well defined. I have generated the specification after seeing the sonnet, therefore it is a post-specification.

    Now, I test my procedure by applying it to any sequence of 600 characters. I easily detect those which have good meaning in English, and I infer design for them.

    Please note that I am applying my model to new data, not only to the original sonnet. IOWs, I am testing my model and validating it.

    Now, two things are possible.

    a) My model works. When I compare the results of my inference with the real origin of the strings (which is known independently, and was not known to me at the time of the inference), I see that all my positive inferences are true positives, there is no false positive, and my negative inferences are a mix of true negatives and false negatives.

    b) My model does not work, and a lot of false positives are found among my inferred positives.

    It’s as simple as that. What has happened up to now?

    More in next post.

  251. DNA_Jock:

    “A) You have not adequately described your null, the so-called “Chance hypothesis””

    I have. I have assumed a text generated by a random character generator. A uniform probability distribution is the most natural hypothesis, but it is not necessary. Any probability distribution of the characters will do. Do you want to adjust the probability of each single character according to its probability in English? Be my guest. It would be added information, but OK, I am generous today. Now your piece of English with good meaning is nearer. Are you happy? 🙂
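
    If it helps, that null is trivially simulable (a sketch; the 30-symbol alphabet is the one assumed in the OP):

        import random, string

        # Uniform random character generator over 26 letters, space and
        # three punctuation marks (30 symbols in all).
        ALPHABET = string.ascii_lowercase + " .,'"

        def null_text(length=600, rng=random.Random(0)):
            return "".join(rng.choice(ALPHABET) for _ in range(length))

    And if you prefer English character frequencies, swap in random.choices with a weights argument; as I said, the inference does not depend on uniformity.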

  252. DNA_Jock:

    “1) you are equating the ratio of the size of the target space and the size of the total space with a probability. This assumes, incorrectly, that all members are equiprobable.”

    See previous answer. It is true, however, that in my OP I assume a uniform distribution for the characters.

    “2) Since you are allowing repetition in your 120 words, then about one text in 1,700 will have word duplication. You need to adjust your n! term when this happens. Unlike error (1), this one is “not material”

    That only means that some permutations will be repeated, which makes the target space even smaller (though not by much). Since I have, in any case, computed a lower threshold for the functional complexity, I can’t see how that is a problem.

    “You need to fix error 1 before you can claim to have calculated dFSCI. Good luck.”

    Do you really believe that? Error 1 is not material either. But even if it could increase the probability of the target space a little, do you really believe that such an adjustment would compensate for my choice to use the target space of all combinations of English words instead of the target space of all the combinations of English words which have good meaning in English?

    OK, you have tried.

  253. Shakespeare had an extensive dictionary, knowledge of grammar, rhyme, scansion, and verse structure; not to mention an understanding of what people enjoy, and of the human condition.

    Can you quantify the amount of additional ‘information’ in a Shakespearean sonnet that is not found in the background knowledge?

    I note that you had no response to my commentary regarding your problems with Fisherian testing, but instead chose to focus on comments I made, as an aside, to humor you, about a text-detection procedure that I have always maintained is a deeply flawed analogy. I realize that I may have confused you when I referred to your sonnet-detector under problem #4, but Problems A, B, C (and even #4) refer to your protein-design-detector.
    But I’ll keep going with your flawed analogy, because it is irrelevant irreverent fun.
    You make a big deal out of the fact that you are validating your text-detection procedure. As Analytical Method Validation Protocols go, yours leaves something to be desired, but, given my view of the relevance of sonnet-detection to proteins, I will let that slide, and accept that you have been able, by blind-testing known sonnets and known non-sonnets, to get approximate values for the specificity and sensitivity of your sonnet-detection procedure. Whether it is robust has not been tested. The key point here, which you admit, is that you have to make use of a “truth standard” that allows you to distinguish sonnets from non-sonnets, quite independent of your detector. This is an essential part of your method for validating your sonnet-detector. Cool.
    Now, if you want to convert your sonnet-detector into a limerick detector, you will have to adjust some parameters, based on what you know about limericks. Then, to validate your limerick-detector, you will need a “truth standard” that allows you to independently distinguish limericks from non-limericks.
    Likewise for haikus, etc., etc.
    What truth standard do you plan on using to validate your protein-design detector?

  255. DNA_Jock:

    “Thank you REC @ 209 for running the alignment on a decent number of ATPases. 12 residues are 98% conserved. I suspect he might have been better off going with his Histone H3 example, but H3 doesn’t look complicated. Re your reply to him at 232. I won’t speak for REC, but I am happy to stipulate that extant, traditional ATP synthase is fairly highly constrained; you could return the favor by recognizing that this constraint informs us about the region immediately surrounding the local optimum, nothing more. Rather, I think the problem is with your cherry-picking of 3 sequences out of 23,949 for your alignment, which smacks of carelessness. Why not use the full data set?”

    This is more important, so I will try to be more precise.

    Durston computes a reduction of uncertainty for each aligned position, across a large number of sequences. Then he sums the results over all positions to get the total functional constraint.
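
    In code, the core idea is roughly this (a simplified sketch of the method as I understand it; it ignores Durston's null-state refinements and gap handling):

        from math import log2
        from collections import Counter

        # Per aligned column: reduction of uncertainty from the ground state
        # (log2 20 for amino acids) to the observed distribution; sum over
        # all columns for the total functional constraint in fits.
        def fits(alignment):                    # equal-length sequences
            total = 0.0
            for column in zip(*alignment):
                n = len(column)
                h = -sum((c / n) * log2(c / n) for c in Counter(column).values())
                total += log2(20) - h
            return total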

    I am happy that you admit that extant, traditional ATP synthase is fairly highly constrained. That is my point.

    And I am happy to return the favor by recognizing that this constraint informs us about the region surrounding the local optimum. But you should admit that there are very different functional constraints for different functional “local optima”. And I don’t really agree with your “immediately”. Such a high level of conservation implies a steep separation of the peak from a non-functional, or at most scarcely functional, valley.

    The discussion about local optima could lead us very far. One of my favorite papers is the one about the rugged landscape, where the authors conclude that local optima for the particular function they are testing (the retrieval of viral infectivity in the lab) are so sparse that the optimal local optimum (the wild type) could reasonably be found only by starting from a random library of about 10^70 sequences.

    Now, if local optima are so sparse, and I do believe they are, how can they be so numerous as you seem to believe, so numerous that their sheer number could tamper with the probability barriers? And if there are so many, and the search is really random, why don’t we see such a variety? Why, in the rugged landscape paper, is the wildtype by far the best and most functional? If local optima are distant, and evolve independently by independent lucky hits, we should see a lot of them. We certainly see many of them, but I must remind you that in your post you suggested:

    “Tough to say which local optima have never been found, when all we have to go on is the ones that survived. 2^1000 seems possible, but the number could be a lot higher.”

    The behavior of these local optima is very strange, in Darwinist rhetoric. When they are needed to improve probabilities, there are certainly a lot of them. A lot a lot. I had suggested 2^1000 as a mad hyperbole, but that was not enough for you. A lot higher!

    Then, if I ask why we don’t see big traces of all those independent local optima and of their independent optimization, or why, in the rugged landscape paper, the optimal local optimum could not be found except in the wildtype, suddenly they “have never been found” or “have not survived”.

    OK. for the moment let’s leave the local optima alone.

    I have great expectations for histone H3. You say that it “doesn’t look complicated”, but you are probably aware of its growing importance in understanding epigenetic regulation. That can well be a strong functional constraint for the sequence.

    Now, I will return the favor again, and I am happy to admit that my “cherry-picking of 3 sequences out of 23,949 for my alignment” is not a rigorous scientific procedure. You say that it “smacks of carelessness”. But that is not the real question.

    There is no doubt that the full procedure is to use the full data set and apply the Durston method (or any other method which can be found to work better and to be more empirically supported).

    So, why did I align three sequences and take only the identities?

    As I have clearly stated many times, that is only my “shortcut”. But, I believe, an honest one. I did not have the data from Durston about those two chains, but I was, and still am, fascinated by their high conservation and very old age, and by their very special function in an even more complex biological machine. So, I have made, explicitly, a simple tradeoff: I have taken only one sequence for each of the three kingdoms (and the human one for metazoa) and I have aligned them. And I have given the results explicitly.

    Now, it should be clear that when I only compute the identities in that alignment, giving 4.3 bits for each one, I am certainly overestimating the absolute identities (obviously, on 23,949 sequences, it is much more likely to have some divergence, and I must say that 12 residues with 98% conservation on the whole set looks rather stunning). But I am also not considering all the rest: the similarities, which I could still have emphasized in the basic alignment in BLAST, and all the other constraints which the Durston method can detect by comparing the frequencies of each AA at each site in the sample.

    IOWs, I have badly underestimated everywhere else. In my simple shortcut, I attribute 4.3 bits for each absolutely conserved position (378 in all), but I attribute nothing for all the other positions, as though they were completely random, which is certainly not the case. On a total of 1082 positions (in the two chains), I have therefore vastly underestimated the fits of 704 AA positions, setting them to 0.
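
    Spelled out, the shortcut arithmetic is just (378 being the identity count from my alignment):

        from math import log2

        identities = 378              # absolutely conserved positions in my alignment
        bits_per_site = log2(20)      # ~4.32 bits for a fully constrained AA site
        print(round(identities * bits_per_site))   # ~1634, my "about 1600 bits"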

    I have done that for the sake of simplicity, because I have not the time and the tools to make complex biological analyses (I am not a biologist, only a medical doctor), and because going too deep in the details of biology, especially when writing a general OP on an important general concept, is not the best option.

    So, to sum up: REC’s comments are correct, but they do not paint the right scenario. If his purpose is only to attack my “carelessness”, OK, that’s fine. But if he really suggests that my argument about the high conservation of those two chains is not realistic, I have to disagree. Those two chains are extremely conserved, even if compared with many other conserved sequences. And I am really confident that, if we apply the Durston method to the full set proposed by REC, the result will not be very different from my 1600 bits for both sequences, maybe higher.

    I could try to do that, I don’t know if I can. We will see.

  256. DNA_Jock:

    “What truth standard do you plan on using to validate your protein-design detector?”

    The gold standard is used to validate a procedure, and then the procedure is used in cases where we don’t know the gold standard value. Otherwise, it would be useless, and keith would be right.

    In the problem of origins, I think that nobody has an independent “truth standard”. For obvious reasons, we try to understand what we observe, but we have no videos of how it happened on Youtube.

    The design detection procedure is validated with human artifacts. It has 100% specificity in all cases where we can know the origin independently.

    And the procedure is the same: it does not depend on sonnets or limericks: the functional definition can vary, but if a sufficiently high complexity can be linked to the definition, any definition, design is always the cause.

    The application to artifacts that, if confirmed as such, are not human, is an inference by analogy. A very strong one, and a very good one. This is the one step that, in the end, each person can individually accept or decline. Something like: “I understand the procedure, and it is correct, but I will not make that final jump of the inference by analogy”. OK, I can accept that.

    Let’s call it “the Fifth Amendment in science”. 🙂

  257. gpuccio,

    You show signs of starting to think about landscapes. This is good. You do seem to have misunderstood my point about the difference between “never been found” and “did not survive”.
    You were claiming that nearby optima must be rare or inaccessible because they (according to you) “have never been found”. I pointed out that you cannot draw any conclusions about whether they have been found: all we can observe are the ones that have survived.

    Parodying my position as you did here:
    “Then, if I ask why we don’t see big traces of all those independent local optima and of their independent optimization, or why, in the rugged landscape paper, the optimal local optimum could not be found except in the wildtype, suddenly they “have never been found” or “have not survived”. ”
    is inaccurate.
    You may be confusing “have never been found” during the course of evolution with “have never been found, i.e. observed” by biologists.

    Do you have the citation for the retrieval of viral infectivity paper? I would love to read it.

    Most importantly, would a shorter answer to my question , “What truth standard do you plan on using to validate your protein-design detector?”, be

    “I don’t have one.”

  258. DNA_Jock:

    I apologize if I have misunderstood that statement.

    Here is the rugged landscape paper. It is very interesting.

    http://www.plosone.org/article.....ne.0000096

    You say:

    Most importantly, would a shorter answer to my question , “What truth standard do you plan on using to validate your protein-design detector?”, be

    “I don’t have one.”

    let’s make it:

    “Nobody has one, for any theory about protein origins”.

    The point is, we have no direct evidence on how proteins originated. All the evidence, whatever the theory or paradigm we use, is indirect.

  259. keith s:

    The calculation adds nothing.

    Now, could you please point this out to gpuccio before he embarrasses himself further? He won’t accept it from me, but he might from you.

    This is not a serious answer. I pointed out to you the importance and purpose of step #3 in Procedure 2. You’re willfully ignoring it.

    Why should anyone here at UD take you seriously?

  260. Gp,

    Thanks for the paper – very cool. I know I have seen Fig 5 before, but I do not recall reviewing the body of the paper previously. I will definitely take a look.
    Tx again

  261. gpuccio:

    This is the essence of DNA_jock’s concern:

    Yes ATP was getting synthesized before humans existed, but the specification “ATP synthase” was generated by humans AFTER the biochemical activity was delineated. And re-defined by you in the light of Nina’s work.

    I almost get the impression that DNA_jock has, unlike countless others, actually wrestled with Dembski’s work, and likely his “No Free Lunch” presentation of ID. That is good. Very good, if true.

    The discussion is all about “specification,” and whether it is “pre” or “post.” He insists that it be “pre.”

    Here he suffers from a fundamental misunderstanding of Dembski (if he has read him), in which he fails to understand that a “specification” relies on the recognition of a “pattern.” How, then, can you make a “specification” prior to recognizing the “pattern” it forms?

    DNA_jock’s position is this: it is you, gpuccio, who are making this “specification.” He is wrong, because it is NOT gpuccio who is making the “specification,” it is Nature itself which ‘recognizes’ this specification—or else we wouldn’t even be talking about it. You, gpuccio, have only “recognized” what Nature has first “recognized.”

    Here’s an analogy:

    The SETI observers “recognize” a “pattern” in some electro-magnetic signal they’ve received. From this “pattern,” they decide that it is so “unnatural” (i.e., it falls outside normal patterns of EM transmissions; IOW, it is highly IMPROBABLE) that its origin is intelligent life outside of our planet, possibly outside our galaxy.

    Unless something is responsible for this “highly improbable” signal, why, and how, did the SETI observers conclude that they had evidence of intelligent life beyond earth?

    Per DNA_jock, the SETI observers are doing this all “post-hoc,” and therefore their conclusion is meaningless.

    The criterion for ‘specified, complex information’ is the ‘independence’ of the ‘source’ of the information from the ‘decipherer’ of the information. As long as DNA_jock takes the position that Nature does not “specify” the information, then there is only one “source” and one “decipherer,” and it’s you, gpuccio.

    So, DNA_jock, let us ask you directly: do you, or do you not, believe that Nature itself is the “source” of the information found in ATPase?

  262. DNA_Jock,

    How would you create a design-detecting method without testing it on known designs to see if it accurately detected designed artifacts?

    This discussion reminds me of a scientific method for determining authorship called “Stylometry.” Apparently everyone who writes leaves a statistically recognizable “wordprint.” The wordprint can identify the author of a document whose authorship is unknown if it can be compared with the known writings of candidate authors.

    This method was tested by having researchers determine the authorship of texts with known authors, to see if it came up with false positives. It did not. They then used the method on other writings, including anonymous Federalist Papers essays, to determine authorship. Here is the wikipedia article: http://en.wikipedia.org/wiki/Stylometry
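
    For what it’s worth, the core of such a test is simple enough to sketch. Below is a minimal illustration, in Python, of comparing a disputed text’s function-word frequencies against known samples from candidate authors. The word list, the distance measure, and the file names are all illustrative assumptions, not the method actually used in the Federalist study.

        # A minimal stylometry sketch: compare relative frequencies of common
        # "function words", which tend to form a stable per-author "wordprint".
        from collections import Counter

        FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "by", "upon", "on"]

        def wordprint(text):
            words = text.lower().split()
            counts = Counter(words)
            total = len(words) or 1
            return [counts[w] / total for w in FUNCTION_WORDS]

        def distance(p, q):
            # mean absolute difference between two frequency profiles
            return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

        def attribute(disputed, candidates):
            # pick the candidate whose known writing is closest to the disputed text
            d = wordprint(disputed)
            return min(candidates, key=lambda name: distance(d, wordprint(candidates[name])))

        # hypothetical usage, with made-up file names:
        # print(attribute(open("disputed.txt").read(),
        #                 {"Hamilton": open("hamilton.txt").read(),
        #                  "Madison": open("madison.txt").read()}))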

  263. gpuccio, I don’t agree that you answered my questions. Here’s a repost with my questions in bold:

    gpuccio, I just don’t understand what you’re trying to prove. All I see is you claiming that some English text that is obviously designed or already known to be designed is designed. How does that demonstrate that IDists can calculate, measure, or compute (Which is the correct term?) CSI, dFSCI, FSCO/I, or FIASCO, and can verify the intelligent design in or of things that are not obviously designed and not known to be designed? And how does what you’re doing establish that CSI, and dFSCI, and FSCO/I are anything other than superficial labels?

    In regard to English text, what can you tell me about the text below? Is it a sonnet, or what? Does it have meaning? Does it have good meaning? If it has meaning or good meaning, what is it? Was it generated by a conscious being, or by an algorithm? How much CSI, and dFSCI, and FSCO/I does it have? Show your work.

    O me, and in the mountain tops with white
    After you want to render more than the zero
    Counterfeit: o thou media love and bullets
    She keeps thee, nor out the very dungeon

    Their end. O you were but though in the dead,
    Even there is best is the marriage of thee
    Brass eternal numbers visual trust of ships
    Masonry, at the perfumed left. Pity

    The other place with vilest worms, or wealth
    Brings. When my love looks be vile world outside
    Newspaper. And this sin they left me first last
    Created; that the vulgar paper tomorrow blooms

    More rich in a several plot, either by guile
    Addition me, have some good thoughts today
    Other give the ear confounds him, deliver’d
    From hands to be well gently bill, and wilt

    Is’t but what need’st thou art as a devil
    To your poem life, being both moon will be dark
    Thy beauty’s rose looks fair imperfect shade,
    ‘you, thou belied with cut from limits far behind

    Look strange shadows doth live. Why didst thou before
    Was true your self cut out the orient when sick
    As newspaper taught of this madding fever!
    Love’s picture then in happy are but never blue

    No leisure gave eyes against original lie
    Far a greater the injuries that which dies
    Wit, since sweets dost deceive and where is bent
    My mind can be so, as soon to dote. If.

    Which, will be thy noon: ah! Let makes up remembrance
    What silent thought itself so, for every one
    Eye an adjunct pleasure unit inconstant
    Stay makes summer’s distillation left me in tears

    Lambs might think the rich in his thoughts
    Might think my sovereign, even so gazed upon
    On a form and bring forth quickly in night
    Her account I not from this title is ending

    My bewailed guilt should example where cast
    Beauty’s brow; and by unions married to frogs
    Kiss the vulgar paper to speak, and wail
    Thee, and hang on her wish sensibility green

  264. Reality,

    I assume that in your example, if it were an algorithm, it would choose words randomly from a list of words. Obviously the words themselves are English words that have individual meanings.

    As an aside, gpuccio’s method is supposed to not have any false positives, but it may have a lot of false negatives.

  265. Collin

    As an aside, gpuccio’s method is supposed to not have any false positives, but it may have a lot of false negatives.

    Since the method of calculating “dFSCI” only works for items already known to be designed, claiming “no false positives in detecting design” is completely worthless.

  266. Adapa,

    I’m not claiming anything. I’m just saying what the test is supposed to do. My point was that even if gpuccio’s test cannot tell for sure whether something is designed, that does not mean it is not a useful test. A test that cannot eliminate all false negatives can still be useful if it can eliminate all false positives.

    How do you know that dFSCI only works for items already known to be designed? That sounds like an article of faith.

  267. Collin

    I’m not claiming anything. I’m just saying what the test is supposed to do.

    No worries, I know it wasn’t your claim.

    How do you know that dFSCI only works for items already known to be designed? That sounds like an article of faith.

    Because gpuccio told us he needs that info. In his language example he can’t calculate the dFSCI unless he knows the string is an intelligible English phrase. If you give him symbols in a language he can’t understand (e.g. Chinese characters) he can’t calculate dFSCI. Again, that makes his test pretty worthless for design detection.

  268. Collin @263,

    You asked

    How would you create a design-detecting method without testing it on known designs to see if it accurately detected designed artifacts?

    I agree, it’s a doozy. I don’t see how you can. You need to be able to validate it against a known “truth standard”. And even then, your validation is only as good as your truth standard. And you may have difficulty ascertaining the domain of candidates over which your test gives valid results. That was my point.

    Thus gpuccio can validate his sonnet-detector and (separately) validate his limerick-detector, but he cannot validate his protein-design detector, a fact that he was gracious enough to admit.

  269. PaV,

    You are over-stating my position somewhat.

    I do not insist that specification be “pre-“; I do however warn that

    ALL pre-specifications are suspect

    and furthermore that

    …you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.

    When you say:

    DNA_jock’s position is this: it is you, gpuccio, who are making this “specification.” He is wrong, because it is NOT gpuccio who is making the “specification,” it is Nature itself which ‘recognizes’ this specification—or else we wouldn’t even be talking about it. You, gpuccio, have only “recognized” what Nature has first “recognized.”

    I must disagree. gpuccio had talked at some length about “ATP synthase” without any qualifiers. Now that he is aware of Nina’s work, the specification has changed to “the traditional ATP synthase”. The enzyme hasn’t changed. What it does hasn’t changed. Nature hasn’t changed.
    As I said @161:
    In light of Nina et al, gpuccio does two things: 1) He re-draws his circle so that it now excludes Alveolata, and renames the Walker circle “the traditional ATP synthase” 2) he draws a brand-spanking-new circle around the Alveolata bullet hole(s) because it is a “very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution”.
    [emphasis in original]

    Pre-specifying a “pattern” is notoriously difficult (just ask a statistician) and, as I understand it, SETI made some important pre-specifications about the frequency bandwidth of an ‘interesting’ signal. But I am pretty ignorant about SETI.

  270. Aaargh: obviously, I meant: ALL post-hoc specifications are suspect.

    What I wrote originally @183

    Aaargh.

  271. DNA-jock,

    In 271, it sounds like you are making a post-hoc specification. That’s suspect. 🙂

  272. I think I was asked above if “traditional” ATP synthases have sequence conservation among themselves.

    Of course they do. By definition! How else would you identify that a given protein is a “traditional” ATP synthase from its sequence? Do you get how circular this is?

    To you, the apicomplexan or N-ATPases or alternative ATP synthases are a “very different complex molecule, made of many different protein sequences”. What isn’t well conserved, you disregard; then you say: look how precisely specified this well-conserved group is!

    On the sequence level, here’s what is happening: you go to a database and pull 3 (or all) of the F1 alpha units. They are annotated as such because a bioinformatician has used an algorithm that recognizes conservation with other members of the group. Sequences in the gray area of 20-30% or less in common with another member of the group will be excluded by the algorithm. So the “alternative” F1 alphas, the N-ATPases, the apicomplexan ones are likely not even considered in the alignment.

    In nature, they work. They do the job.

    So, “traditional” isn’t Nature’s specification; it is a human grouping, a product of algorithmic lumping and splitting. If the question is what percent of sequence space makes a functional unit of an ATP synthase, you aren’t answering it by drawing a bulls-eye on your “traditional” subset of sequences, which itself represents only one fitness peak (and which, as I have demonstrated, appears quite a bit broader than you put it). You can have a look yourself:

    http://mobyle.pasteur.fr/cgi-bin/portal.py#jobs::clustalO-multialign.C13886649652004
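
    To make the 20-30% cutoff concrete, here is a rough sketch of that kind of identity filter. It assumes pre-aligned, equal-length sequences and an ungapped comparison, a simplification of what a real aligner such as Clustal Omega (linked above) actually does; the toy sequences are made up.

        # Rough sketch of an identity filter over pre-aligned sequences.
        def percent_identity(a, b):
            matches = sum(x == y and x != "-" for x, y in zip(a, b))
            aligned = sum(x != "-" and y != "-" for x, y in zip(a, b))
            return 100.0 * matches / aligned if aligned else 0.0

        def filter_family(reference, candidates, cutoff=30.0):
            # the 'lumping' step: quietly drops low-identity relatives
            return [s for s in candidates if percent_identity(reference, s) >= cutoff]

        ref = "MKTAYIAKQR"                                 # toy aligned fragments
        seqs = ["MKTAYIAKQR", "MKSAYLAKQR", "GW-CFNPDES"]
        print(filter_family(ref, seqs))                    # the divergent third sequence is dropped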

  273. Collin,

    Well played, sir. Well played. 🙂

    Thank heavens I referred to my pre-specification @183

  274. gpuccio: Here is the rugged landscape paper. It is very interesting.

    A couple of interesting points from the paper. They could evolve esterase from completely random libraries, meaning that functional proteins are fairly common in sequence space. The landscape is smooth from minimal function up to about 40%. Increasing function much further will probably require recombination, which they did not test. This latter point is consistent with findings from evolutionary computation.

  275. Collin said: “I assume in your example, if it were an algorithm, it would choose words randomly from a list of words.”

    That’s what I’d like gpuccio to figure out and demonstrate with his method: whether my example was randomly generated by a computer algorithm or designed by a conscious being.

    “Obviously the words themselves are english words that have individual meanings.”

    Yes, to people who understand the meanings of English words. However, gpuccio’s claims go well beyond that.

    “As an aside, gpuccio’s method is supposed to not have any false positives, but it may have a lot of false negatives.”

    That’s what he claims, but it’s easy to claim that when the examples used are English text that is obviously designed or already known to be designed. Please keep in mind that when he claims there is dFSCI “in” ATP synthase or anything else, and that it can be calculated, measured, or computed (all, or just one?), he’s actually basing that on the alleged dFSCI in an English-language and numerical labeling and/or description of those things. That might be okay if his claims about dFSCI helped scientists to understand ATP synthase or anything else, but I, and obviously many others, don’t see how his claims and the claims of other IDists help.

  276. Reality at #264 (reposting #202):

    All I see is you claiming that some English text that is obviously designed or already known to be designed is designed.

    From my post #228:

    You say:

    “All I see is you claiming that some English text that is obviously designed or already known to be designed is designed. ”

    Ah! But this is exactly the point.

    a) “That is obviously designed” is correct, but the correct scientific question is: why is that obvious? And is that “obvious” reliable? My procedure answers that question, and identifies design with 100% specificity.

    b) “or already known to be designed” is simply wrong. My procedure does not depend in any way on independent knowledge that the object is designed: we use independent knowledge as a confirmation in the testing of the procedure, as the “gold standard” to build the 2x2 table for the computation of specificity and sensitivity (or any other derived parameter).
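
    As a concrete illustration of that gold-standard testing step, here is a minimal sketch of the 2x2 table and the two derived parameters. The counts below are made up purely for illustration.

        # Cross-tabulate the design inferences against independently known origins.
        def two_by_two(inferences, truths):
            tp = sum(i and t for i, t in zip(inferences, truths))          # true positives
            fp = sum(i and not t for i, t in zip(inferences, truths))      # false positives
            fn = sum(not i and t for i, t in zip(inferences, truths))      # false negatives
            tn = sum(not i and not t for i, t in zip(inferences, truths))  # true negatives
            sensitivity = tp / (tp + fn)   # designed objects correctly called designed
            specificity = tn / (tn + fp)   # non-designed objects never called designed
            return sensitivity, specificity

        # 100% specificity with some false negatives: designed objects may be
        # missed, but nothing non-designed is ever called designed.
        inferred = [True, True, False, False, False]
        known    = [True, True, True,  False, False]
        print(two_by_two(inferred, known))  # (0.666..., 1.0)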

    Please, see also this from my post #37:

    “Me_Think at #644:

    “gpuccio explained that dFSCI doesn’t detect design, only confirms if a design is real design or apparent design.”

    I don’t understand what you mean.

    dFSCI is essential to distinguish between true design and apparent design, therefore it is an essential part of scientific design detection.

    If you are not able to distinguish between true design and apparent design, you are making no design detection: you are only recognizing the appearance of design, which is not a scientific procedure because it has a lot of false positives and a lot of false negatives. So, mere recognition of the appearance of design is not scientific design detection.

    On the contrary, dFSCI eliminates the false positives, and design detection becomes a scientific reality. Therefore, dFSCI is an essential part of scientific design detection.”

    Then you ask:

    How does that demonstrate that IDists can calculate, measure, or compute (Which is the correct term?) CSI, dFSCI, FSCO/I, or FIASCO, and can verify the intelligent design in or of things that are not obviously designed and not known to be designed?

    Please see the previous point. I have demonstrated that I can compute dFSCI for a piece of English language. That was exactly the purpose of the OP. I have already explained ad nauseam that the purpose of design detection is not to “verify the intelligent design in things that are not obviously designed”, but to scientifically confirm the design origin of things which appear designed and are designed, distinguishing them from other things which appear designed but are not designed. Recognizing some form of design that is not obvious is rather a problem of design recognition. After a pattern which evokes design is recognized, it is the task of design detection to measure the complexity linked to the pattern, to ascertain whether it is real design or apparent design.

    All the applications of my procedure are meant for objects which are “not known to be designed”. The procedure is applied to the object without any direct knowledge of its origin (except for the definition of the system and the time span). So, all the objects to which the procedure is applied could in principle be either designed or not designed.

    After the application of the procedure, an inference is made. If the origin can be independently known, it is used as a gold standard to test the inference. This test allows us to verify that the procedure has no false positives when applied to designed artifacts, including language.

    When applied to objects whose origin cannot be independently assessed, as in the case of biological objects, it is applied as an inference by analogy, based on how well the procedure works for known artifacts.

    It’s really strange that such simple concepts are so difficult to understand for some, even if I have repeated them many times in this same thread.

    Then you say:

    And how does what you’re doing establish that CSI, and dFSCI, and FSCO/I are anything other than superficial labels?

    Is that even a question? I have said what I had to say. You are free to decide if it has meaning, or if it is only a lot of superficial labels.

    Then you ask:

    In regard to English text, what can you tell me about the text below? Is it a sonnet, or what?

    It is a text made of English words, in non-rhymed verses. But it is not a sonnet.

    Then you ask:

    Does it have meaning? Does it have good meaning? If it has meaning or good meaning, what is it?

    From my post #228:

    Regarding your poetry, it is rather simple.

    The piece obviously has no good meaning in English. Therefore, we cannot use that specification for it.

    Then you ask:

    Was it generated by a conscious being, or by an algorithm? How much CSI, and dFSCI, and FSCO/I does it have?

    From my post #228:

    It is equally obvious that it is made of correct English words. So, it is certainly part of the subset of strings which are made of English words. That is exactly the subset for which I have computed functional information in my OP. As the result was (in the Roy-amended form, for 500,000 English words) 673 bits, we can safely exclude a random origin. (Emphasis added; I will add that the text is much longer than 600 characters, therefore its lower threshold of functional information is much higher. For your satisfaction, I have computed it: 1787 characters; 2009 bits; again assuming 500,000 English words.)
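
    For readers who want to check the arithmetic, here is a minimal sketch of that computation, following the method described in the OP: a 30-character alphabet, an average word length of 5, and a dictionary of 500,000 words. Small differences from the quoted figures come only from rounding.

        import math

        def functional_info_bits(n_chars, alphabet=30, n_words=500_000, avg_len=5):
            # lower bound for the specification 'n_chars of text made of English words'
            search_bits = n_chars * math.log2(alphabet)   # 30^n_chars possible sequences
            k = round(n_chars / avg_len)                  # approximate word count
            # target-space upper bound: combinations with repetition of k words
            # out of n_words, times the k! orderings of each combination
            target_bits = (math.log2(math.comb(n_words + k - 1, k))
                           + math.log2(math.factorial(k)))
            return search_bits - target_bits

        print(functional_info_bits(600))   # ~672 bits, close to the 673 quoted above
        print(functional_info_bits(1787))  # ~2010 bits, close to the 2009 quoted above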

    So, the question is: can this result be the outcome of an algorithm?

    The answer is: yes, but not by any natural algorithm, and not by an algorithm simpler than the observed result. IOWs, the only possible source is a designed algorithm which is more complex than the observed sequence. Therefore the Kolmogorov Complexity of the string cannot be lowered by any algorithm.

    How can I say that? It’s easy. Any algorithm which builds sentences made of correct English words must use as an oracle at least a dictionary of English words. Which, in itself, is more complex than the poem you presented.

    Moreover, we can certainly make a further specification (DNA_Jock: is that again painting new targets which were not there before? Is that making the probability arbitrarily small?). Why? Because the poem has an acceptable structure in non-rhymed verses. That would certainly increase the functional complexity of the string, but the algorithm would also be more complex (maybe with some advantage for the algorithm here, because after all the verse length is very easy to check algorithmically). However, the algorithm is always much more complex than the observed result, if only because of the oracle it needs.

    So, the conclusion is easy: the poem is certainly designed, either directly or through a designed algorithm.

    Many of these objections arise from the simple fact that you always ignore one of the basic points of my procedure, indeed the first step of it. See my post #15:

    “a) I observe an object, which has its origin in a system and in a certain time span.”

    So, in the end, the question about the algorithms can be formulated as follows:

    “Are we aware of any explicit algorithm which can explain the functional configuration we observe, and which could be available in the system and the time span?”

    So, if your system includes a complex designed algorithm to generate strings made of English words from a dictionary oracle, then the result we observe can be explained without any further input of functional information by a conscious designer.

  277. Adapa:

    “Since the method of calculating “dFSCI” only works for items already known to be designed, claiming “no false positives in detecting design” is completely worthless.”

    Should I call that simply a lie? OK, let’s say that it is false.

  278. PaV,

    I just realized that I forgot to address your question :

    “So, DNA_jock, let us ask you directly: do you, or do you not, believe that Nature itself is the “source” of the information found in ATPase?”

    I am not sure what you mean, probably because I am somewhat conflicted about what constitutes “information” in biology.

    I am quite confident that the designation “ATP synthase” is of human origin. When first discovered, the activity was referred to as an “ATPase”, because it was detected for its ability to catalyze the reverse reaction. Enzymes do what they do, under different conditions, irrespective of how humans describe or classify them.

    EcoRI cuts DNA at the sequence GAATTC. Lower the salt concentration, and now it will also cut the sequence AATT, mimicking Tsp509I. We call it EcoRI-star activity, but it’s just what the enzyme does. We have to be careful not to reify our description of what we have observed.

    I could argue that UGG does not “code” for tryptophan, it is merely a compound that, under the right circumstances, leads to the incorporation of tryptophan into a growing polypeptide. But, to be perfectly honest, I don’t think of it that way: I do in fact think of an mRNA as carrying “information”. But I might be guilty of reification too. The concept “information” is, I believe, extremely slippery.
    Sorry if this is rambling and potentially not what you were asking about.

  279. Adapa:

    “Because gpuccio told us he needs that info. In his language example he can’t calculate the dFSCI unless he knows the string is an intelligible English phrase. If you give him symbols in a language he can’t understand (e.g. Chinese characters) he can’t calculate dFSCI. Again, that makes his test pretty worthless for design detection.”

    As explained many times, the purpose is to distinguish true design from apparent design, not to recognize hidden design which is not apparent. You are obviously confused.

  280. gpuccio

    Should I call that simply a lie? OK, let’s say that it is false.

    Should we call this whole scientifically worthless dFSCI goat rope a desperate attempt by a Creationist to prove to himself his God created everything?

    OK, let’s just say that seems to be the case.

  281. Reality:

    “That’s what I’d like gpuccio to figure out and demonstrate with his method; whether my example is randomly generated by a computer algorithm or if it’s designed by a conscious being.”

    As I have already said, that text can be generated either by a conscious being directly, or by a conscious being indirectly, through a designed algorithm. It is impossible to distinguish the two things. However, the text is designed in both cases. Only a designed algorithm, more complex than the text itself, can output it.

  282. Is the ignorance merely feigned, or is it real?

  283. Zachriel:

    Shakespeare had an extensive dictionary, knowledge of grammar, rhyme, scansion, and verse structure; not to mention an understanding of what people enjoy, and of the human condition.

    Can you quantify the amount of additional ‘information’ in a Shakespearean sonnet that is not found in the background knowledge?

    Why? How are they relevant to the engineering problem?

  284. I repeat. Dawkins claimed to be able to detect design at “METHINKS IT IS LIKE A WEASEL.”

    Perhaps keiths can tell us how Dawkins measured that.

  285.

    Again, gpuccio, thanks for the great thread. This is an example of how interesting ID discussions can be.

    Zach said,

    A random sequence is original by that definition, and even harder to duplicate.

    I say,

    I agree. Of course my entire endeavor depends on the human ability to eliminate random sequences.

    If you eliminate the random parts and the parts of the string that can be algorithmically produced,

    you are left with “CSI”.

    peace

  286. DNA_Jock:

    Thanks for both responses. I won’t have time to respond fully until tomorrow PM. But, first, thank you for your engaging style—not the lambasting, antagonistic name-calling we’re used to (actually, your tone is much, much better than that). Secondly, thanks for the honest answer you gave to the subject of information and Nature’s role in that.

    Quickly, here is something to consider: though biologists might be unsure as to the “exact” function of a protein/enzyme like EcoRI, that it has at least ONE function constitutes, in my view of things, “specification.” That is, you have a string of nucleotides that can be transcribed and translated into a protein that is able to interact with other molecules in a determined and precise fashion(s).

    When dealing with protein families, what we’re talking about is like saying that we know that humans spread from Europe to England, and so we can also conclude that humans spread from the west coast of Africa to South America. You can swim the English channel, but you’ll likely die trying to cross the Atlantic.

  287.

    gpuccio said

    Only a designed algorithm, more complex than the text itself, can output it.

    I say,

    Bingo

  288. Mung: Why? How are they relevant to the engineering problem?

    They’re essential to the calculation. If Shakespeare just reworked a few things, then he didn’t add as much information as if he had created the sonnet ex nihilo.

    fifthmonarchyman: Of course my entire endeavor depends on on the human ability to eliminate random sequences.

    Humans are not good at recognizing randomness.

    fifthmonarchyman: If you eliminate the random parts and the parts of the string that can be algorithmically produced, you are left with “CSI”

    If you define CSI to exclude algorithms, as you just did, then algorithms can’t create CSI, of course.

    fifthmonarchyman: By objective I mean that my standard is exactly the same for different objects.

    By the way, that’s not what is meant by objective.

  289.

    Zac said.

    Humans are not good at recognizing randomness.

    I say,

    In fact, the paper I linked demonstrates that humans are quite good at it.

    Zac said,

    If you define CSI to exclude algorithms, as you just did, then algorithms can’t create CSI, of course.

    I say,

    Did you catch the T-shirt equation?

    CSI = NCF

    Complex Specified Information is not computable.

    You say,
    that’s not what is meant by objective.

    I say

    Actually, what is objective is the number. A CSI value of X yields a design inference whether it is found in sonnets or in proteins.

    You may want more CSI than I do to make the inference, but the value itself is objective.

    P.S.

    Don’t be offended if I don’t respond to you as much as you would like. As you know from our history, you frustrate me greatly at times, and I don’t want to spoil my overall experience on this thread.

    peace

  290. keiths:

    All of the useful work is done by steps 1 and 2:

    1. Look at a comment longer than 600 characters.
    2. If you recognize it as meaningful English, conclude that it must be designed.

    The calculation adds nothing.

    PaV:

    This is not a serious answer. I pointed out to you the importance and purpose of step #3 in Procedure 2. You’re willfully ignoring it.

    Are you referring to this?

    The whole point of gpuccio’s “procedure” is to compare the recognition of “design” that is naturally made with the use of a particular language, and the values that are generated using dFSCI. Shouldn’t that be clear to you?

    We already know that 600-character posts or sonnets are not formed by pure random variation with no selection.

    The dFSCI number simply confirms that obvious fact, using a calculation that was developed and understood long before gpuccio was born.

    Even gpuccio admits this:

    keiths:

    Thus, your contribution was nothing more than inventing an acronym for an old and well-known probability calculation.

    gpuccio:

    I am perfectly fine with that.

    The dFSCI calculation answers a question that no one is asking. It adds nothing.

  291. Pav @287:
    Quickly, here is something to consider: though biologists might be unsure as to the “exact” function of a protein/enzyme like EcoRI, that it has at least ONE function constitutes, in my view of things, “specification.” That is, you have a string of nucleotides that can be transcribed and translated into a protein that is able to interact with other molecules in a determined and precise fashion(s).

    I think you are still in danger of reifying what is merely a useful handle that we attach to a given protein. So I would rearrange your sentence to read: “that it has, in my view of things, at least ONE function constitutes ‘specification.’” That is, there’s no specification without a specifier. I might make an exception for pi and e.
    And the idea that proteins interact with other molecules in a “determined and precise fashion” is something of a human construction too.

    When dealing with protein families, what we’re talking about is like saying that we know that humans spread from Europe to England, and so we can also conclude that humans spread from the west coast of Africa to South America. You can swim the English channel, but you’ll likely die trying to cross the Atlantic.

    Ironic that you used this analogy for protein families; humans DID spread from the west coast of Africa to South America. But they didn’t take the ‘direct route’. This is “Axe’s mistake” in “The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway”.

  292. dFSCI answers a question science is asking, keith s. And saying something is intelligently designed adds a great deal.

    It also appears that you have no idea how natural selection works. How convenient.

  293. Mung @ 285

    I repeat. Dawkins claimed to be able to detect design at “METHINKS IT IS LIKE A WEASEL.”

    He wrote a program to generate the sentence from alphabet letters and a space. It took about 40 generations to get the sentence by the ‘Natural Selection’ algorithm.
    He also noted that the “experiment is not intended to show how real evolution works, but does illustrate the advantages gained by a selection mechanism in an evolutionary process”. He didn’t say his program detects design.

  294. Joe

    dFSCI answers a question science is asking, keith s

    Unfortunately the question is “What hopelessly vague and subjective ‘alphabet soup’ of a useless metric will the ID crowd dream up next?”

  295. Adapa- Ever find that alleged evolutionary theory? 😛

    BTW biological information was Crick’s idea. Science determined that it is both complex and specified.

  296. Me Think- weasel had nothing to do with natural selection.

  297. Buried in the middle of KF’s latest:

    11 –> I know, you and TSZ generally wish to fixate on debating log [p(T|h)] — note the consistent omission in your discussions that we are looking at a log-probability metric, i.e. an informational one…

    KF,

    That makes no difference, as you full well know or should know. 🙂 You can apply the log in one direction, and the antilog in the other. It’s the same information, just expressed differently.

    (and relevant probabilistic hyps as opposed to any and every one that can be dreamed of or suggested…

    Your P(T|H) doesn’t account for anything other than pure random variation. By omitting selection, you make your number useless and irrelevant for answering the question, “Was this designed?”
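
    The log/antilog point is just the standard conversion between a probability and its information measure, as a quick sketch shows:

        import math

        p = 2 ** -500               # a probability bound on the chance hypothesis
        bits = -math.log2(p)        # the same claim as an information value: 500.0
        assert 2 ** -bits == p      # the antilog recovers the original probability
        print(bits)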

  298. keith s- at what steps does selection step in? How does it possibly make a difference seeing that it only eliminates the less fit?

    What you need to do is demonstrate that natural selection is being omitted and that it makes a difference. Otherwise your words are meaningless, as usual.

    Good luck with that

  299. Reality check time:

    A Google Scholar search of the mainstream scientific literature for the last 10 years returns:

    ZERO scientific papers using “dFSCI”

    ZERO scientific papers using “FSCO/I”

    ONE scientific paper using “complex specified information” (CSI is too common an acronym) and that was Elsberry and Shallit’s disemboweling of Dembski’s popular-press published claims.

    Looks like the dFSCI FSCO/I CSI alphabet soup is sure making a huge impact on the scientific world. 🙂

  300. Joe @ 293

    dFSCI answers a question science is asking, keith s. And saying something is intelligently designed adds a great deal.

    I don’t know about dFSCI for language design, but in the real world, languages are detected using techniques like checking for a Zipf distribution, clustering low-entropy words, and degree of local specificity. These were used to confirm that the Voynich manuscript is a structured written language, not gibberish made for fun, and not a hoax.
    I checked the sonnet’s Zipf distribution and found the fitted exponent (rho) to be 1.16834.
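
    For the curious, here is a minimal sketch of that Zipf check. Whitespace tokenization, a plain least-squares fit on the log-log rank-frequency data, and the input file name are all assumptions for illustration; Me_Think’s actual tool is not specified.

        from collections import Counter
        import math

        def zipf_exponent(text):
            # Zipf's law predicts frequency ~ rank^(-rho); estimate rho as the
            # negated slope of log(frequency) versus log(rank).
            freqs = sorted(Counter(text.lower().split()).values(), reverse=True)
            xs = [math.log(r) for r in range(1, len(freqs) + 1)]
            ys = [math.log(f) for f in freqs]
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                     / sum((x - mx) ** 2 for x in xs))
            return -slope

        print(zipf_exponent(open("sonnets.txt").read()))  # hypothetical input file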

  301. I said: “That’s what I’d like gpuccio to figure out and demonstrate with his method; whether my example is randomly generated by a computer algorithm or if it’s designed by a conscious being.”

    gpuccio said: “As I have already said, that text can be generated both by a conscious being directly, or by a conscious being indirectly, through a designed algorithm. It is impossible to distinguish the two things. However, the text id designed in both cases. Only a designed algorithm, more complex than the text itself, can output it.”

    First, in your 228 and 277 comments you appear to be mixing up and responding to Me_Think and me in an incoherent way. When I first read your 228 comment I stopped reading at the point where you quoted Me_Think because I was looking for your response to me.

    Now that I have read 228 and 277 I’ll say this: You’re playing games, and you destroyed your own arguments. The games you’re playing include, but are not limited to, the way you bounce around with the word “algorithm”. One minute it’s something completely opposite to intentional, specific, intelligent design by a conscious being, but the next minute all algorithms and the results thereof are non-random, intentional, and intelligently designed because they’re part of a system that is non-randomly, intentionally, and intelligently designed by conscious beings (humans). This statement (and others) of yours confirms what I’m saying about the times you differentiate algorithms and their output from intentional, specific, intelligent design:

    “The only thing I want is to infer that original sonnets are generated by conscious beings, and not by algorithms.”

    You’ll likely come along and try to wiggle out by playing another game with the word “original” or the definition of “sonnets”, but don’t bother.

    When challenged or questioned about your claims you also conveniently attach the words “natural”, “complex”, “explicit”, or “complex designed” to “algorithm” just to confuse things.

    Now, you say: “Regarding your poetry, it is rather simple. The piece obviously has no good meaning in English. Therefore, we cannot use that specification for it.”

    How do you know that it has “no good meaning in English”? For all you know it could have plenty of “good meaning in English” to someone. Keep this statement of yours in mind: “And it’s not that I test the sonnet for being a sonnet. I observe that it is a sonnet, and I test how likely it is to have a sonnet of that length in a random system.” You say that you don’t test the sonnet for being a sonnet, but you “observe” and obviously judge whether something is a sonnet or not (and original, complex, or otherwise) by whether it has “good meaning in English” to you or not. You even said in regard to the text I posted: “It is a text made with English words, in non rhymed verses. But it is not a sonnet.”

    Also in that statement of yours you use the term “random system” which in the context of this debate is the same thing as an algorithm that generates random characters or text (including sonnets or sonnet-like text). Of course you try to confuse the issues by also claiming that the algorithms being discussed do not generate anything random because they’re intentionally, specifically, and intelligently designed by conscious beings.

    You say that the text I posted contains 2009 bits (of functional information?). According to the “ID” arbitrary boundary of 500 bits, the text therefore contains plenty of CSI-dFSCI-FSCO/I to be labeled as intelligently designed, no matter what it means, if anything.

    You also say: “So, the conclusion is easy: the poem is certainly designed, either directly or through a designed algorithm.” There you go again playing a game with your ever changing labeling of algorithms. OF COURSE any computer algorithm that generates characters or text, whether random or otherwise, is designed but the output is NOT necessarily designed. That’s why the output from a random generator (an algorithm) is called random.

    Notice this in your conclusion: “…the conclusion is easy: the poem is certainly designed…”, even though upthread you said this: “And I will never infer design for a sequence which is the result of a random character generator.”

    Well, guess what? The text ‘sequence’ I posted is the output of multiple random text generators that are called sonnet generators. What was that you said about no false positives?

    You also need to rethink this bold statement of yours: “My statement has always been simple: the procedure works empirically, as it is, with 100% specificity. In spite of all your attempts to prove differently.”

  302. Joe said: “And saying something is intelligently designed adds a great deal.”

    Like what? Allah-did-it?

  303. Adapa & Reality: Descriptive terms linked to observables and related analyses and abbreviations do not gain their credibility or substance from appeals to authority. Deal with the substance, and in the case of the relevant general matter, functionally specific complex organisation wherein functionality arises through correct arrangement and coupling of component parts per a wiring diagram (which is informational), that is a commonplace of a technological era. It even applies to the symbol strings we use to communicate textually: S-T-R-I-N-G . . . That’s the real reality. KF

  304. Adapa, until you come to a first functional configuration of organised components, you are in no position to deal with hill-climbing by differential reproductive success leading to culling. So, the problem is to cross the sea of non-functional configs (starting at molecular and cellular levels) to reach zones where reproduction of relevant body plans is possible. Starting with the first one, OOL. This case has the added value of requiring accounting for the von Neumann self replicator instantiated in the living cell. Such phenomena are FSCO/I rich. Blind watchmaker mechanisms have zero track record or prospective success of creating FSCO/I starting with Darwin’s warm little pond or the like. FSCO/I is routinely produced by intelligently directed configuration, to the point where we are inductively justified in concluding it is a reliable sign of such design. That puts design at the table from OOL on, never mind what evo mat ideologues in lab coats and their fellow travellers want to decree. KF

  305. fifthmonarchyman:

    Your interventions are really interesting. Please, keep us updated about your ideas and work. 🙂

  306. keith s:

    “I am perfectly fine with that.”

    Just to be clear: when I say that, I am agreeing with your estimate of my very small personal role, certainly not with your estimate of the ideas that I express, which are mostly not mine.

    I am perfectly responsible, of course, for how I express them, for the good and for the bad.

  307. Me_Think:

    “He wrote a program to generate the sentence from alphabet letters and a space. It took about 40 generations to get the sentence by the ‘Natural Selection’ algorithm.”

    Let me understand. So, the phrase “Methinks it’s like a weasel” was not in the algorithm, and came about by natural selection? Really interesting! Can you confirm that?

  308. Me_Think at #301:

    Those techniques are perfectly valid. They are procedures of “language structure recognition”, and are derived from what we know of linguistic structures. But they are not techniques of “design detection”, in the sense we are discussing here.

    I will be more clear. Let’s say that I have a piece of text which I don’t understand, but which could have some meaning in some unknown language. Like the Voynich manuscript. Then I can apply those procedures, and get the result that it has a recognizable language structure.

    OK. Now I can use that fact as a specification. I am at the same point where I am when I recognize that a piece of text has good meaning in English; only my specification is different. Now it is “having a language structure according to the procedure I used”. I cannot use meaning to specify the text, because I don’t understand what it means; indeed, I am not even sure that it has a meaning. It could well have a language structure, and not a meaning.

    However, I still have the problem of design detection. As we have said, having a specification is not enough to infer design. We have to compute the dFSCI linked to that specification. So, we have to ask: how much specific information is necessary to have a piece of text of that length which can be recognized as structured language by the procedure I adopted? And we have to compute the target space and the search space.

    IOWs, we have to make a computation like the one I did for meaningful text in English. I have not done that, and I have no reason to do it.

    It is rather obvious that, for the Voynich manuscript, that computation will allow a design inference. Why? Because it is a very long text, and any specific structure that can test positive in a detection procedure of that kind (of which I know nothing in detail) should be more than enough to exclude a random origin. But, again, I have not tried any specific computation here.

    So, I cannot even exclude a non-designed algorithmic origin, because I don’t know which regularities are checked by the procedure. However, if those regularities are derived from real languages, a non-designed algorithmic origin is really unlikely. I will not say anything more about a scenario that I cannot analyze in detail.

    The point is: a function/meaning specification, be it obvious or not, is never enough for a design inference. We always need a formal analysis of the complexity linked to that specification.

  309. LoL! Reality doesn’t understand the importance of determining something was intelligently designed! Reality must think that archaeology, forensic science and SETI are all wastes of time.

  310. Adapa- How many peer-reviewed papers use the blind watchmaker thesis?

  311. GP:

    I am astonished that people are still promoting Weasel and kin. Let me clip my remarks at IOSE:

    __________

    >> vi: At this point, it is common for some to suggest that Dawkins’ “Mt Improbable” can be climbed by the easy back-slope, step by step to the peak, as chance variations that give an increase in performance are rewarded with advantages that allow them to become the next stage of progress. And, of course, the “methinks it is like a weasel” example shows how a string of 28 random characters can, after maybe 40 – 60 generations, become the target phrase. For instance, in his best-selling The Blind Watchmaker (1986), pp. 48 ff. Dawkins published the following computer simulation “run”:

    1 WDL*MNLT*DTJBKWIRZREZLMQCO*P
    2 WDLTMNLT*DTJBSWIRZREZLMQCO*P
    10 MDLDMNLS*ITJISWHRZREZ*MECS*P
    20 MELDINLS*IT*ISWPRKE*Z*WECSEL
    30 METHINGS*IT*ISWLIKE*B*WECSEL
    40 METHINKS*IT*IS*LIKE*I*WEASEL
    43 METHINKS*IT*IS*LIKE*A*WEASEL

    vii: What is not so commonplace is to see an admission of the implications of the stunning admission Dawkins had to make even as he presented the Weasel phrase “example” of the power of so-called “cumulative selection,” even when the caveats are cited:

    I don’t know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. [[NB: cf. Wikipedia on the Infinite Monkeys theorem here, to see how unfortunately misleading this example is.] Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence ‘Methinks it is like a weasel’, and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence? . . . .

    It . . . begins by choosing a random sequence of 28 letters … it duplicates it repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying. The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . . What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed . . . .

    Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective ‘breeding’, the mutant ‘progeny’ phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn’t like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [[TBW, Ch 3, as cited by Wikipedia, various emphases, highlights and colours added.]

    viii: In short, here cumulative selection “works” by rewarding non-functional phrases that happen to be closer to the already known target. This is the very opposite of natural selection on already present difference in function. Dawkins’ weasel is not a good model of what evolution is supposed to do.

    ix: At most, it illustrates that once we are already on an island of function, chance variation and differences in reproductive success may lead to specialisation to fit particular niches. Which is accepted by all, including modern Young Earth Creationists. And, more sophisticated genetic algorithms have very similar failings. For, (a) they implicitly start within an island of function, that (b) has a predominantly smoothly rising slope that gently leads to peaks of performance so that “hill-climbing” on “warmer/colder” signals will usually get you pointed the right way.

    x: In short, GA’s do not only start on the shores of an island of function, but also the adaptation targets are implicitly pre-loaded into the program [[even in cases where they are allowed to wiggle about a bit] and so are the “hill-climbing algorithm” means to climb up to them. This point has been highlighted by famed mathematician Gregory Chaitin, in a recent paper, Life as Evolving Software (Sept. 7, 2011):

    . . . we present an information-theoretic analysis of Darwin’s theory of evolution, modeled as a hill-climbing algorithm on a fitness landscape. Our space of possible organisms consists of computer programs, which are subjected to random mutations. We study the random walk of increasing fitness made by a single mutating organism. [[p.1]

    xi: Plainly, this more sophisticated approach is a model of optimising adaptation by generic hill-climbing, within an island of function; i.e. this is at best a model of micro-evolution within a body plan, not origin of such complex, integrated body plans.

    xii: So, while engineers — classic intelligent designers! — may well find such algorithms quite useful in some cases of optimisation and system design, they fail the red-herring- strawman test when they are presented as models of microbe to man evolution.

    xiii: For, they do not answer to the real challenge posed by the design theorists: how to get to an island of complex function — i.e. to a new body plan that for first life would require something like 100,000 base pairs of DNA and associated molecular machinery, and for other body plans from trees to bees, bats, birds, snakes, worms and us, at least 10 million bases, dozens of times over — without intelligent direction.

    xiv: Instead, we can present a key fact, one that Weasel actually inadvertently demonstrates. That is: in EVERY instance of such a case of CSI, E from such a zone of interest or island of function, T, where we directly know the cause by experience or observation, it originates by similar intelligent design. And, given the long odds involved to get such an E by pure chance — you cannot have a hill-climbing success amplifier until you first have functional success! — that is no surprise at all. >>
    ___________

    Corrections have been on record for many years. (Note also that, if one examines the printed cases released by Dawkins, whenever a letter becomes correct, it never reverts. Conveniently, the original code is not available. This phenomenon can be duplicated by creating code that mimics what Dawkins claims, and choosing “good” examples. This speaks to likely side tracks that evade the main issue already documented above.)

    The bottomline is simple: admissions that reveal the fallacy of irrelevance were right there in TBW right from the beginning, decades ago.

    The resort to such at this late date is a mark of patent desperation.

    KF
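
    For reference, the cumulative-selection procedure Dawkins describes fits in a few lines. The sketch below is an assumption-laden reconstruction, not the original code (which, as noted above, was never released): the population size and mutation rate are illustrative guesses. It does not explicitly latch correct letters, but since the best string is kept each generation the match score never decreases, which makes reversions rare in printed output.

        import random

        TARGET = "METHINKS IT IS LIKE A WEASEL"
        ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

        def score(s):
            # closeness to the already-known target phrase
            return sum(a == b for a, b in zip(s, TARGET))

        def weasel(pop_size=100, mut_rate=0.05, seed=None):
            rng = random.Random(seed)
            parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
            generation = 0
            while parent != TARGET:
                generation += 1
                children = [
                    "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                            for c in parent)
                    for _ in range(pop_size)
                ]
                # cumulative selection: keep whichever string best matches the
                # target (the parent survives if no child beats it)
                parent = max(children + [parent], key=score)
            return generation

        print(weasel())  # typically converges in a few dozen generations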

  312. Reality at #302:

    You are really confused. I am sure you are in good faith, but you are really confused.

    If I have not been clear enough, and that is in part a cause of your confusion, I apologize. But believe me, I am trying my best to be clear enough.

    I will try again.

    The elimination of an algorithmic origin refers, as I have said many times, to any algorithm which could be available in the system or the time span we are considering. In general it refers to non designed algorithms, if our purpose is to exclude design completely. Or we can choose to accept some algorithms which are already part of the system, even if they are or could be designed, if our purpose is only to check if additional design was necessary to generate the output we observe.

    I will be more clear. Let’s say that our system is our planet, and the time span its whole existence of about 4 billion years. What we want to analyze is whether all the life forms we observe today could be generated by non-design mechanisms, assuming a planet without any life at the beginning, and the time span available. In this case, we can only consider algorithms available in the original scenario, or which could have become available after that, always by non-design mechanisms. That means that, before using NS as a possible mechanism, we have to explain how living beings which reproduce, or some equivalent thing, originated by non-design mechanisms. IOWs, we have to explain OOL before we can use life to explain the further evolution of species.

    But we can also accept original life in some form, like prokaryotes, as a given, and ask whether what happens after that, the evolution of biological information, can be explained by non-design mechanisms. This is a perfectly correct question. In this case, we are no longer asking, at least for the moment, whether original life (prokaryotes, in particular, in this example) could originate by non-design mechanisms: we just take it as part of the original system, and we can use it, and its algorithms of reproduction, as a part of our explanation of what happens after that. IOWs, we can use NS, and see if it helps.

    Is that clear? Please, check what I wrote to you in post #228:

    Many of these objections arise from the simple fact that you always ignore one of the basic points of my procedure, indeed the first step of it. See my post #15:

    “a) I observe an object, which has its origin in a system and in a certain time span.”

    So, in the end, the question about the algorithms can be formulated as follows:

    “Are we aware of any explicit algorithm which can explain the functional configuration we observe, and which could be available in the system and the time span?”

    So, if your system includes a complex designed algorithm to generate strings made of English words from a dictionary oracle, then the result we observe can be explained without any further input of functional information by a conscious designer.

    What is not clear in that?

    More about algorithms.

    I can always make a designed algorithm which can output dFSCI, if I put enough complexity in the algorithm. Even original dFSCI, if I put that original dFSCI in the algorithm.

    I can write an original piece of free verses, for example:

    “How wonderful, exciting and frustrating
    at the same time
    it is to comment at uncommon descent!”

    which is probably absolutely original as a piece of poetry (not so good, however!), and then write a simple algorithm which outputs it as printed text. OK, and so?

    My algorithm is designed, but above all the original dFSCI in it was designed by me.

    The weasel algorithm is something like that, only the phrase is not original, but it is in the algorithm. The algorithm could have simply given it as output. Instead, it tries to arrive at it through RV and intelligent selection based on previous knowledge of the phrase itself. And guess what? It succeeds! Ah, the wonders of the darwinist mind.

    You say:

    “You say that the text I posted contains 2009 bits (of functional information?). According to the “ID” arbitrary boundary of 500 bits the text therefor contains plenty of CSI-dFSCI-FSCO/I to be labeled as intelligently designed no matter what it means, if anything.”

    Yes, 2009 bits of functional information linked to the specification “a piece of text made with English words” and the length of the text. That’s how I have made the computation. And yes, I infer that it is designed. Either directly or through a designed algorithm which includes as an oracle the English dictionary, and therefore is more complex than the text itself. Why do I consider the algorithm? It’s simple. The text has no meaning, as a whole, in English. The single words, instead, are correct, therefore have a meaning as words.

    IOWs, the text can be considered as a list of English words. (I am not considering for the moment the structure in non rhymed verses, which however is very easy to obtain algorithmically.) A list of objects, a random list of objects, is very easy to obtain with a simple algorithm, if the algorithm has a list of those objects and simply selects randomly some of them. That’s why a simple algorithm with an English Dictionary (which is very complex) can easily do the trick.
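    (For concreteness, here is a minimal sketch of the kind of algorithm described above, in Python. The tiny word list is my own stand-in; a real run would load a full English lexicon. The point stands out in the code itself: the algorithm is trivial, and all the complexity lives in the dictionary it consults.)

        import random

        # Stand-in for "an English Dictionary (which is very complex)";
        # a real run would load a full English word list instead.
        DICTIONARY = ["which", "will", "thy", "noon", "silent", "thought",
                      "summer", "remembrance", "pleasure", "eye", "tears"]

        def word_salad(n_words=120):
            # Randomly select words with repetition: every word is valid
            # English, but the whole has no meaning as a text.
            return " ".join(random.choice(DICTIONARY) for _ in range(n_words))

        print(word_salad(12))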

    Now, a conscious being can do the same thing intentionally: I can just write a list of English words that I know. That would be directly designed, while the list generated by a computer which selects from a digital dictionary would be indirectly designed.

    So, that output is designed anyway. To be consistent with my original formulation, the final judgement is as follows:

    If your original system does not include a computer with a program which can generate a list of words from a digital dictionary of English, then the text is certainly directly designed by some conscious being.

    If your original system includes all that, then we cannot say: the text could be the output of that program, or it could be the result of a direct act of writing from a conscious designer. We cannot say, because there is no difference in the output itself which can guide us.

    The scenario is different if we have a text of that length which has good meaning in English. Then with that length, I would be sure that the text is directly designed. Why? Because we know well that no existing designed algorithm, at least at present, can generate a piece of text of that length which has good meaning. Unless the text is already in the algorithm itself. In that case, we have only a printing algorithm.

    I am really amazed at this statement of yours:

    “How do you know that it has “no good meaning in English”? For all you know it could have plenty of “good meaning in English” to someone. Keep this statement of yours in mind: “And it’s not that I test the sonnet for being a sonnet. I observe that it is a sonnet, and I test how likely it is to have a sonnet of that length in a random system.” You say that you don’t test the sonnet for being a sonnet, but you “observe” and obviously judge whether something is a sonnet or not (and original, complex, or otherwise) by whether it has “good meaning in English” to you or not. You even said in regard to the text I posted: “It is a text made with English words, in non rhymed verses. But it is not a sonnet.”

    How do I know that it has “no good meaning in English”? Are you serious? If it has meaning for you, please explain what that meaning is.

    “Which, will be thy noon: ah! Let makes up remembrance
    What silent thought itself so, for every one
    Eye an adjunct pleasure unit inconstant
    Stay makes summer’s distillation left me in tears”

    Meaning? Bah! Are you kidding? This is obviously a list either of single words or, more probably, of pieces of phrases. Meaning means that the whole piece of text conveys consistent information which evokes a clear cognitive experience in our mind. Not individual disjointed phrases which have obviously been taken from some pre-compiled list.

    The posts here have good meaning in English (if there are no errors or typos in them). Yes, even keith’s. 🙂

    You say:

    “You say that you don’t test the sonnet for being a sonnet, but you “observe” and obviously judge whether something is a sonnet or not”

    Yes, and so? A sonnet has specific formal characteristics, for example the number of verses. The text you gave is not a sonnet, and anybody can easily see it. It’s like observing that a blue object is not red. Are you really saying what you are saying?

    You say:

    “You even said in regard to the text I posted: “It is a text made with English words, in non rhymed verses. But it is not a sonnet.”

    Yes, I am culpable for that. I gave a correct description of what I was seeing. This is no inference or procedure. It’s simply a true observation. What is wrong with it?

    It is a text made with English words. Is that wrong?

    in non rhymed verses. They are verses. Not very good, not specific types of verses, but under any generic definition of verses they are verses. Did I miss the rhymes?

    But it is not a sonnet. It is not. Among other things, sonnets cannot be so long.

    So, what is the problem?

    And why do you sneak in, in parentheses: “(and original, complex, or otherwise)”?

    Those are other problems. I did not judge if it is original or not. The complexity I computed. That has nothing to do with what is simply observable (English words, no good meaning, not being a sonnet, and so on).

    Then you say:

    “Notice this in your conclusion: “…the conclusion is easy: the poem is certainly designed…”, even though upthread you said this: “And I will never infer design for a sequence which is the result of a random character generator.””

    And I maintain that. Maybe I can clarify a point which could confound you. When I speak of a random character generator, I am referring to an easy way to simulate a random system. Here we have not defined which random system could explain the origin of a text. In a sense, we are simulating a real problem. The meaning of “a random character generator” is a computer program which outputs random individual characters, exactly as it could happen in some natural random system whose outputs can be considered as characters. So, while I am perfectly aware that a computer program which generates random characters is a designed thing, I implied that it could be accepted as a convenient source of random strings. A random character generator, however, has no added information about the strings it generates: that’s why it is a random character generator.
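    (A minimal sketch of such a generator, in Python; the 30-character alphabet is an assumption for illustration. Note that nothing in the program encodes anything about which strings are meaningful.)

        import random
        import string

        ALPHABET = string.ascii_lowercase + " .,'"  # ~30 symbols, assumed

        def random_text(n=600):
            # Uniform, independent choice of each character: no added
            # information about the strings it generates.
            return "".join(random.choice(ALPHABET) for _ in range(n))

        print(random_text(60))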

    Instead, you conclude:

    “Well, guess what? The text ‘sequence’ I posted is the output of multiple random text generators that are called sonnet generators. What was that you said about no false positives?”

    This is really funny. Your “random text generator” contains the words, or more probably the phrases, that randomly compose the text you gave. All the information in the text (which, however, does not amount to a good meaning of the whole text) is already in the software. All the software does is to randomize the sequence of those pieces of information, and indeed that sequence is completely random, and that’s why the text as a whole has no meaning.

    I correctly inferred that the text was designed, either directly or indirectly by some software which was more complex than the text itself, and therefore designed. False positive? Why? It is a true positive. My inference is completely correct.

    Your “last word”:

    “You also need to rethink this bold statement of yours: “My statement has always been simple: the procedure works empirically, as it is, with 100% specificity. In spite of all your attempts to prove differently.””

    No. It remains bold, and it remains true.

  313. KF:

    Yes, the weasel is a die hard animal! 🙂

  314. Good points KF. I noticed that in Dawkins’s book as well.

    He mentions (briefly) that what follows is not a good example of evolution as he proposes it, and then proceeds to write page after page describing a program that demonstrates that if an intelligently designed algorithm is designed to evolve to a target, it can reach that target.

    He managed to get this little confidence trick past a lot of people, apparently. A shell game shuffle in prose.

  315.

    gpuccio said,

    I can always make a designed algorithm which can output dFSCI, if I put enough complexity in the algorithm. Even original dFSCI, if I put that original dFSCI in the algorithm.

    I say

    Exactly.

    When we say that algorithms are incapable of producing CSI, it is always assumed that cheating is not permitted.

    I find it truly amazing that that simple obvious fact has to be constantly repeated.

    Recall that in this very thread Keiths proposed an algorithm that simply printed out already existing sequences as a way to create false positives.

    peace

  316. gpuccio @ 308

    Let me understand. So, the phrase “Methinks it’s like a weasel” was not in the algorithm, and came about by natural selection? Really interesting! Can you confirm that?

    Of course not! ‘Natural Selection’ (note the quotes) is just a piece of code in his program which mimics selection. His aim was to show how the statement can be reached faster with a ‘Natural selection’ algorithm. You forgot to quote my comment in full. I clearly stated this:

    @294
    He also noted that the “experiment is not intended to show how real evolution works, but does illustrate the advantages gained by a selection mechanism in an evolutionary process”. He didn’t say his program detects design.

    You keep forgetting, evolution is NOT hunting for specific patterns

    kairosfocus @ 312
    I am not promoting it at all. Note my response to gp. No one in his right mind thinks Evolution is searching for a pre-specified pattern, least of all Dawkins.

  317. Steve @ 315

    Good points KF. I noticed that in Dawkin’s book as well.

    Note my response @ 317 to gp and KF
    No one in his right mind thinks Evolution is searching for a pre-specified pattern, least of all Dawkins.

  318.

    Me_Think said,

    No one in his right mind thinks Evolution is searching for a pre-specified pattern, least of all Dawkins.

    I say

    Yet every proposed algorithm that yields false positives does just that.

    Don’t you find that odd?

    peace

  319. Really, Kairosfocus, you want to go there?

    Let me see if I have this straight.

    Dawkins writes a popular book, The Blind Watchmaker, in which he introduces a couple of toy examples to illustrate the power of differential reproduction. He analogizes from genes to “memes”, introducing the idea that ideas can propagate by differential reproduction. icanhascheezburger follows.

    He also introduces a toy search, Weasel, in which he contrasts the performance of Monkeys at a typewriter with a hill-climbing algorithm. When he introduces it, he points out that evolution is not like this, saying

    Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective ‘breeding’, the mutant ‘progeny’ phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn’t like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success.

    Dembski realizes that even this toy presents a problem for his CoI law. He claims, erroneously, that Weasel contains a latching mechanism. When it is pointed out that it does not, the true weaseling begins. My favorite: there was a latching mechanism in the TBW version of Weasel, but it was removed for the BBC show (where it is clear that correct letters are getting mutated).

    Massive butthurt ensues. So much so that Dembski and Marks use Weasel as an exemplar of a “Partitioned Search”, which it very obviously is not, in their IEEE paper.

    Kairosfocus, who is apparently never wrong, modifies his “Latching mechanism” claim to quasi-latching, or pseudo-latching, and furthermore tries to defend the claim that D&M are correct to refer to Weasel as a Partitioned Search.

    Much hilarity ensues.

    Corrections have been on record for many years. (Where also, if one examines the printed cases, released by Dawkins, whenever a letter becomes correct, it never reverts. Conveniently, the original code is not available. This phenomenon can be duplicated by creating code that mimics what Dawkins claims, and choosing “good” examples. This speaks to likely side tracks that evade the main issue already documented above.)

    The bottomline is simple: admissions that reveal the fallacy of irrelevance were right there in TBW right from the beginning, decades ago.

    The resort to such at this late date is a mark of patent desperation.

    ROFLMAO

    If you want to see truly patent desperation, just enter “Question 10” in Uncommondescent’s search box.

  320. gpuccio @ 308

    Let me understand. So, the phrase “Methinks it’s like a weasel” was not in the algorithm, and came about by natural selection? Really interesting! Can you confirm that?

    In addition to Me_think’s reply at 317, I would add:

    “Methinks it’s like a weasel” was in the oracle, not in the search algorithm. If you want to discuss search algorithms, it is a good idea to (conceptually, at least) separate the searcher from the oracle.

    Dembski’s “Deterministic Search” performs much, much better than the “Partitioned Search”, which reduces its “Active Information” by deliberately ignoring useful information provided by the oracle (his Partitioned Search ignores the oracle every time the oracle says “This letter is wrong”).

    The Weasel oracle only provides the Hamming distance to the target. Much less to go on.
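    (Since the thread keeps arguing over what Weasel does, here is a minimal sketch in Python. Dawkins’s original code was never released, so the population size and mutation rate below are my own assumptions. Note the two points made above: the target sits in the oracle, which reports only a distance; and there is no latching: correct letters are free to mutate away, yet cumulative selection still converges.)

        import random
        import string

        TARGET = "METHINKS IT IS LIKE A WEASEL"
        ALPHABET = string.ascii_uppercase + " "

        def oracle(candidate):
            # The oracle only reports the Hamming distance to the target.
            return sum(a != b for a, b in zip(candidate, TARGET))

        def mutate(parent, rate=0.04):
            # Every position may mutate, including currently correct ones.
            return "".join(random.choice(ALPHABET) if random.random() < rate
                           else c for c in parent)

        parent = "".join(random.choice(ALPHABET) for _ in TARGET)
        generation = 0
        while oracle(parent) > 0:
            children = [mutate(parent) for _ in range(100)]
            parent = min(children, key=oracle)  # cumulative selection
            generation += 1

        # Typically converges in well under 200 generations, versus
        # roughly 27**28 attempts for single-step random search.
        print(generation)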

  321. DJ: Weasel as a case of cumulative targeted search has the information in from the first. KF

    MT: As a matter of fact, Weasel has often been promoted from the 1980s on, in print and TV etc., as showing the creative powers of CV + DRS –> DWM –> TOL. Many, many people were persuaded thereby . . . as I can recall from how people spoke of it then and in the years since. And provisions and caveats that eat up the point should have led to the matter never having been raised in that way with such an example; I recall here the debate in physics edu about how legitimate it was to use instruments constructed on the premise of Ohm’s Law to test the validity of said law, for students, and on how much should be said to them. But then, all of this is a side point at best. The main one is that DFSCI is real and reasonably observable and quantifiable.

    KF

  322. DNA_Jock:

    OK, I can agree, but the fact remains that the oracle must be part of the algorithm, if the algorithm is to work.

    I am not discussing here Dawkins’s intentions or the interpretations of his intentions; I am not interested in that.

    The important point is: any algorithm which generates meaningful complex language must have that language in itself, either in the oracle or in the rest of the algorithm.

    I think the most important point of all, which goes beyond the discussion about weasel or similar, is: what are the intrinsic limitations of an algorithm, however complex?

    This is the point I have discussed here with fifthmonarchyman, and which is related to Penrose’s books about Gödel’s theorem and its consequences for theories of human cognition, and to the article by Bartlett about Turing oracles.

    My personal position is that conscious experiences have a fundamental role in human cognition and in the generation of original dFSCI. Therefore, a non conscious algorithm, however complex, has severe limitations if compared with a conscious cognizer.

    Of course, growing degrees of added information and of computational complexity can help a non conscious algorithm to simulate, to growing degrees, human cognition and the generation of dFSCI. But that always comes at the price of a higher increase in the algorithm than in the output. And it can never really generate new original specifications, for example new meanings which have not been in some way pre-coded, or new functions which have not been in some way pre-defined.

    Why? Because a non conscious algorithm has no experience of meaning and no experience of purpose. It has literally no idea of what meaning and purpose are. And the real meaning of meaning and purpose cannot be coded, because they are conscious, subjective experiences, and only those beings who have those experiences can recognize them.

    There is only one scenario where an algorithm can apparently generate specified complexity higher than its own complexity. I have discussed that before.

    Let’s say that we have a complex algorithm that can generate the binary digits of pi, by a complex computation. Let’s say that the functional complexity of the algorithm is n bits. Now, the algorithm starts to work, and it starts to compute the digits of pi. At some point, it will have computed n+1 digits of pi. And, obviously, it can go on.

    At this point, the functional complexity of the outcome is apparently higher than the functional complexity of the algorithm which has generated it. And, going on, it can be made as high as wanted. After all, the functional complexity of the output is equal to its complexity in bits, because there is only one binary string which corresponds to pi.

    But the point is, pi is an outcome that is computable algorithmically. OK, the algorithm to compute it is very complex, but if we elongate the outcome by increasing the number of computed digits, a time comes when the outcome is more complex than the generating algorithm.

    But then, and only then, our procedure to evaluate dFSCI must shift to the Kolmogorov complexity of the outcome. IOWs the dFSCI of the outcome, however long, becomes, from that moment on, equal to the complexity of the generating algorithm. IOWs the string of the generating algorithm becomes a “compression” of the outcome.

    The interesting point, however, is that the algorithm increases the computed complexity of the same pre-defined function: being equal to the binary digits of pi. It cannot generate complexity linked to a new, original function not coded, either directly or indirectly, in its software.
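    (The pi example can be made concrete. The sketch below uses Gibbons’s unbounded spigot algorithm: the program is a fixed, short string of characters, yet it emits as many digits as we care to wait for, so beyond some length the output is longer than the program that generates it, and the program itself becomes the “compression” of its output, exactly as argued above.)

        def pi_digits():
            # Gibbons's unbounded spigot: streams the decimal digits of pi.
            q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
            while True:
                if 4 * q + r - t < m * t:
                    yield m
                    q, r, m = (10 * q, 10 * (r - m * t),
                               (10 * (3 * q + r)) // t - 10 * m)
                else:
                    q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                        (q * (7 * k + 2) + r * x) // (t * x),
                                        x + 2)

        digits = pi_digits()
        print("".join(str(next(digits)) for _ in range(20)))  # 31415926535897932384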

    So, the functional specification is the true marker of design, but the complexity is necessary to eliminate those simple pseudo functions that a conscious observer could apparently “recognize” in simple non designed configurations.

  323. The main one is that DFSCI is real and reasonably observable and quantifiable.

    Yet still no calculation of p(T|H) for anything in biology. Oh well.

  324. F/N: dFSCI and FSCO/I as demonstrable facts amenable to observation and even quantification, here. With only one empirically reliable cause, intelligently directed configuration aka design. KF

  325. F/N 2: Notice how D-J persistently leaves off the inconvenient little log p(T|H) and the implication of this metric being info beyond a threshold, thus opening up assessments of bio info that then address the testable result that things exhibiting FSCO/I (a relevant subset) are consistently designed, with say the cases of 15 protein families on record since 2007 in the literature thanks to Durston as cases in point, just cf the infographic in the just linked? KF

  326. Wait a sec! You are telling me that you CAN calculate log p(T|H), but you can’t calculate p(T|H)?

    I can help with that. Rather, e can help with that.

    LMAO

  327. kairosfocus

    F/N: dFSCI and FSCO/I as demonstrable facts amenable to observation and even quantification

    And still…

    a Google Scholar search of the mainstream scientific literature for the last 10 years returns:

    ZERO scientific papers using “dFSCI”

    ZERO scientific papers using “FSCO/I”

    ONE scientific paper using “complex specified information” (CSI is too common an acronym) and that was Elsberry and Shallit’s disemboweling of Dembski’s popular-press published claims.

    If FIASCO is so amenable to observation and even quantification then why has no one ever observed or quantified it in any real world biological cases?

  328. fifthmonarchyman: In fact the paper I linked demonstrates that humans are quite good at it.

    Hasanhodzic shows that people are good at distinguishing order. Market returns are not random, but chaotic.

    fifthmonarchyman: Complex Specified information is not computable

    That’s the question, not an answer. If you have such a proof, we’d be happy to look at it.

    fifthmonarchyman: In fact I’m not sure the majority of critics have fully grasped that Darwinian evolution is simply an algorithm and is fully subject to any and all the mathematical limitations thereof.

    While models of evolution are algorithmic, that doesn’t mean evolution is algorithmic. In particular, evolution incorporates elements from the natural environment.

    A simple example may suffice. Algorithms can’t generate random numbers. However, an algorithm can incorporate information from the real world, including randomness.
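    (A short illustration of that point, in Python: a pseudo-random generator is fully deterministic given its seed, while seeding from the operating system’s entropy pool incorporates information from outside the algorithm.)

        import os
        import random

        rng = random.Random(42)              # deterministic: same output every run
        print([rng.randint(0, 9) for _ in range(5)])

        rng = random.Random(os.urandom(16))  # external entropy: varies per run
        print([rng.randint(0, 9) for _ in range(5)])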

    fifthmonarchyman: Actually what is objective is the number.

    By definition, a value is not objective if it depends on the individual making the measurement.

    fifthmonarchyman: Don’t be offended if I don’t respond to you as much as you would like.

    You’re under no obligation to defend your position. Readers can make of that what they will.

    gpuccio: Because we know well that no existing designed algorithm, at least at present, can generate a piece of text of that length which has good meaning. Unless the text is already in the algorithm itself.

    It isn’t necessary to have the text in the algorithm, though you do have to have a dictionary, rules of grammar, rhyming, scansion, poetic structure, word relationships, etc. No more than what Shakespeare had in his own mind.

    Let’s say we had an oracle that can recognize whether a string of words has a valid meaning in English. “How camest thou in this pickle?” What the heck does that mean? Nevertheless, it got plenty of laughs in the Elizabethan theater. “I will wear my heart upon my sleeve.”

    Anyway, let’s say we have such an oracle. We might put our phrases before an Elizabethan audience and measure the applause, the same oracle that guided Shakespeare in his writing. Note also that a phrase such as “the king” has more meaning than “king”, as it is more specific. This is our gargantuan encyclopedia of phrases.

    Now, to make this fit into a computer, let’s reduce our encyclopedia to a subset of this gargantuan encyclopedia. Certainly, it would be even harder on the algorithm, but easier on our memory.

    fifthmonarchyman: I can always make a designed algorithm which can output dFSCI, if I put enough complexity in the algorithm.

    Shakespeare had plenty of ‘dFSCI’ in his mind before writing any sonnets.

    fifthmonarchyman: When we say that algorithms are incapable of producing CSI, it is always assumed that cheating is not permitted.

    You permit the Shakespeare sonnet writer what you won’t permit to the computer algorithm.

    fifthmonarchyman: Yet every proposed algorithm that yields false positives does just that.

    No, not every. Some generate solutions to external problems.

    gpuccio: The important point is: any algorithm which generates meaningful complex language must have that language in itself, either in the oracle or in the rest of the algorithm.

    Sort of like Shakespeare did.

    gpuccio: I think the most important point of all, which goes beyond the discussion about weasel or similar, is: what are the intrinsic limitations of an algorithm, however complex?

    If Shakespeare didn’t know words and rhyme, he wouldn’t have written sonnets.

    gpuccio: And the real meaning of meaning and purpose cannot be coded, because they are conscious, subjective experiences, and only those beings who have those experiences can recognize them.

    Sure. So an unfeeling algorithm could either mimic those feelings, or simply write about something else.

    “hate began here if a heart beat apart”

    kairosfocus: In short, here cumulative selection “works” by rewarding non-functional phrases that happen to be closer to the already known target. This is the very opposite of natural selection on already present difference in function. Dawkins’ weasel is not a good model of what evolution is supposed to do.

    It’s not supposed to be a model of evolution. What it shows is that evolutionary search is much faster than random search. Instead of what you consider non-functional steps, we could have a population of words that are ruthlessly selected for function, no close matches allowed. Do you think we could evolve some long words by this process?
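    (That proposed experiment is easy to sketch. In the Python toy below, the lexicon and the insertion-only mutation scheme are my own assumptions; selection is “ruthless”: a string either is a word or it dies, with no credit for near misses. Long words can still evolve, because each intermediate is itself functional.)

        import random
        import string

        # Toy lexicon; a real run would load a full dictionary file.
        LEXICON = {"a", "at", "ate", "late", "plate", "plates"}

        def mutate(word):
            # Insert one random letter at a random position.
            i = random.randrange(len(word) + 1)
            return word[:i] + random.choice(string.ascii_lowercase) + word[i:]

        population = ["a"]
        for _ in range(20000):
            offspring = [mutate(random.choice(population)) for _ in range(50)]
            survivors = [w for w in offspring if w in LEXICON]  # no close matches
            if survivors:
                population = survivors

        print(max(population, key=len))  # usually reaches "plates"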

  329. Wrong again, Zachriel. Weasel shows how a TARGETED search is faster than a random walk.

  330. Adapa:

    If FIASCO is so amenable to observation and even quantification then why has no one ever observed or quantified it in any real world biological cases?

    We have provided one peer-reviewed paper that does so.

    AGAIN Crick defined biological information and science has determined it is both complex and specified.

  331. Evolutionists still can’t provide any probabilities for their position which relies solely on probabilities. And then, like little children, they try to blame ID for their FAILures.

  332. D-J: That is actually fairly frequent in modelling and analysis. An abstraction or situation in one form is not very amenable to calculation or visualisation, but with a transformation, you are in a different zone where doors open up. Not totally dissimilar to integration by substitutions. Once we know something is information, we have ways to get reasonable values. And oddly, that then enables an estimate of the otherwise harder value by inverting the transformation in this case. (Coming back through an integration procedure is often a bit harder.)

    For instance, working with complicated differential equations can be a mess. Reduce using Laplace Transforms and you are doing algebra on complex frequency domain variables. Push another step and you are doing block diagram algebra. A bit more and you are looking at pole-zero heavy stretchy rubber sheet plots and wireframes, which allow you to read off transient and frequency response. A similar transform gets you into the Z domain for discrete state analysis with the famous unit delay function and digital filters with finite and infinite impulse responses, with their own rubber sheet analysis . . . just watch out for aliasing. (Did you forget that I spent years more in that domain than the time domain?)

    As would be obvious, save for the operating hyperskepticism that is in the driving seat. But then in the policy world over the past few weeks, I have been dealing with a few cases like that . . . and what drives me bananas there is the “I don’t like diagrams and graphs” retort to an infographic that reduces a monograph’s worth of abstruse reasoning to a poster-size chart.

    Adapa: Why are you drumbeat repeating what has been adequately answered long since by something open to examination? When a fact can be directly seen, there is no need for peer review panels to affirm it. And in this case, FSCO/I and dFSCI are simply abbreviations of descriptive phrases, and in fact they trace to Wicken’s wiring diagram, functionally rich organisation discussion of 1979 and Orgel’s specified complexity discussion of 1973 as you full well should know. The phenomenon is a fact of observation as blatant as the difference between a volcano dome pushing out ash including sand into a pile, and a few miles away, a child on a beach made from that same dome, building a sand castle.

    KF

  333. Adapa, playing the schoolyard taunt game simply shows a mental level akin to schoolyard bullies — especially when that is to try to dismiss an observable fact as simple as an Abu 6500 fishing reel. I suggest you avoid such in future. KF

  334. Z, the facts of how Weasel was used manipulatively for literally decades speak for themselves. Not quite as bad as Judge Jones viewing Inherit the Wind to set his attitude on the blunder that this accurately reported the Scopes affair, but too close for comfort. And don’t try to deny, I went through that firsthand back in the day and have seen the abuse continue up to uncomfortably close days. Do you really want to go to a rundown of infamous manipulative icons of evo mat ideology dressed up in a lab coat? KF

  335. gpuccio,

    Thank you for the very interesting Hayashi 2006 PLoS ONE reference. I had seen their figure 5 before, but I did not realize the extent to which they had experimental support for their view of the landscape.

    This paper is quite the show-stopper for two assertions that are repeatedly made at UD.

    1) There are islands of function.
    Apparently not:

    The evolvability of arbitrary chosen random sequence suggests that most positions at the bottom of the fitness landscape have routes toward higher fitness.

    I reckon that “most” smacks of mild over-concluding here, but we can say, conservatively, that over ~1% of random sequences have routes towards higher fitness. So much for “islands”.

    2) We can use Durston’s measures of fits to estimate probabilities, as kairosfocus does in his always-linked…
    No, we can do no such thing. Per Hayashi, once we move to higher fitness, there are large numbers of local optima with varying degrees of interconnectedness. These local optima are constrained in a way that differs dramatically from the lower slopes of the hill. This is a total killer for any argument that tries to use extant, optimized proteins to estimate the degree of substitution allowed within less-optimized proteins. Bottom-up approaches are the only valid technique.

    It turns out that I was far more right than I thought I was…

    F/N: I note in passing that k=20 deep-sixes another ID-trope: “overlapping functionality or multiple constraints prevents evolution”. Here each residue interacts with, on average, 20 others. Evolution, unlike a human designer, is unfazed.

  336. kf @ 333

    Fascinating stuff.

    But you accused me thus: “Notice how D-J persistently leaves off the inconvenient little log p(T|H)”

    Here’s my point: if you can calculate p(T|H), you can calculate log p(T|H), and vice versa.
    Pointing out that you, kairosfocus, CANNOT calculate p(T|H) is utterly equivalent to pointing out that you CANNOT calculate log p(T|H). For any biological.
    The log transformation brings me no inconvenience whatsoever: it is utterly irrelevant.
    Regarding your use of fits to derive log p(T|H), see my comment re Durston in 336 above.

  337. kairosfocus: Z, the facts of how Weasel was used manipulatively for literally decades speak for themselves.

    We read Dawkins. He doesn’t say it’s a complete model of evolution.

    You didn’t answer. Instead of what you consider non-functional steps, we could have a population of words that are ruthlessly selected for function, no close matches allowed. Do you think we could evolve some long words by this process?

  338. kairosfocus said: “Descriptive terms linked to observables and related analyses and abbreviations do not gain their credibility or substance from appeals to authority. Deal with the substance…”

    LOOK WHO’S TALKING! I DID NOT and DO NOT make ANY appeals to authority. YOU, on the other hand, CONSTANTLY make appeals to authority, and YOU portray YOURSELF as THE AUTHORITY ON EVERYTHING. And YOU are AVOIDING the “substance” of the NUMEROUS, SOLID REFUTATIONS of your DICTATORIAL, INCORRECT, and FALSELY ACCUSATORY logorrhea.

  339. DJ: Actually not, as it is fairly easy to get information numbers for DNA, RNA and even proteins, as has been done. That is not the full info content of life forms, but it is a definite subset and gives the material result already. Believe you me, once I saw the power of transformations to move you out of a major analytical headache, that was a lesson for life. Of course evaluating Laplace transforms is itself a mess, but the neat thing is that this is reduced to tables that can be applied, and integrals and differentials have particularly simple evaluations. Indeed, in evaluating diff eqn solutions using auxiliary eqns, you are using such transforms in disguise — why didn’t they just use the usual s or p and be done. Similarly, going to operators form is the same thing. (I love the operator concept; the Russians make some real nice use of it.) The transformation to information is similarly, though much less spectacularly, a breakthrough. For info is amenable both to evaluation on storage capacity of media and by application of statistics of messages. The statistics of the messages, whether text in English or patterns of AA residues for proteins etc., can then tell us a lot about the real world dynamic-stochastic process and the adaptations to particular cases involved. (That is what I was hinting at in talking on real world Monte Carlos. Down that road, systems analysis.) KF

  340. kairosfocus

    Adapa, playing the schoolyard taunt game simply shows a mental level akin to schoolyard bullies — especially when that is to try to dismiss an observable fact as simple as an Abu 6500 fishing reel. I suggest you avoid such in future. KF

    All I did was point out that the parameter you claim is “amenable to observation and even quantification” has never been used in the scientific community. Not once, not ever.

    I would have pointed that out to you on one of your many identical threads crowing about how wonderful FSCO/I is but you bravely closed comments in every one.

  341. Kairosfocus,

    I take your response @340 as an assertion that you can, in fact, calculate log p(T|H) for a biological. Care to demonstrate?

    Note that not one of your numerous comments-closed-FYI-FTR posts does this.

  342. KF,

    As we keep telling you, it is utterly trivial to go from P(T|H) to log P(T|H) and back again.

    Logarithms and antilogarithms are easy. P(T|H) is hard. If you can’t calculate P(T|H), you can’t take its logarithm.

    You need to show that you can calculate a true P(T|H) for a biological phenomenon — one that takes “Darwinian and other material mechanisms” into account, to borrow Dembski’s phrase.

    You say you can do it. Let’s see you back up your claim.
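    (The point here is elementary arithmetic, and a sketch makes it plain. Converting between a probability and its information value in bits is trivial in either direction; the hard part, which no transform removes, is knowing p in the first place. The 500-bit figure is just a convenient round number for the example.)

        import math

        p = 2.0 ** -500            # you must already know the probability...
        bits = -math.log2(p)       # ...before you can express it in bits
        print(bits)                # 500.0

        print(2.0 ** -bits == p)   # True: the antilog recovers p exactly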

  343. Adapa, you full well know you resorted to a schoolyard taunt tactic, as all can see by scrolling up. Twisting terms to create mocking taunts — and here in the teeth of a direct demonstration of the described reality — speaks volumes and not in your favour. Now you have resorted to the brazen denial when called out. Please think about the corner you are painting yourself into.

    KS: By going to information metrics, through log reduction, that opens up a world of direct and statistical info metrics, as you full well know or should know. Game over.

    KF

  344. kairosfocus, you play your malicious, mendacious, libelous, schoolyard bully mental level taunt games:

    “…never mind what evo mat ideologues in lab coats and their fellow travellers want to decree. KF”

    “The resort to such at this late date is a mark of patent desperation. KF”

    “So, while it is fashionable to impose the ideologically loaded demands of lab coat clad evolutionary materialism and/or fellow travellers…”

    “I no longer expect you to be responsive to mere facts or reasoning, as I soon came to see committed Marxists based on their behaviour…”

    “…uniformity reasoning cuts across the dominant, lab coat clad evolutionary materialism and its fellow travellers…”

    “Not quite as bad as Judge Jones viewing Inherit the Wind to set his attitude on the blunder that this accurately reported the Scopes affair, but too close for comfort. And don’t try to deny, I went through that firsthand back in the day and have seen the abuse continue up to uncomfortably close days. Do you really want to go to a rundown of infamous manipulative icons of evo mat ideology dressed up in a lab coat? KF”

    Yet you hypocritically spewed this “…playing the schoolyard taunt game simply shows a mental level akin to schoolyard bullies — especially when that is to try to dismiss an observable fact as simple as an Abu 6500 fishing reel. I suggest you avoid such in future. KF”

    And this: “Personalities via loaded language only serve to hamper ability to understand; this problem and other similar problems have dogged your responses to design thought for years, consistently yielding strawman caricatures that you have knocked over.”

    I suggest you avoid such in future.

  345. R: FYI, the appeal to “was it in a peer reviewed journal article” (actually, closely linked terms are, and the concept is routine in engineering) is in fact an appeal to authority as gate-keeper. KF

  346. kairosfocus, is your gibberish supposed to mean something? And can you show where I appealed to ANY authority?

    You’ve been challenged to “calculate a true P(T|H) for a biological phenomenon — one that takes “Darwinian and other material mechanisms” into account, to borrow Dembski’s phrase.” Why are you so afraid to “Deal with the substance”?

  347.

    Thank you Reality @ 345 for your demonstration of Darwinian Debating Devices #2: The “Turnabout” Tactic.

  348. KF:

    KS:going to information metrics, through log reduction, that opens up a world of direct and statistical info metrics, as you full well know or should know. Game over.

    KF, you can’t take the logarithm of P(T|H) without calculating P(T|H). Game on.

  349. DNA_Jock at #336:

    Thank you for your good comments about that paper. Of course, I don’t agree with all that you say, and I really want to discuss that paper in detail with you, but I think that I need some time and serenity to do that, so I will not answer your points immediately. I will try however to take up the discussion as soon as possible.

    For the moment I have not much time, and I still want to monitor the general discussion in this thread, while it is still “hot”. 🙂

    Any thoughts on my #323? I ask in a very open manner, because I have tried there to outline some very general points which are certainly very open to discussion, but IMO extremely important. I just wondered if you have specific opinions on some of them.

  350. Reality: Please do some homework on dynamic-stochastic systems, observability of systems and the issue of inferring path in phase space from observable variables, and more. Think about Brownian motion as an observable, and then about the random walk of molecules in a body of air that is drifting along as part of a wind as what may be inferred, and indeed ponder how Brownian motion contributed to acceptance of the atom as a real albeit then invisible entity. KF

  351. KS: you can analytically deduce log p(T|H) and see that it is an information metric. Information, being observable through various means, can then feed in back ways. Where also stochastic patterns can be used to project back to underlying history, statistical factors and dynamics at work. Indeed, that is how info in English text, considered as a stochastic system, is estimated. For a simple case, E is about 1/8 of typical English text. KF
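    (The “1/8” figure is the familiar frequency of the letter E in typical English text, about 12.5%, read as a surprisal. A minimal sketch, using commonly published frequency estimates, which are assumptions here:)

        import math

        # Commonly quoted letter frequencies for English text (approximate).
        freq = {"e": 0.125, "t": 0.091, "z": 0.0007}

        for letter, f in freq.items():
            # Shannon surprisal: a letter with frequency f carries -log2(f) bits.
            print(letter, round(-math.log2(f), 2), "bits")
        # e 3.0 bits, t 3.46 bits, z 10.48 bits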

  352. Reality, enough has been said to show the point as just again outlined to KS, which holds for you too. KF

  353. Zachriel:

    I am not sure what to say. I agree on many of your last comments addressed to me, about Shakespeare and similar.

    To be more clear about my personal position on the role of consciousness in algorithmic cognition, I want to say that I absolutely recognize that Shakespeare had a lot of information coming from his environment, his personal history, his experiences, and so on. Much of that experience can certainly be elaborated in algorithmic ways, and there is no doubt that our conscious processes use many algorithmic procedures to record and transform many data.

    My point is different. My point is that being conscious, and having the conscious experience/intuition of meaning (for example, the intuition that something exists which can be considered true, and the basic intuitions of logic, and many other things) and of purpose (the subjective experience that things can be considered desirable or not, and that each conscious representation has a connotation of feeling) and of free will (that we can in some mysterious way influence what happens to us and to the world about us in alternative ways according to some inner choices), all that has a fundamental role in our ability to cognize, to build a map of reality, to output our intuitions to material objects, to design.

    So, there is no doubt that Shakespeare used a lot of data and of data processing, like any of us, but what he did with those data would have never been possible as a simple algorithmic processing of the data themselves. It was the result of how he represented those data in his consciousness, of what he intuited about them, of how he reacted inwardly to them, of how he reacted outwardly as a consequence of his inner representations and reactions.

    All those steps depend on the simple fact that in conscious beings data generate conscious representations and that those conscious representations generate new data outputs. A non conscious algorithm lacks those steps, and is therefore confined to algorithmic processing of data.

  354. PS: If you had bothered to consider context, you would have seen that I have not made empty assertions but can back up every point I have made. Your turnabout based on snip and snipe is revealing. Especially as the point being defended is a mockingly dismissive schoolyard-taunt twisting of FSCO/I, a descriptive term that I happened to highlight as an observable fact just a few hours ago, here.

  355. you can analytically deduce log p(T|H) and see that it is an information metric

    What? It's a log transform of probability. Are you really saying that every time a statistician works in log-space (because it's easier to take sums than products, and it can prevent underflow) they start working on "information"?

  356. Barry, since you’re likely relying on this: “(b) trying to give the false impression that the victim trying to defend himself is the one who started the quarrel.”, maybe you can show that kairosfocus and other ID-creationists are the victims and didn’t “start the quarrel”?

    I have been commenting here for a short time. kairosfocus has been spewing his malicious, mendacious, sanctimonious, libelous, hypocritical, falsely accusatory attacks against “evomats” and their “ilk” and “fellow travelers” for a long time. kairosfocus, you, and the other ID-creationists have been starting and perpetuating quarrels (and worse) from the moment that you and your “ilk” first tried to ‘wedge’ your theocratic religious agenda into science, public education, and politics.

  357. GP, I think the problem here is that on evolutionary materialism contemplation must reduce to computation, but the deterministic mechanistic side of algors is not creative and the stochastic side is not creative enough and powerful enough to account for FSCO/I and particularly spectacular cases of dFSCI such as the sonnets in question. KF

  358. F/N: I think it would be worth the pause to watch: http://www.youtube.com/watch?v.....re=related

  359.

    Reality @ 357. Rant noted.

  360. wd400 @356

    you can analytically deduce log p(T|H) and see that it is an information metric

    What? It’s a log transform of probability. Are you really saying that every time a statistician works in log-space (because it’s easier to take sums than products, and it can prevent underflow) they start working on “information”?

    In all likelihood, yes. Although, if anything, we use the partial derivative of that as our information.

  361. This is too funny as evos are oblivious to the fact that they need to provide the H in P(T|H) and they think that actually helps them!

  362. Unguided, gradual evolution posits incremental step-by-step processes to produce the diversity of life and its diversity of intricate systems and subsystems. In the absence of those steps there need to be probabilities that the steps can occur, and in the sequence required. And in the absence of that, all you have is a glossy narrative that rivals Shakespeare but doesn’t belong in science.

    So tell us- what is H and show your work. Lead by example, for once.

  363. Zachriel:

    Instead of what you consider non-functional steps, we could have a population of words that are ruthlessly selected for function, no close matches allowed. Do you think we could evolve some long words by this process?

    Yes, if someone wrote a program to evolve words by whatever means, I am sure the program would do so if the programmer was competent. Yes, if organisms are intelligently designed to evolve long proteins, then they should be able to do so if the intelligent designer was competent enough.

    Next 😛

  364. Joe

    Unguided, gradual evolution posits incremental step-by-step processes to produce the diversity of life and its diversity of intricate systems and subsystems. In the absence of those steps there needs to be probabilities that the steps can occur and in the sequence required. And in the absence of that all you have is a glossy narrative that rivals Shakespeare but doesn’t belong in science.

    Creationists are notorious for coming up with really stupid ideas but demanding that science provide a numerical probability and specific steps for evolutionary changes that happened hundreds of millions of years ago has to be among the dumbest. We have ample physical evidence that the events did indeed occur and the mechanisms that caused them. That makes the probability of occurrence 1.0.

    Can you imagine demanding that a geologist provide the exact probability calculations and day by day height measurements for the formation of the Alps, or else mountain building by plate tectonics is falsified? That’s exactly how stupid this latest demand is.

    IDers are the only ones whose argument relies on the precise calculations of unknowable probabilities. Yet another reason they are laughed at by established science.

  365. gpuccio: So, there is no doubt that Shakespeare used a lot of data and of data processing, like any of us, but what he did with those data would have never been possible as a simple algorithmic processing of the data themselves.

    That’s your claim, and you may be correct; but you argue that an algorithm can’t generate a sonnet while restricting the algorithm from access to the same background information as Shakespeare.

    Based on that, you have to calculate the information gain for the sonnet by subtracting all of Shakespeare’s background knowledge, which was presumably quite extensive. Shakespeare knew Marlowe.

  366. KF:

    KS: you can analytically deduce log (p(T|H) and see that it is an information metric.

    You can’t take the log of P(T|H) unless you know the value of P(T|H).

    Compute the value of P(T|H) for a biological phenomenon, taking “Darwinian and other material mechanisms” into account, as required by Dembski.

    Show your work.

    You claim to be able to do it, so why not do it, for once?

  367. Adapa:

    Creationists are notorious for coming up with really stupid ideas but demanding that science provide a numerical probability and specific steps for evolutionary changes that happened hundreds of millions of years ago has to be among the dumbest. We have ample physical evidence that the events did indeed occur and the mechanisms that caused them.

    You only think that you do. However, the peer-reviewed literature is devoid of blind watchmaker explanations.

    You are making stuff up, as usual.

  368. keith s:

    We already know that 600-character posts or sonnets are not formed by pure random variation with no selection.

    The straightforward meaning of the sentence somewhat eludes me. I think you’re saying that a “sonnet” has been “selected” for. But it’s “artificial” selection, and not “natural” selection. Darwin equates the one with the other. But is he right?

    I’ve already stated that NS does nothing more than “eliminate” successors. The process of building up is still “random.”

    Here’s another way of looking at it: you have a million monkeys typing away at a typewriter, and every time that they don’t come up with “methinks it is a weasel,” you throw it away. How does that help the monkeys?

    The only thing that could help the monkeys is if you substituted keys: e.g., you replace the letter “y” with “ea,” you substitute the letter “x” with “et,” and you substitute the letter “p” with “it”, etc. However, this involves “active” use of intelligence.

  369. Adapa:

    Creationists are notorious for coming up with really stupid ideas but demanding that science provide a numerical probability and specific steps for evolutionary changes that happened hundreds of millions of years ago has to be among the dumbest. We have ample physical evidence that the events did indeed occur and the mechanisms that caused them. That makes the probability of occurrence 1.0.

    In Boston this week, some man named “Paul Revere” won the state lottery. Your response is: “Of course!”

    Isn’t this a silly way of looking at probabilities?

  370. keith s:

    The dFSCI number simply confirms that obvious fact, using a calculation that was developed and understood long before gpuccio was born.

    Indeed. Thus giving us confidence when it is not so “obvious.”

  371. Let me state the obvious. KF doesn’t want to calculate a true P(T|H) for a biological phenomenon because he can’t do it.

    This shouldn’t be a surprise at all. Dembski introduced the idea of design detection based on P(T|H) at least as early as 2001. Thirteen years ago!

    Imagine if it had actually worked. By now, there would have been dozens (at least) of worked-out examples showing that various biological structures were designed. Dembski himself would have done a bunch — CSI was his baby, and he would have wanted to demonstrate its power.

    Instead, nothing. No worked-out examples. In fact, Dembski himself appears to be (understandably) ashamed of CSI. He isn’t working on it and he doesn’t use it. It barely gets mentioned in his new book. Dembski gave up and is focusing his attention on his “search for a search” stuff with Marks.

    CSI failed for a lot of reasons, but perhaps the most embarrassing was that Dembski himself couldn’t calculate it, because he couldn’t calculate P(T|H) by his own definition of H.

    KF can’t either, which is why he will dodge the question.

  372. PaV: I’ve already stated that NS does nothing more than “eliminate” successors. The process of building up is still “random.”

    We can show that such a process can find solutions to complex problems.

  373. PaV

    In Boston this week, some man named “Paul Revere” won the state lottery. You’re response is: “Of course!”

    Isn’t this a silly way of looking at probabilities?

    Your answer highlights the lack of understanding of probability by Creationists. Assuming only one “Paul Revere” bought a ticket, his chances of winning were identical to those of everyone else who bought a ticket. If you were predicting before the draw that PR would win, his chances would be 1/tickets sold.

    You guys look at one result after the fact, then confuse it with a before-the-fact prediction and claim “ZOMG that result is too improbable it must be designed!!” You could make the same erroneous claim with anyone who won.

  374. PaV,

    Here’s another way of looking at it: you have a million monkeys typing away at a typewriter, and every time that they don’t come up with “methinks it is a weasel,” you throw it away. How does that help the monkeys?

    If that’s how you think evolution works, then no wonder you’re an IDer.

    Please read an introductory textbook on evolutionary biology, PaV.

  375. DNA_Jock:

    …you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.

    From this statement, I would conclude you haven’t read Dembski’s NFL book. What is needed are two things: (1) recognition of the pattern, and (2) knowledge of the mechanism by which the pattern is formed—IOW, you have to be able to calculate the probability of the “pattern” happening by ‘chance’ given the mechanism utilized in developing the “pattern.”

    At Las Vegas, they know the “mechanism,” and they know what “patterns” are too improbable to be happening by chance.

    In NFL, Dembski uses the Caputo example of ballot tampering. And he calculates the odds of a ‘Democrat being placed first on a ballot of “x” names’ and so forth. Caputo, I believe, was convicted on probabilities calculated in just this way.
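    (For readers who want the arithmetic: in the Caputo case the Democrat drew the top ballot line in 40 of 41 drawings. Under a fair 50/50 draw, the tail probability works out to roughly 1 in 50 billion, the figure usually cited. A quick check:)

        from math import comb

        n, k = 41, 40  # 41 drawings, Democrat first in 40 of them
        # Probability of 40 or more Democrat-first draws under a fair coin:
        p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
        print(p)  # ~1.9e-11, about 1 in 50 billion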

    The problem that has been thrown in the face of Dembski is this: he has no basis upon which to assume that the DNA string of nucleotides in the genome represents an i.i.d. sample from a uniform distribution. And since he has no assurance of said distribution, the NFL theorems do not, and cannot, apply.

    I will now present PROOF that the genome is, indeed, uniformly distributed across genome space!!!!!

    Trumpets, please!!! Drum roll!!!!

    In the over ten years that UD has been around, no one has said that the mutation rate across an organism’s genome is NOT uniformly distributed. When population geneticists do their calculations here, they assume that mutations are free to occur throughout the entire range of the genome.

    Now, it is true that qualifications can, and must, be made. However, the starting point of their calculations is always the assumption of uniformity of mutations along the string—at least when we’re dealing with SNPs—and this implicitly demonstrates that the assumption of population geneticists is that a uniform probability distribution applies to the genome.

    The only time they would contest this would be if they thought it would somehow undermine ID. IOW, there’s a tendency to be disingenuous.

  376. keith s:

    I have a degree in biology from UCLA.

    Do you know what the “fundamental theorem of natural selection” is, and who developed it?

    There’s a follow-up. Beware.

  377. keith s thinks it’s our problem that he cannot provide H. No wonder he’s an evo

  378. PaV:

    I have a degree in biology from UCLA.

    Then you have no excuse for not understanding evolution better than you do.

    Evolution works via heritable variation and selection (plus drift). Heritable variation is completely missing in your monkey example, and you’re modeling the fitness landscape as absolutely flat with a single sharp peak.

    P.S. Yes, I know about the fundamental theorem of natural selection. Please make your case, after you have dealt with the inadequacies of your monkey example.

  379. keith s:

    P.S. In my life there have only been three books I’ve thrown down in disgust.

    The first was “Origin of Species” when Darwin dares to say that “species give rise to genera, genera to families, families to orders, and orders to classes.” (from memory) This is just silliness. Why? Because he has no justification whatsoever for stopping at “classes”!

    If you can’t get ‘higher’ than a “class,” then how do you get a “phylum”? So, where do the phyla come from? Are they there from the beginning? If so, how did they form?

    Well, of course, Darwin thinks he’s off the hotseat because at the end he says: “There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one . . .”

    Please explain who is doing this “breathing.” Darwin doesn’t. And then, so many editions later, when no one is watching, he drops the phrase.

    The second book was Dawkins’s The Blind Watchmaker when, after 40 pages or so of wandering around, he, out of nowhere, claims that “if the ant took one small step in the right direction and was rewarded, it could arrive at the fox in no time at all.” (paraphrasing)

    This, too, is nonsense. How do you reward the biomorph, and with what? But the worst of all is: “in the right direction”!!! But NS is blind. Evolution is blind and random. There is NO direction!

    And the third book I dropped in disgust was Ernst Mayr’s What Evolution Is. Here it was, here was the grandmaster at work, having laid the foundation for the transmutation of species and . . . . . . . what do we get? Gobbledygook. Hemming and hawing, kind of this, and throw in that, mix it up—you know, like a “tornado passing through a junkyard and producing a 747”—words of atheist Sir Fred Hoyle.

    So, please, if you can, explain to us how evolution takes place. Give us the steps, show us examples. And, of course, we’ll be very interested in all the “intermediate forms” that Darwinism supposes.

  380. keith s:

    You say to make my case after I deal with the inadequacies of my monkey case.

    However, there are no inadequacies.

    R.A.Fisher, the architect of what we know as neo-Darwinism, formulated this “fundamental theorem.” Do you know what this “theorem” is based upon?

  381.

    Zac said

    Based on that, you have to calculate the information gain for the sonnet by subtracting all of Shakespeare’s background knowledge,

    I say,

    stay tuned……

    I believe there is a way to separate original CSI in the sonnet from the CSI that comes from background information.

    step one… Remove the sequence from its context and represent it as a series of numeric values.

    step two… see if an algorithm can reproduce the pattern in those values by any means whatsoever well enough to fool an observer.

    Of course with the understanding that the algorithm can’t reference the original string.

    I’ve been playing around with this for a few weeks and so far it seems to work.
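
    To make step one concrete, here is a minimal Python sketch of one possible encoding (the ordinal mapping is my own arbitrary choice for illustration; any reversible encoding would do):

        # Step one: strip the string of its context by mapping each
        # character to a plain numeric code. Only the pattern of values
        # is handed to the guessing algorithm, never the original text.
        def encode(text):
            return [ord(c) for c in text]

        values = encode("Shall I compare thee to a summer's day?")
        print(values[:12])  # the algorithm sees only numbers like these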

    Peace

  382. PaV,

    You say to make my case after I deal with the inadequacies of my monkey case.

    However, there are no inadequacies.

    You must be joking. Here is your monkey example:

    Here’s another way of looking at it: you have a million monkeys typing away at a typewriter, and every time that they don’t come up with “methinks it is a weasel,” you throw it away. How does that help the monkeys?

    Where is the heritable variation in that scenario?

  383. PaV,

    It’s hard to believe that you actually have a degree in biology. School must have been a nightmare of cognitive dissonance for you.

  384. The first was “Origin of Species” when Darwin dares to say that “species give rise to genera, genera to families, families to orders, and orders to classes.” (from memory) This is just silliness. Why? Because he has no justification whatsoever for stopping at “classes”!

    He had a pretty good reason — there were no Phyla in the classification used at the time. I guess he could’ve gone to Kingdom, but I don’t quite see why you’d throw a book aside for not reaching the end of a series of names.

    R.A.Fisher, the architect of what we know as neo-Darwinism, formulated this “fundamental theorem.” Do you know what this “theorem” is based upon?

    It’s based on some observations about how fitness changes in a population relative to genetic diversity.

  385.

    gpuccio said,

    The interesting point, however, is that the algorithm increases the computed complexity of the same pre-defined function: being equal to the binary digits of pi. It cannot generate complexity linked to a new, original function not coded, either directly or indirectly, in its software.

    I say,

    Another interesting thing is that the algorithm just keeps plugging along for eternity. Only a non-computable conscious agent has the ability to halt the program and discover that anything whatsoever of interest has been produced at all.

    Peace

  386. You say to make my case after I deal with the inadequacies of my monkey case.

    Here’s one: in the real world, fitness landscapes aren’t points of perfect fitness surrounded by a field of zero fitness.

  387. keith s:

    Before the insults, try thinking things through.

    Where is the heritable variation in that scenario?

    The whole point of the analogy is to highlight that NS ONLY functions when something of value has been arrived at. If the phrase “methinks it is a weasel” is essential to life, then all you have are dead descendants. Nothing is inherited until such time as the entire phrase is arrived at—randomly!!!

    What you want to allege are available are a whole host of viable intermediates. Where, in the fossil record, or among extant species, do we find such “intermediates”? Nowhere.

    Show me those “intermediates” and you will make me a believer in Darwinism. But—speaking of “cognitive dissonance”—you know, there is something called the “Cambrian Explosion.”

  388. PaV: However, the fact that the starting point of their calculations is always the assumption of uniformity of mutations along the string—at least when we’re dealing with SNPs—implicitly demonstrates that population geneticists assume that a uniform probability distribution applies to the genome.

    Uniformity of mutation is not the same as uniform probability distribution as applied to the genome.

    PaV: The first was “Origin of Species” when Darwin dares to say that “species give rise to genera, genera to families, families to orders, and orders to classes.” (from memory) This is just silliness. Why? Because he has no justification whatsoever for stopping at “classes”!

    Don’t see that quote anywhere. In any case, Darwin doesn’t stop at “classes”, but considers whether “the theory of descent with modification embraces all the members of the same great class or kingdom.” Then he goes so far as to consider whether “all animals and plants are descended from some one prototype.” You may want to reread ‘Origin of Species’. It is considered one of the most important scientific works in history.

    PaV: Evolution is blind and random.

    Natural selection tends to be nearsighted, but is more than capable of directing adaptation.

    fifthmonarchyman: step one… Remove the sequence from its context and represent it as a series of numeric values. step two… see if an algorithm can reproduce the pattern in those values by any means whatsoever well enough to fool an observer.

    That doesn’t remove the necessity of background knowledge. The original sequence is just the original sequence encoded.

  389. It is amusing to see this discussion. Some people don’t seem to realize that every time they post their comments they do in practice exactly what they are trying to refute theoretically, i.e. they infer design by reading their interlocutors’ comments.

    Just as it is possible to tell gibberish from meaningful text, it is also possible to tell functional protein sequences from non-functional ones. That is objective science.

    Functions can swap or co-opt, true. But how likely is that in practice given the sparseness of functionality in protein state space?

    Keith, either your fitness function encodes functional information (irrespective of how we measure it) or you have a blind unguided search.

    Your appeal to selection does not save the day IMHO. What is fitness? How do you define it? I am sure as soon as you start defining it in biological context in practice, you will have to encode functional information in there if you want to make it practically feasible.

    Data without the Turing machine is meaningless and so is the machine without data it is designed to process.

  390. PaV: What you want to allege are available are a whole host of viable intermediates. Where, in the fossil record, or among extant species, do we find such “intermediates”? Nowhere.

    This is a bit off-topic, but you can look at adaptations in cetaceans for some pretty obvious intermediates.

  391. KS:

    With all due respect, you are off on a side track to the material question. A red herring led off to a strawman.

    I already pointed out that we can simply move forward through a log reduction analytically, then have an info metric. That allows us to bring to bear empirical observations on info based on state observation and statistics, which can allow us to go back through inverse logs, if that is what you want. Go get Durston’s result from 2007 and a calculator with the ability to do 2^n, where info values are –log probabilities.

    Just to show the point by making a calc, use:

    Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond

    log2(1/p) = 1285, so 2^(1285) = 1/p

    2^1285 ~ 6 *10^386. p ~ 1.5 *10^-387

    Not likely by any reasonable chance hyp.
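
    Anyone who wants to check that arithmetic can do so in a couple of lines of Python (only the figures quoted above are used; nothing new is assumed):

        import math

        fits = 1285                         # Durston fits for Corona S2, as quoted
        log10_inv_p = fits * math.log10(2)  # log2(1/p) = fits, so log10(1/p) = fits * log10(2)
        print(log10_inv_p)                  # ~386.8, i.e. 1/p ~ 6*10^386 and p ~ 1.5*10^-387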

    Of course, the debate is really what are the possible imagined hyps that are relevant, to which the answer is: a search for a golden search that breaks the odds is a search in the power set of the original set, where for w ~ 10^386 possibilities we are dealing with a golden search space of 2^(6*10^386) possibilities, calculator smoking territory. No reasonably likely chance-based variation that has to walk across AA config space to find the domain in which C-S lies, constrained by sparse search, is feasible, and the pattern of AA sequence space is such that there are not going to credibly be easy stepping stones a short Hamming distance apart; that is, hoped-for Weasel-like cumulative steps (ignoring for the moment Weasel’s targeting and reward of non-function) are not credible. And in fact we should realise that 3 of 64 possible randomly chosen codons are stop codons. What is a short random step away is a STOP.

    Which is probably a built-in backup failsafe.

    That is, the usual out of calling on incremental success to climb the fitness hill, or exaptation of proteins doing something else, etc., does not look very feasible. No reasonable chance hyp is likely to deliver a search for a golden search.

    As already pointed out.

    But that is not the root problem, the real problem is that we deal with only sparse possible search of very large config spaces with deeply isolated islands of function.

    We already know from sampling theory that in such cases the odds of hitting on islands of function are negligibly different from zero in times and scopes of atomic matter relevant to the sol system or the observed cosmos.

    For just 500 bits, the sol system’s atomic resources can sample about 1 straw to a cubical haystack comparably thick to our galaxy. Go to 1,000 and that swallows up observed cosmos resources. (Remember, the first pivotal case is Darwin’s pond or the like, and the question is to pull out of available physical, thermodynamic and chemical interactions a plausible framework for blind watchmaker thesis evo that ends in a gated, encapsulated, metabolising, protein-using cell with coded D/RNA and von Neumann self-replication, all to be explained. Enormous functionally specific complex organisation and associated information.)
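
    The arithmetic behind that straw-to-haystack claim is simple; here is a sketch using the usual round figures (about 10^57 atoms in the sol system, fast chemical event rates, cosmological timescales; these are assumed orders of magnitude, not measurements):

        import math

        atoms   = 1e57        # assumed: atoms in the sol system (order of magnitude)
        rate    = 1e14        # assumed: fast chemical events per atom per second
        seconds = 1e17        # assumed: roughly the age of the cosmos in seconds
        samples = atoms * rate * seconds      # ~10^88 total observations possible
        configs = 2.0 ** 500                  # ~3.3*10^150 configs for just 500 bits
        print(math.log10(configs / samples))  # ~62: short by ~62 orders of magnitude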

    We know from the dynamics of complex interactive systems exhibiting FSCO/I that correct organisation sharply constrains possible configs, leading to isolated islands of function with vastly more non-functional possibilities. Similar sparse search challenges obtain in the case of moving from one island to a different archipelago, i.e. a novel body plan or a few dozen.

    So, no, we do not need to calculate p(T|H), though we can work back to it from information metrics that reveal what amount of real-world exploration of e.g. proteins in AA space is possible and recorded across the world of life.

    The observed pattern is well known: thousands of diverse, structurally isolated protein fold-function clusters, a lot of which have only a few members.

    That, in light of sparse search, points to only a limited role for stochastic generation of folds. Which means that the other main engine of high contingency must be seriously considered, design.

    There’s been much huffing and puffing and blowing at Dembski’s CSI metric, but in the end all it needed to do was to establish that we are dealing with an info beyond a threshold situation. The info can be empirically estimated, as can reasonable threshold values.

    The result is, that the workhorse molecules of life are grossly unlikely to emerge by blind chance and/or mechanical necessity, and without hundreds of diverse proteins, no living cell.

    FSCO/I, on the other hand, routinely comes about by intelligently directed configuration.

    KF

  392. This is a bit off-topic, but you can look at adaptations in cetaceans for some pretty obvious intermediates.

    Intermediate in form doesn’t mean intermediate in terms of evolutionary relationships.

  393. As predicted, KF is dodging the question.

    He cannot calculate P(T|H) for a biological phenomenon, and he knows it.

  394. PaV @ 377 wrote:

    I have a degree in biology from UCLA.

    You might want to consider asking for your money back.
    PaV @ 376 wrote:

    In the over ten years that UD has been around, no one has said that the mutation rate of an organism’s genome is NOT uniformly distributed. When population geneticists do their calculations here, they assume that the mutation rate is free to occur throughout the entire range of the genome.

    You clearly haven’t been paying attention.

    DNA_Jock @ 243, November 1st, on the elephant thread:
    I would use the word stochastic; I agree that modeling the individual transitions as uniform p is okay for practical purposes, although you might want to distinguish transitions from transversions.

    Note that I was pointing out that although gpuccio’s assumption re uniform p was wrong, I did not raise the issue to “somehow undermine ID”. Rather I allowed that his assumption was okay for practical purposes (in the particular context that we were discussing). So your “there’s a tendency to be disingenuous” insult misses the mark. If I were kf, I would demand a retraction. Heh.

    Now I’ll offer you a little leeway, in that the transition/transversion distinction doesn’t affect the probability that nucleotide #234,123 will mutate, it merely biases the possible outcomes.

    However, you are still hopelessly wrong, since the probability that CpG will mutate is higher than for any other dinucleotide. With your degree in biology from UCLA, you should know this.
    Now that I think about it, given your demonstrated inattention to detail, perhaps UCLA was not at fault here. Rather than you asking them for your money back, perhaps they should be asking you for your diploma back.

    I promise to deal with your convoluted logic re Dembski’s NFL just as soon as I have stopped laughing.

  395. Well keith, Kf thinks he is calculating a p(T|H) using Durston’s fits data, but Dembski would not approve, were he here.
    Durston’s fit measures the average reduction in uncertainty associated with a residue, where the target is the sequence itself, plus its immediate neighbors of ~equal fitness. Of course the target should be ALL sequences with equal or greater fitness.
    And Durston’s H is a random independent draw from the entire sequence space, which is so far removed from Dembski’s “appropriate chance hypothesis” as to be laughable.
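
    To see concretely what a per-residue “reduction in uncertainty” calculation looks like, here is a toy Python version on an invented ten-sequence alignment column (not Durston’s actual data or code, just the general Shannon-style bookkeeping):

        import math
        from collections import Counter

        def site_entropy(column):
            # Shannon entropy (bits) of the residues observed at one alignment site
            counts = Counter(column)
            n = len(column)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        column = list("AAAAAAAAAG")      # invented column: a highly conserved site
        null_h = math.log2(20)           # null: uniform draw over 20 amino acids
        print(round(null_h - site_entropy(column), 2))  # ~3.85 bits at this site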

    So, still no calculation of p(T|H) for any biological phenomenon. Ever.

  396. 5th:

    Another interesting thing is that the algorithm just keeps plugging along for eternity. Only a non-computable conscious agent has the ability to halt the program and discover that anything whatsoever of interest has been produced at all.

    Yep.

    Endless numbers of monkeys, furiously typing away,
    Might make something worthy of Shakespeare one very fortunate day.
    But which of those studious simians will then stand up and say,
    “By Jove, this is quite good! It would make a fine play!”

  397. wd400:

    He had a pretty good reason — there were no Phyla in the classification used at the time. I guess he could’ve gone to Kingdom, but I don’t quite see why you’d throw a book aside for not reaching the end of a series of names.

    Because it’s illogical. The only way that this makes any kind of sense is if you assume a few things. Darwin assumed that the earth was quasi-eternal, influenced by Hutton at Edinburgh. We know he was wrong about that. The second assumption is that along this quasi-eternal time line, species simply morph one into the other, so that what, at one point in time (one manifestation of his branching diagram), is a “species” becomes over some long time interval a “genus,” only to then, over the next time frame, become part of a “family,” and then an “order,” and then BACK to being a “species,” now that all sorts of its siblings have died off, and it’s ready for more diversification of “character.” (The notion of extinction is absolutely necessary for his view.) Darwin sees this as almost endless. It’s the “special theory of relativity” applied to taxonomy.

    While in 1859 you might countenance such a supposition, from the 21st century this looks like rubbish. Hence the book was thrown down in disgust.

    Here’s Darwin himself:

    I see no reason to limit the process of modification, as now explained, to the formation of genera alone. If, in our diagram, we suppose the amount of change represented by each successive group of diverging dotted lines to be very great, the forms marked a14 to p14, those marked b14 and f14, and those marked o14 to m14, will form three very distinct genera. We shall also have two very distinct genera descended from (I); and as these latter two genera, both from continued divergence of character and from inheritance from a different parent, will differ widely from the three genera descended from (A), the two little groups of genera will form two distinct families, or even orders, according to the amount of divergent modification supposed to be represented in the diagram. And the two new families, or orders, will have descended from two species of the original genus; and these two species are supposed to have descended from one species of a still more ancient and unknown genus.

    How do you get to a “class” or a “phylum”?

    This is the problem. Why? Because when species are arranged, they’re arranged into a hierarchy of either ‘clades’ or defining characteristics of what are assumed to be related species. The “class/phylum” would contain the entirety of all demarcated characteristics, which would be sub-divided into “orders”, which are subdivided, etc. Each division will include a smaller number of characteristics than found in the grouping above.

    But using Darwin’s methodology and thinking would mean that the only way you can arrive at a “class/phylum” level would be after a greater period of “diversification,” which would place the “phylum” at the “top” of the nested hierarchy in terms of geological time. But the fossil record is just the opposite. And we know it. And I knew it. And so the book got tossed.

    This is exactly Meyer’s argument in his Darwin’s Doubt.

    It’s based on some observations about how fitness changes in a population relative to genetic diversity.

    That’s what it tries to describe. That’s not what it’s based on.

  398. R:

    Perhaps, you would be well-advised to ponder a couple of def’ns from dictionaries before the current wave of evolutionary materialist scientism (a fair comment description of an ideology and associated school of thought, description not namecalling . . . cf below):

    science: a branch of knowledge conducted on objective principles involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [Concise Oxford, 1990 — and yes, they used the “z” Virginia!]

    scientific method: principles and procedures for the systematic pursuit of knowledge [”the body of truth, information and principles acquired by mankind”] involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [Webster’s 7th Collegiate, 1965]

    Contrast this, from only a decade after the 1990 OED defined science as above and at about the time of notorious tactics used by the same National Science Teachers Association in Kansas, which comes from a board level discussion:

    Although no single universal step-by-step scientific method captures the complexity of doing science, a number of shared values and perspectives characterize a scientific approach to understanding nature. Among these are a demand for naturalistic explanations supported by empirical evidence that are, at least in principle, testable against the natural world. Other shared elements include observations, rational argument, inference, skepticism, peer review and replicability of work . . . .

    Science, by definition, is limited to naturalistic methods and explanations and, as such, is precluded from using supernatural elements [–> a strawman laced with implicit hostilities, the issue has been that here are reasonable and tested reliable signs that point to ART not blind chance and mechanical necessity as best causal explanation for certain things in the natural world, and that has been what has been on the table since Plato in The Laws Bk X 360 BC] in the production of scientific knowledge. [NSTA, Board of Directors, July 2000. Emphases added.]

    In short, ideological imposition on the longstanding historically rooted definitions that shaped how major dictionaries reported on what science and its methods were in the 10 – 40 years before the US NSTA tried to define how teachers should teach students about science.

    But that reflects the wider issue that Harvard Biologist Lewontin reported as a member of the scientific elites:

    . . . to put a correct view of the universe into people’s heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural [–> notice again] explanations of the world, the demons [–> notice loaded word, echoing Sagan in the book being reviewed] that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [–> Already, we see the ideology of scientism defined in a nutshell, a priori evolutionary materialism will follow, it being well known that evolutionary materialist scientism uses presumed powers of evolutions to account for the observed cosmos from hydrogen to humans. NB: the claim advanced is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . .
    It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [–> another major begging of the question . . . multiplied by the imposition of a claimed monopoly of “Science” on begetting truth. Thus, evolutionary materialist scientism, which imposes materialistic conclusions before facts can speak . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [–> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [“Billions and Billions of Demons,” NYRB, January 9, 1997. If you imagine this is “quote mining” kindly cf the linked more extended, annotated cite.]

    No wonder Philip Johnson replied:

    For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

    . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

    There is such a thing as fair and reasonably justified comment, R, and the history above (and much more) shows what has been going wrong. There really has been an evolutionary materialist magisterium that gained boldness in the post Sputnik years, and in recent decades has sought to impose a fairly radical a priori ideology on both science and science education.

    And so forth, but I will not further go into side issues and personalities in this thread.

    Your turnabout attempt fails.

    KF

  399. DJ, KS et al, it seems that we are back to: if you calculate and show, we deny. If you point out the exponentially more difficult nature of a search for a golden search that is blindly discovered and magically outperforms reasonable random searches that could reasonably produce observed diversity in proteins, that is not even noticed. If you point out why the calc is not needed, as the information metric is rooted in the observable state and/or statistics of observable variation of proteins, we dismiss or ignore. If you show the root challenge from OOL forward, we are not interested. Such, sadly, speaks for itself. KF

  400. PaV @ 376

    DNA_Jock:

    …you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.

    From this statement, I would conclude you haven’t read Dembski’s NFL book. What is needed are two things: (1) recognition of the pattern, and (2) knowledge of the mechanism by which the pattern is formed—IOW, you have to be able to calculate the probability of the “pattern” happening by ‘chance’ given the mechanism utilized in developing the “pattern.”

    Here’s the fun thing about logic, PaV. You can arrive at a factually correct conclusion via faulty logic.
    There is no contradiction between what I said, and your paraphrase of Dembski’s point. Although if Dembski really said that you need to know the mechanism by which the pattern was formed, it sounds like a death knell for ID.
    So I haven’t read Dembski’s NFL. But I am curious. Maybe there is something else in the book that refutes one or both of my two statements above, which would thus restore your logic. Do tell. Could you also please describe to me how he applies the No Free Lunch Theorem to biological “Search”. In order to not be disingenuous, I should warn you that this latter request is a trap.

  401. DNA_Jock:

    You’ve shown your true colors with your mocking style. So we know what kind of character you are.

    Look, you should be smart enough to realize that what I presented as a “proof” is nothing like a “proof.” That is so obvious that you should have been looking for something else.

    Your response is exactly what I was getting at:

    Note that I was pointing out that although gpuccio’s assumption re uniform p was wrong, I did not raise the issue to “somehow undermine ID”. Rather I allowed that his assumption was okay for practical purposes (in the particular context that we were discussing).

    No, it’s not an i.i.d. But it’s almost like that. Yes, with CpGs and other such instances, we know that transitions/transversions aren’t uniform. But, overall, given the entirety of the genome and what we see bases doing, it’s a close approximation.

    And, that’s the point. Population guys don’t bother with it because it usually doesn’t make a difference.

  402. PaV,

    I’ve no idea what you’re talking about with regard to taxonomy. What does this mean?

    But using Darwin’s methodology and thinking would mean that the only way you can arrive at a “class/phylum” level would be after a greater period of “diversification,” which would place the “phylum” at the “top” of the nested hierarchy in terms of geological time.

    Phyla (and other lineages) arise by speciation followed by divergence. That’s what Darwin was saying, that’s what modern evolutionary biology has shown.

    What do you think the fundamental theorem of natural selection is based on — you seem to be dying to shock us with this revelation…

  403. PaV: How do you get to a “class” or a “phylum”?

    By the same process of diversification from a common ancestor. A class is just a successful lineage that has diversified over a long period of time.

  404. DNA_Jock:

    Did none of this tip you off?

    I will now present PROOF that the genome is, indeed, uniformly distributed across genome space!!!!!

    Trumpets, please!!! Drum roll!!!!

  405. BTW: It is a commonplace in modelling and math work, as well as the experimental sciences, to use transformations of quantities and variables to make them amenable for further work. I discussed Laplace transforms and Z transforms above. A common simple case is the use of log-log and log-linear graph paper. Some would add plotting graphs too. And, algebraic and calculus manipulation of variables while reserving statistics and calcs for a later stage is part of good praxis, not least as it gets around error propagation problems. But in this case the dominant reason is that moving to the logged-out expression reveals more plainly what it is doing: extracting a measure of info beyond a threshold. Info being more directly accessible empirically. And, the relevant chancy hyps in play at OOL have to do with thermodynamics and chemical kinetics that are long since discussed. At onward cases, design is already sitting at the table, and the pattern of protein folds in AA space and distributions of AAs in proteins already strongly points to islands of function not reasonably accessible to sparse search constrained by available time and atoms. The hoped-for grand continent of incrementally improving function across a broad tree of life severely lacks empirical warrant. So it is quite reasonable to apply reasonable chance hyps that may be a bit biased, but not too much so and not in ways correlated to finding THOUSANDS of islands of function in AA sequence config possibilities space. KF
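
    As a bare-bones illustration of that log reduction, using only the Corona S2 figures already given in 391 above:

        # Work with information I = -log2(p) instead of the raw probability;
        # "Chi" is then just I minus a 500-bit threshold.
        def chi_500(info_bits, threshold=500):
            return info_bits - threshold

        print(chi_500(1285))   # 785 bits beyond the threshold, as in 391 above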

  406. PS: Thermo-D would point to spontaneous breakdown, for excellent reasons. There is a reason why protein assembly in the cell uses such a specifically constraining step by step numerically controlled complex assembling system in the Ribosome. The unconstrained trend would not go there.

  407.

    Zac said,

    That doesn’t remove the necessity of background knowledge. The original sequence is just the original sequence encoded.

    I say,

    Agreed, but the programmer is now free to use the same background information, as well as any other information he can think of, as long as it does not come from the original string.

    In theory he can access all the CSI in the universe except that which is original in the designer of the original string.

    peace

    fifthmonarchyman: Agreed, but the programmer is now free to use the same background information, as well as any other information he can think of, as long as it does not come from the original string.

    You mean the encoding is secret or something? So, do you think Shakespeare could do it?

  409. DNA_Jock:

    So I haven’t read Dembski’s NFL. But I am curious. Maybe there is something else in the book that refutes one or both of my two statements above, which would thus restore your logic. Do tell. Could you also please describe to me how he applies the No Free Lunch Theorem to biological “Search”. In order to not be disingenuous, I should warn you that this latter request is a trap.

    Why don’t you buy the book, or check it out from a library, and read for yourself?

    BTW, I didn’t say that my statements “refuted” what you had written, did I!
    Well, DNA_J, did I say that?

    I was merely pointing out that that is NOT how Dembski’s method works. What I said was that it was clear that you hadn’t read Dembski’s book. And you have made it clear that I was right in reaching that conclusion.

    I then WENT ON to tell you the two things that are needed for Dembski’s method to apply.

    Yes, Dembski has since been criticized because in many situations the actual mechanism, and the probability distribution associated with it, can be hard to know. This is a weakness. But I don’t think it invalidates his method; it only demonstrates its limitation.

    When it comes to biological entities, IIRC, Dembski uses, or assumes, a uniform distribution over the bases in making his calculations. If you want to be picayune, yes, indeed, it is not such a distribution. But all of science relies on approximations. Nothing is exact. The mathematics are too complicated to do without them. And they’re all over the place. It’s only ID that is raked over the coals about these kinds of things.

  410. KF,

    You claim that you can identify instances of design by calculating FSCO/I.

    The FSCO/I equation requires the calculation of P(T|H), where H includes “Darwinian and other material mechanisms”, per Dembski.

    If you can’t calculate P(T|H), you can’t calculate FSCO/I.

  411. PaV at 380 comments:

    In my life there have only been three books I’ve thrown down in disgust.
    The first was “Origin of Species” when Darwin dares to say that “species give rise to genera, genera to families, families to orders, and orders to classes.” (from memory) This is just silliness. Why? Because he has no justification whatsoever for stopping at “classes”!
    If you can’t get ‘higher’ than a “class,” then how do you get a “phylum”? So, where do the phyla come from? Are they there from the beginning? If so, how did they form?
    Well, of course, Darwin thinks he’s off the hotseat because at the end he says: “There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one . . .”
    Please explain who is doing this “breathing.” Darwin doesn’t. And then, so many editions later—when no one is watching, he drops the phrase.

    To which wd400 at 385 retorts,,,

    He had a pretty good reason — there were no Phyla in the classification used at the time. I guess he could’ve gone to Kingdom, but I don’t quite see why you’d throw a book aside for not reaching the end of a series of names.

    Which is, given the fact that wd400 is intelligent, to purposely miss the point that PaV was making. The point PaV was making is that the highest rankings in classification are not supposed, in Darwin’s ‘bottom up’ scheme of things, to be reached until after a long slow process of gradual accumulation of changes. But the biological classification scheme itself presupposes a ‘top down’ structure that is the opposite of what Darwin claimed.

    Darwin’s claim again is as such:

    “species give rise to genera, genera to families, families to orders, and orders to classes.” (from memory)

    Yet the actual hierarchy of biological classification itself is as such:

    Life, Domain, Kingdom, Phylum, Class, Order, Family, Genus, Species
    http://upload.wikimedia.org/wi....._vflip.svg

    As they used to ask on Sesame Street when I was growing up: can you tell what does not belong in this picture?
    In Darwin’s ‘bottom up’ scheme species were first. Yet in the actual classification species are last!
    Moreover, the ‘top down’ pattern, in which species appear last, which is completely antithetical to Darwin’s ‘bottom up’ scenario, is, more or less, what we actually observe in the fossil record.

    The Ham-Nye Creation Debate: A Huge Missed Opportunity – Casey Luskin – February 4, 2014
    Excerpt: “The record of the first appearance of living phyla, classes, and orders can best be described in Wright’s (1) term as ‘from the top down’.”
    (James W. Valentine, “Late Precambrian bilaterians: Grades and clades,” Proceedings of the National Academy of Sciences USA, 91: 6751-6757 (July 1994).)
    http://www.evolutionnews.org/2.....81911.html

    Investigating Evolution: The Cambrian Explosion Part 1 – (4:45 minute mark – upside-down fossil record) video
    http://www.youtube.com/watch?v=4DkbmuRhXRY
    Part 2 – video
    http://www.youtube.com/watch?v=iZFM48XIXnk

    Chinese microscopic fossil find challenges Darwin’s theory – 11 November, 2014
    Excerpt: One of the world’s leading researchers on the Cambrian explosion is Chen Junyuan from the Nanjing Institute of Palaeontology and he said that his fossil discoveries in China show that “Darwin’s tree is a reverse cone shape”. A senior research fellow at Chengjiang Fauna [fossil site] said, “I do not believe the animals developed gradually from the bottom up, I think they suddenly appeared”.
    http://www.scmp.com/comment/le.....ins-theory
    “Darwin had a lot of trouble with the fossil record because if you look at the record of phyla in the rocks as fossils why when they first appear we already see them all. The phyla are fully formed. It’s as if the phyla were created first and they were modified into classes and we see that the number of classes peak later than the number of phyla and the number of orders peak later than that. So it’s kind of a top down succession, you start with this basic body plans, the phyla, and you diversify them into classes, the major sub-divisions of the phyla, and these into orders and so on. So the fossil record is kind of backwards from what you would expect from in that sense from what you would expect from Darwin’s ideas.”
    James W. Valentine – as quoted from “On the Origin of Phyla: Interviews with James W. Valentine”

    The unscientific hegemony of uniformitarianism – David Tyler – May 2011
    Excerpt: The pervasive pattern of natural history: disparity precedes diversity,,,, The summary of results for phyla is as follows. The pattern reinforces earlier research that concluded the Explosion is not an artefact of sampling. Much the same finding applies to the appearance of classes. These data are presented in Figures 1 and 2 in the paper.
    http://www.arn.org/blogs/index.....niformitar

    Moreover, disparity (large differences) preceding diversity (small differences) is not only found in the Cambrian Explosion but is found after it as well. In fact it is a defining characteristic of the overall fossil record.

    Scientific study turns understanding about evolution on its head – July 30, 2013
    Excerpt: evolutionary biologists,,, looked at nearly one hundred fossil groups to test the notion that it takes groups of animals many millions of years to reach their maximum diversity of form.
    Contrary to popular belief, not all animal groups continued to evolve fundamentally new morphologies through time. The majority actually achieved their greatest diversity of form (disparity) relatively early in their histories.
    ,,,Dr Matthew Wills said: “This pattern, known as ‘early high disparity’, turns the traditional V-shaped cone model of evolution on its head. What is equally surprising in our findings is that groups of animals are likely to show early-high disparity regardless of when they originated over the last half a billion years. This isn’t a phenomenon particularly associated with the first radiation of animals (in the Cambrian Explosion), or periods in the immediate wake of mass extinctions.”,,,
    Author Martin Hughes, continued: “Our work implies that there must be constraints on the range of forms within animal groups, and that these limits are often hit relatively early on.
    Co-author Dr Sylvain Gerber, added: “A key question now is what prevents groups from generating fundamentally new forms later on in their evolution.,,,
    http://phys.org/news/2013-07-s.....ution.html

    “It is a feature of the known fossil record that most taxa appear abruptly. They are not, as a rule, led up to by a sequence of almost imperceptibly changing forerunners such as Darwin believed should be usual in evolution…This phenomenon becomes more universal and more intense as the hierarchy of categories is ascended. Gaps among known species are sporadic and often small. Gaps among known orders, classes and phyla are systematic and almost always large.”
    G.G. Simpson – one of the most influential American paleontologists of the 20th century

    “Given the fact of evolution, one would expect the fossils to document a gradual steady change from ancestral forms to the descendants. But this is not what the paleontologist finds. Instead, he or she finds gaps in just about every phyletic series.” –
    Ernst Mayr-Professor Emeritus, Museum of Comparative Zoology at Harvard University

    “What is missing are the many intermediate forms hypothesized by Darwin, and the continual divergence of major lineages into the morphospace between distinct adaptive types.”
    Robert L Carroll (born 1938) – vertebrate paleontologist who specialises in Paleozoic and Mesozoic amphibians

    “In virtually all cases a new taxon appears for the first time in the fossil record with most definitive features already present, and practically no known stem-group forms.”
    Fossils and Evolution, TS Kemp – Curator of Zoological Collections, Oxford University, Oxford Uni Press, p246, 1999

    What Darwin predicted should be familiar to everyone and is easily represented in the following ‘tree’ graph.,,,

    The Theory – Diversity precedes Disparity – graph
    http://www.veritas-ucsb.org/JOURNEY/IMAGES/F.gif

    But that ‘tree pattern’ that Darwin predicted is not what is found in the fossil record. The fossil record reveals that disparity (the greatest differences) precedes diversity (the smaller differences), which is the exact opposite pattern for what Darwin’s theory predicted.

    The Actual Fossil Evidence- Disparity precedes Diversity – graph
    http://www.veritas-ucsb.org/JOURNEY/IMAGES/G.gif

  412. Zachriel:

    By the same process of diversification from a common ancestor. A class is just a successful lineage that has diversified over a long period of time.

    However, per Darwin, the characteristics that are used to classify the “class” would have developed over long stretches of time, instead of being there from the beginning. Mammals evolved, but the fundamental characteristics of what a mammal is appeared suddenly. This contradicts his taxonomic relativism.

    Yes, outwardly they change in many different ways, but it’s always the same body-plan.

    That’s how I see it. And I think that’s how Meyer sees it. I’m rather comfortable with his latest book.

  413. wd400:

    Phyla (and other lineages) arise by speciation followed by divergence. That’s what Darwin was saying, that’s what modern evolutionary biology has shown.

    But that’s not what we see in the Cambrian Explosion. We see almost all of major phyla/body-plans arise quickly (geologically speaking) and THEN diversify.

    We talk about fish and birds and reptiles and dinosaurs and mammals, but we’re really talking about “vertebrates.” The body plan was there from the beginning.

  414. PaV,

    My mocking style?

    Motes and beams, mate.

    “Go away little girl.”

    Thank you for confirming your total logic fail. Those two statements did not represent any misunderstanding of Dembski’s method. I never suggested that they were a refutation thereof. And they remain true.
    But you said “From this statement, I would conclude”
    I did, in a passage that you did not quote, allude to the difficulty in specifying a pattern: “just ask a statistician,” I quipped.

    Thank you too for confirming that you are unwilling to actually discuss the thesis of his book, that NFL theory can be applied to “evolutionary search”.

    I note that Kairosfocus still hasn’t calculated p(T|H).

  415. PaV,

    Yes, Dembski has since been criticized because in many situations the actual mechanism, and the probability distribution associated with it, can be hard to know. This is a weakness.

    It’s a fatal weakness, because no one can calculate P(T|H) for a biological phenomenon. Look how kairosfocus is squirming to avoid the question. He knows he can’t do the calculation, but he is ashamed to admit it.

    What’s even worse is that even if Dembski (or KF) could calculate P(T|H), that wouldn’t make CSI a useful concept.

    Here’s why: You have to know that P(T|H) is low in order to attribute CSI to it. But if you already know that P(T|H) is low, then you don’t need the CSI concept at all, because you’ve already determined that the phenomenon in question could not have evolved.

    It’s circular:

    1. Determine that something could not have evolved.
    2. Assign CSI to it.
    3. Conclude that it could not have evolved because it has CSI.

    It’s amazing to me that ID proponents don’t see the problem. At least Dembski was smart enough to dump CSI and move on to his “search for a search” stuff with Marks.

  416. “Well keith, Kf thinks he is calculating a p(T|H) using Durston’s fits data”

    Can I also point out that Durston’s “fits” aren’t exactly a well-accepted parameter in biochemistry?

    I think 5 or so papers cite that work, split between self-citation and one other group. Not exactly taking science by storm.

    And defining the “fits” based on conservation of sequences (selected by a specification) in living organisms and then calling that the target space for improved fitness in evolution is….problematic.

    It suffers all the issues I’ve outlined above.

    Not to say these approaches in quantifying bits/amino acid aren’t useful. We make consensus proteins from the most highly conserved amino acids in a given domain. Nice, stable scaffolds result.

  417.

    Zac said,

    You mean the encoding is secret or something? So, do you think Shakespeare could do it?

    I say,

    The first string is just a numerical representation of the sonnet.

    Yes, Shakespeare did do it; that much is assumed.

    peace

  418. PaV: However, per Darwin, the characteristics that are used to classify the “class” would have developed over long stretches of time, instead of being there from the beginning.

    Yes, changes accumulate in each lineage.

    PaV: Mammals evolved, but the fundamental characteristics of what a mammal is appeared suddenly.

    Not necessarily. For instance, mammaries don’t fossilize, but even simple secretions can help nourish and protect the young, then evolve over time due to reinforcing selection. If we look at the middle ear, another mammalian characteristic, there are some excellent fossils showing the transition.

    PaV: Yes, outwardly they change in many different ways, but it’s always the same body-plan.

    Sure. Humans are just modified deuterostomes, a tube with appendages to stuff food into one end. Microevolution!

    PaV: We see almost all of major phyla/body-plans arise quickly (geologically speaking) and THEN diversify.

    Sure. Humans are modified deuterostomes. Nothing much has changed since the Cambrian.

    (It’s called adaptive radiation.)

    fifthmonarchyman: The first string is just a numerical representation of the sonnet.

    So? What does that do? The algorithm (or Shakespeare) has a dictionary, knowledge of grammar, scansion, poetic structure, the relationship of words, the catalog of poetry. The question is whether the algorithm can create a sonnet. What does it profit to change it to numbers? It would certainly confuse Shakespeare.

  419.

    Zac said,

    What does it profit to change it to numbers?

    I say,

    It removes the string from its context. For all the programmer knows, it’s a representation of a protein or of fluctuations in the temperature of a heat source.

    You say,

    It would certainly confuse Shakespeare.

    I say,

    That is the point of separating the string from its context.

    PS

    FYI, I feel my frustration level increasing; time to take a break.

    peace

  420. a few notes as to the ‘top down’ perspective:

    The Cambrian’s Many Forms
    Excerpt: “It appears that organisms displayed “rampant” within-species variation “in the ‘warm afterglow’ of the Cambrian explosion,” Hughes said, but not later. “No one has shown this convincingly before, and that’s why this is so important.””From an evolutionary perspective, the more variable a species is, the more raw material natural selection has to operate on,”….(Yet Surprisingly)….”There’s hardly any variation in the post-Cambrian,” he said. “Even the presence or absence or the kind of ornamentation on the head shield varies within these Cambrian trilobites and doesn’t vary in the post-Cambrian trilobites.” University of Chicago paleontologist Mark Webster; article on the “surprising and unexplained” loss of variation and diversity for trilobites over the 270 million year time span that trilobites were found in the fossil record, prior to their total extinction from the fossil record about 250 million years ago.
    http://www.terradaily.com/repo.....s_999.html

    Dollo’s law and the death and resurrection of genes:
    Excerpt: “As the history of animal life was traced in the fossil record during the 19th century, it was observed that once an anatomical feature was lost in the course of evolution it never staged a return. This observation became canonized as Dollo’s law, after its propounder, and is taken as a general statement that evolution is irreversible.”
    http://www.pnas.org/content/91.....l.pdf+html

    A general rule of thumb for the ‘Deterioration/Genetic Entropy’ of Dollo’s Law as it applies to the fossil record is found here:

    Dollo’s law and the death and resurrection of genes
    ABSTRACT: Dollo’s law, the concept that evolution is not substantively reversible, implies that the degradation of genetic information is sufficiently fast that genes or developmental pathways released from selective pressure will rapidly become nonfunctional. Using empirical data to assess the rate of loss of coding information in genes for proteins with varying degrees of tolerance to mutational change, we show that, in fact, there is a significant probability over evolutionary time scales of 0.5-6 million years for successful reactivation of silenced genes or “lost” developmental programs. Conversely, the reactivation of long (>10 million years)-unexpressed genes and dormant developmental pathways is not possible unless function is maintained by other selective constraints;
    http://www.pnas.org/content/91.....l.pdf+html

    Dollo’s Law was further verified to the molecular level here:

    Dollo’s law, the symmetry of time, and the edge of evolution – Michael Behe
    Excerpt: We predict that future investigations, like ours, will support a molecular version of Dollo’s law:,,, Dr. Behe comments on the finding of the study, “The old, organismal, time-asymmetric Dollo’s law supposedly blocked off just the past to Darwinian processes, for arbitrary reasons. A Dollo’s law in the molecular sense of Bridgham et al (2009), however, is time-symmetric. A time-symmetric law will substantially block both the past and the future.
    http://www.evolutionnews.org/2.....f_tim.html

    Evolutionary Adaptations Can Be Reversed, but Rarely – May 2011
    Excerpt: They found that a very small percentage of evolutionary adaptations in a drug-resistance gene can be reversed, but only if the adaptations involve fewer than four discrete genetic mutations. (If reverting to a previous function, which is advantageous, is so constrained, what does this say about gaining a completely novel function, which may be advantageous, which requires many more mutations?)
    http://www.sciencedaily.com/re.....162538.htm

    From Thornton’s Lab, More Strong Experimental Support for a Limit to Darwinian Evolution – Michael Behe – June 23, 2014
    Excerpt: In prior comments on Thornton’s work I proposed something I dubbed a “Time-Symmetric Dollo’s Law” (TSDL).3, 8 Briefly that means, because natural selection hones a protein to its present job (not to some putative future or past function), it will be very difficult to change a protein’s current function to another one by random mutation plus natural selection.
    But there was an unexamined factor that might have complicated Thornton’s work and called the TSDL into question. What if there were a great many potential neutral mutations that could have led to the second protein? The modern protein that occurs in land vertebrates has very particular neutral changes that allowed it to acquire its present function, but perhaps that was an historical accident. Perhaps any of a large number of evolutionary alterations could have done the same job, and the particular changes that occurred historically weren’t all that special.
    That’s the question Thornton’s group examined in their current paper. Using clever experimental techniques they tested thousands of possible alternative mutations. The bottom line is that none of them could take the place of the actual, historical, neutral mutations. The paper’s conclusion is that, of the very large number of paths that random evolution could have taken, at best only extremely rare ones could lead to the functional modern protein.
    http://www.evolutionnews.org/2.....87061.html

    Some Further Research On Dollo’s Law – Wolf-Ekkehard Lonnig – November 2010
    http://www.globalsciencebooks......)1-21o.pdf

    A. L. Hughes’s New Non-Darwinian Mechanism of Adaption Was Discovered and Published in Detail by an ID Geneticist 25 Years Ago – Wolf-Ekkehard Lönnig – December 2011
    Excerpt: The original species had a greater genetic potential to adapt to all possible environments. In the course of time this broad capacity for adaptation has been steadily reduced in the respective habitats by the accumulation of slightly deleterious alleles (as well as total losses of genetic functions redundant for a habitat), with the exception, of course, of that part which was necessary for coping with a species’ particular environment….By mutative reduction of the genetic potential, modifications became “heritable”. — As strange as it may at first sound, however, this has nothing to do with the inheritance of acquired characteristics. For the characteristics were not acquired evolutionarily, but existed from the very beginning due to the greater adaptability. In many species only the genetic functions necessary for coping with the corresponding environment have been preserved from this adaptability potential. The “remainder” has been lost by mutations (accumulation of slightly disadvantageous alleles) — in the formation of secondary species.
    http://www.evolutionnews.org/2.....53881.html

    Verse:

    Genesis 1:25
    God made the wild animals according to their kinds, the livestock according to their kinds, and all the creatures that move along the ground according to their kinds. And God saw that it was good.

  421. D_J:

    PaV wrote:

    BTW, I didn’t say that my statements “refuted” what you had written, did I!
    Well, DNA_J, did I say that?

    I was merely pointing out that that is NOT how Dembski’s method works. What I said was that it was clear that you hadn’t read Dembski’s book. And you have made it clear that I was right in reaching that conclusion.

    I then WENT ON to tell you the two things that are needed for Dembski’s method to apply.

    DNA_Jock responds: Thank you for confirming your total logic fail. Those two statements did not represent any misunderstanding of Dembski’s method. I never suggested that they were a refutation thereof.

    Wow. Have you flipped out?

    Again, did I say that what you wrote was a “misunderstanding” of what Dembski has written, or his thought on the subject?

    Of course not.

    It was quite evident that you were unfamiliar with his writings or you would have phrased things differently. Then I “supplied” you with some of the critical elements of his method.

    I fully expected that you would react the way that you did. Why?

    Because, believe it or not, you’re not the first statistician who’s appeared at UD. And we know where the weaknesses of Dembski’s method lie. But, again, they are weaknesses, and not what, in any other sector of science, would be invalidating.

    These “weaknesses” are why such a thing as dFSCI is being discussed here.

    And they remain true.
    But you said “From this statement, I would conclude”

    Tell me, was my conclusion wrong? Yes, or no.

  422. BA77:

    Thanks for those great quotes.

    I hope for gpuccio’s sake we can get back on topic. We/he should be discussing dFSCI

  423. But that’s not what we see in the Cambrian Explosion. We see almost all of the major phyla/body-plans arise quickly (geologically speaking) and THEN diversify.

    We talk about fish and birds and reptiles and dinosaurs and mammals, but we’re really talking about “vertebrates.” The body plan was there from the beginning.

    It would be a special lineage that diversified before it existed. There are also plenty of non-vertebrate chordates that share much of our body plan but aren’t vertebrates, and indeed non-chordate deuterostomes that share our very early embryology, so I’m not sure which body plan was there at the start.

  424. Still waiting for the devastating revelation about the fundamental theorem of NS, too…

  425. Zachriel:

    I’m so happy we have such smart people like you here. It really helps.

    This article at ENV tells us:

    A new article in The Scientist, “Clocks Versus Rocks,” reports a contradiction between the fossil record and the molecular data as regards the origin of placental mammals. The problem is that, as a fossil-based study led by Maureen O’Leary found last year, “placental mammal diversity exploded” starting around 65 million years ago, but as The Scientist now puts it, “Genetic studies that compare the DNA of living placentals suggest that our last common ancestor lived between 88 million and 117 million years ago, when the dinosaurs still ruled.” So we have a conflict: fossils show the abrupt explosion of many modern mammal groups starting around 65 million years ago. However, living members of those groups are so genetically different that “molecular clock” studies suggest their origins must be deep into the Mesozoic, during the age of the dinosaurs. Which dataset are we to trust?

    It is stupendous that you would talk about “deuterostomes” when I was specifically talking about vertebrates.

    I suppose it is your thought that the Cambrian vertebrate species arose originally from these “deuterostomes.” But, of course, evidence, and not conjecture, is needed.

  426. wd400:

    I’m waiting for DNA_Jock to answer me first.

    And, BTW, this is from BA77’s link to “terradaily”:
    “The paper is relevant to the big question of what fueled the Cambrian radiation, and why that event was so singular,” said UC-Riverside’s Hughes of Webster’s study. It appears that organisms displayed “rampant” within-species variation “in the ‘warm afterglow’ of the Cambrian explosion,” Hughes said, but not later. “No one has shown this convincingly before, and that’s why this is so important.”

    The variation was there from the beginning. Evolution didn’t put the variation there first. (The quote refers to a paper that appeared in Science magazine about 7 years ago) The biggest weight hanging from the neck of Darwinism, not surprisingly, is the Fossil Record. Darwin knew from the beginning that the Fossil Record did not favor his theory. We now know this to be even more true.

  427. PaV:

    “Tell me, was my conclusion wrong? Yes, or no.”

    I already told you. It was fallacious.

    Logic not your strong point.

    Your position appears to be that I was a statistician familiar with the weaknesses of Dembski’s method, but thanks to the phrasing I used, you knew I was not familiar with his writings.

    For some strange reason, you thought I was criticizing Dembski specifically. Hence the logic fail.

    “Because, believe it or not, you’re not the first statistician who’s appeared at UD”.

    Nor the last. Nor even a statistician.

    What are they teaching kids at UCLA these days?

  428. kairosfocus, your blithering aimed at “R” is apparently aimed at me but my username is not “R”, it’s Reality. Of course the usual malicious, hypocritical, mendacious, falsely accusatory barf you spewed is aimed at me and anyone else who doesn’t kiss your butt.

    You said: “There is such a thing as fair and reasonably justified comment…”

    Yup, and all of my comments about you and to you are not only fair and THOROUGHLY justified, they’re also CORRECT.

  429. PaV

    So, please, if you can, explain to us how evolution takes place. Give us the steps, show us examples. And, of course, we’ll be very interested in all the “intermediate forms” that Darwinism supposes.

    You say you graduated from UCLA with a Biology degree, yet you managed to not learn even the most basic things about evolutionary theory. Now you want Keith S. to give you a remedial course on evolution in a few paragraphs, covering all the things you couldn’t grasp in four years.

    Interesting.

  430.

    Reality

    lots of adjectives check
    assorted words in all caps check
    reference to body orifice check
    reference to bodily fluids check

    looks like all the bases are covered

    peace

  431. keith s:

    The FSCO/I equation requires the calculation of P(T|H), where H includes “Darwinian and other material mechanisms”, per Dembski.

    And you cannot provide H and you blame us for your failures. Got it.

  432. How many iterations of an algorithm does it take to find (with proper fitness functions, of course)…

    METHINKS KEITH IS AN IDIOT

  433. Joe
    And you cannot provide H and you blame us for your failures. Got it

    You guys are the ones pushing a calculation that absolutely requires H, not us. Unlike ID, evolutionary theory relies on its own positive evidence and not some bogus probability value.

  434. Vertebrates are deuterostomes, PaV, and chordates and craniates for that matter. You are talking as if the “vertebrate body plan” was a thing unto itself, so Zachriel is right to point out it is in fact nested within biological diversity. There is no more doubt that mammals descend from earlier vertebrates than that all vertebrates descend from chordates or deuterostomes.

  435. Adapa, there isn’t any evolutionary theory and there isn’t any evidence that natural selection can do anything beyond changing allele frequency. And it isn’t the only mechanism for doing that.

    If you had some positive evidence then the calculations would be moot. The reason you need to provide H is because you don’t have the positive evidence.

  436. wd400:

    There is no more doubt that mammals descend from earlier vertebrates than that all vertebrates descend from chordates or deuterostomes.

    And there is a lot of doubt for both as there aren’t any known mechanisms capable of producing the transformations required.

  437. Whoops! There goes “argument clinic” Joe again!

    “there is no evolutionary theory!!”

    “there is no evidence for evolution!!”

    “there are no known evolutionary mechanisms!!”

    Keep telling yourself that Joe. Don’t mind the laughter from the life sciences professionals. 🙂

  438. Vishnu @433

    How many iterations of an algorithm does it take to find (with proper fitness functions, of course)
    METHINKS KEITH IS AN IDIOT

    Length of words in the English language follows a Binomial Distribution with n=38 and p=0.220887.

    So, the Probability of “Vishnu” (x >= 6) = 0.8740
    The Probability of “Keith” (x <= 5) = 0.1259

    Hence “METHINKS VISHNU IS AN IDIOT” is more appropriate
    Note: My handle has an underscore so you can’t use the above calculations for my handle 🙂
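
    For what it’s worth, a minimal sketch (assuming SciPy is available, and taking the commenter’s n and p at face value) that reproduces those two tail probabilities:

        # Binomial tail probabilities for word lengths, per the parameters above
        from scipy.stats import binom

        n, p = 38, 0.220887
        p_vishnu = binom.sf(5, n, p)    # P(X >= 6), a 6-letter handle
        p_keith  = binom.cdf(5, n, p)   # P(X <= 5), a 5-letter handle
        print(round(p_vishnu, 4), round(p_keith, 4))  # ~0.8740, ~0.1259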

  439. In case you didn’t know: I have a text file showing the first 15 minutes of learning for a programmed rudimentary intelligence, the ID Lab critter.

    https://sites.google.com/site/intelligenceprograms/Home/Run2LobeFor15Min.Txt

    The contents of its memory at each thought cycle (left then right lobe has control) can be reconstructed by saving the Data listed at each line in an array, using the Address as the element number at which to save the data.
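
    A minimal sketch of that reconstruction (the exact log format is an assumption here; adjust the parsing to the actual file):

        # Rebuild the critter's memory from (Address, Data) log lines
        memory = {}
        with open("Run2LobeFor15Min.Txt") as log:
            for line in log:
                fields = line.split()
                if len(fields) >= 2 and fields[0].isdigit():
                    address, data = int(fields[0]), fields[1]
                    memory[address] = data  # later writes overwrite earlier ones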

    I’m not sure whether this will be useful to you or not, but that’s what the numbers indicative of intelligence look like. What matters in regards to purpose and meaning is in the way the motors are being controlled. This brings us back to Movement Is Happiness, even when we just see and/or hear the right moves. In cells, molecules like motor proteins do the muscle-type work, while sensory molecules Address the memory that stores actions in Data elements called “genes”. The systematics are the same as for our brain. The only difference is that the intelligence controls molecular motor systems inside cells, instead of muscles that power our limbs.

    Intelligence might not look like much when reduced down to numbers for motor/motion control, but that’s how an intelligence works. I likewise had to get used to seeing the numbers temporally. What happens in one thought cycle depends on what happened in previous cycles before it and what will somewhat predictably happen after that. Each thought cycle is usually only one step in a learned task (such as navigating to the location it is attracted to) that can take many thought cycles to complete. Complex behavior is the result of proper timing of actions from a relatively simple motor control system. After adding consciousness and the other intelligence levels we contain, we are found to be much more than a robot, but thankfully ID theory only needs to explain the basics of the “intelligent” part, not the part that causes us to be “conscious”.

  440. KF:

    “GP, I think the problem here is that on evolutionary materialism contemplation must reduce to computation, but the deterministic mechanistic side of algors is not creative and the stochastic side is not creative enough and powerful enough to account for FSCO/I and particularly spectacular cases of dFSCI such as the sonnets in question. KF”

    This is a very good and concise summary of the important point here. Thank you! 🙂

  441. Zachriel:

    “That’s your claim, and you may be correct; but you argue that an algorithm can’t generate a sonnet, but restrict the algorithm from having access to the same background information as Shakespeare.”

    Not exactly. I argue that an algorithm cannot generate an original sonnet with an original meaning on anything, even if the algorithm has access to some corpus of information (let’s say some encyclopedia). I am not requiring that the sonnet should be as good as Shakespeare’s (OK, that would be really exacting!), or even some deep and beautiful piece of poetry. Indeed, in my general argument, I did not even require that it be a sonnet, or poetry at all: just that it could have original good meaning in English and be 600 characters long.

    So, my requests, and my indications of what an algorithm can do as far as we know today, are really limited.

  442. Zachriel at #373:

    “We can show that such a process can find solutions to complex problems.”

    That’s true. But I would add: complex problems which have already been defined, directly or indirectly, by the programmer, and can be solved by the computational powers of the algorithm and the computing machine.

  443. Adapa at 374:

    “You guys look at one result after the fact then confuse it with a before the fact prediction and claim “ZOMG that result is too improbable it must be designed!!” You could make the same erroneous claim with anyone who won.”

    The old wrong silly argument.

    Try to compare these two statements:

    a) A lottery which sold 10^150 tickets was won by one of the people who acquired a ticket.

    Post-specification: “one of the people who acquired the ticket”. Probability of the event (as judged by the post-specification): 1.

    b) A lottery which sold 10^150 tickets was won by the brother of the functionary who presided over the extraction to check its regularity.

    Post-specification: “the brother of the functionary who presided over the extraction to check its regularity”. Probability of the event (as judged by the post-specification): 1:10^150 (if there is only one brother 🙂 ).

    Conclusions: I leave them to you (or the judge!).

    This is an example of design detection (and the detected design is a fraud).

    I would appreciate a clear and explicit answer to this from you. Thank you.

  444. DNA_Jock:

    I would very much appreciate a comment from you to my example in post #444, regarding post-specification.

  445. fifthmonarchyman at #382:

    Very interesting. Again, give us details and keep us updated. 🙂

  446. EugeneS at #390 and Phineas at #397:

    🙂

    This is an important point. Thank you for your contribution!

  447. PaV at #423:

    “I hope for gpuccio’s sake we can get back on topic. We/he should be discussing dFSCI”

    I hope that too! And thank you, always, for your contributions. You know how I appreciate them!

  448. fifthmonarchyman at #431:

    Is that some form of design detection? 🙂

  449. Vishnu at #433:

    “How many iterations of an algorithm does it take to find (with proper fitness functions, of course)…

    METHINKS KEITH IS AN IDIOT”

    You can just state it as a Turing Oracle! 🙂

  450. Me_Think at #439:

    “The Probability of “Keith” (x <= 5) = 0.1259”

    So, we were just a little bit unlucky! 🙂

  451. Gary S. Gaulin:

    I was not aware of your work. It seems interesting, but I certainly need time to study it. Thank you for sharing!

  452. Guys:

    What a catch-up! I will have to stop sleeping. 🙂

    Luckily, most comments were not addressed to me, so I could skip many of them.

    Thank you to all, friends and not, for the comments. Please, go on! 🙂

    I would humbly sponsor my brief post #444, and encourage comments on it.

  453. KS:

    Nope.

    The FSCO/I quantitative metric model — derived algebraically & conceptually from Dembski’s 2005 metric by log reduction, recognition of a reasonable threshold and provision of a means of recognising observed functional specificity of organisation — is in the form:

    Chi_500 = I*S – 500, functionally specific bits beyond the sol system threshold

    S is a dummy variable reflecting warrant for functional specificity, and the 500 bit threshold reflects the sol system blind search of config space limit. I is an info metric that is based on the various empirical info measurement techniques out there.
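
    For concreteness, a minimal sketch of that expression exactly as stated (the example inputs are arbitrary, not measured values):

        # Chi_500 = I*S - 500, per the definition above
        def chi_500(info_bits, s):
            # s: dummy variable, 1 if functional specificity is warranted, else 0
            return info_bits * s - 500

        print(chi_500(1000, 1))  # 500 bits beyond the threshold
        print(chi_500(1000, 0))  # -500: no warrant, no design inference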

    Those techniques do not necessarily rely on a priori estimates of probabilities on the hyp of any and all possible states of the world that may affect probability distributions. After all, it is a commonplace to inspect the physical circumstances and see if there is reason to infer bias, or whether there is no reason to prefer any one particular outcome. E.g. with a coin or die, the physical arrangements are such that there is high contingency and there are defined outcome states. The objects are symmetrical and do not bear obvious signs of manipulation, leading to the usual conclusion that a coin can store 1 bit, and a 6-sided die, 2.585 bits of info. Chains of same, of length m and n, would be able to hold m * 1 and n * 2.585 bits of info. D/RNA has four states and no basic constraint on chaining, so it will be able to store 2 bits per base. Such coding has actually been used to express ownership, by Venter IIRC.

    Likewise, statistical studies are a commonplace way to explore patterns of informational systems, e.g. the frequency distribution of letters in typical English text. This points to the further phenomena of real world coding systems, that there tend to be redundancies etc. In the case of proteins, it can be seen that some AAs are more flexible than others in a chain; which makes sense on the point that some may be part of an active site cleft, but others may just be part of the folding and within reason another hydrophobic or hydrophilic AA might do.

    In any case, from statistical studies we may infer empirically warranted frequencies, and thus coding- or functionality-constrained variations from the physically possible distribution of states. That statistically estimates probabilities per the functional state of, say, a protein. But then, that is no news to anyone who has had a modicum of statistical exposure in school math and has plotted a bell, reverse-J or similar distribution.

    The Shannon H metric applies and is in the form of an average info-per-symbol metric linked to a weighted-sum probability calculation: H = – SUM p_i log p_i. The same form is familiar from statistical thermodynamics, where it can be interpreted as the average missing info needed to specify the microstate on knowing the relevant macrostate variable values.
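
    As an illustration, a minimal sketch computing H from observed symbol frequencies (the sample string is arbitrary):

        # Shannon H = -SUM p_i log2 p_i, average info per symbol
        from collections import Counter
        from math import log2

        def shannon_h(text):
            counts = Counter(text)
            total = len(text)
            return -sum((c / total) * log2(c / total) for c in counts.values())

        print(shannon_h("such a scorched earth fight"))  # bits per symbol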

    So, from the info end, and from physical and statistical studies we may deduce probabilities etc.

    All of that is commonplace, well known, and uncontroversial, so I am amazed to see such a scorched earth fight against that.

    The point is, take the algebraic analysis and move the expressions of interest into info form, seeing that we are dealing with an info-beyond-a-threshold metric. Then, in the context of the empirical situation, come up with reasonable values for the threshold and reasonable ways to measure info that relates to functionally specific cases.

    The least familiar aspect is use of a dummy variable to define a state of the world that affects the case, but that is a commonplace in economic modelling. In this case, default is 0, and it moves to 1 on evidence that the configurations in view are functionally specific. Which is in relevant cases not too hard to spot, e.g. fairly small perturbations destroy function. Assembly on a wiring diagram that is linked to interactions to achieve function is an excellent case in point. Believe you me, you do not want to inject random variations in the wiring of an electronic circuit. Poof, you let the smoke out.

    The 6500 C3 reel I have been using in recent days is not notably tolerant of perturbation of components or misalignments or improper orientation.

    English text can tolerate some typos and grammatical variants, but real soon thinhz fshh srpgpd. [things fall apart]

    Computing codes, especially object code aligned to the architecture of a system, are notoriously intolerant of bugs. Indeed, IIRC NASA once lost a rocket because of a misplaced comma in some code.

    So, this is not strange, suspect stuff, it is well known.

    And — per fair comment — would be uncontroversial, apart from ideologisation of origins science and associated rather selective hyperskepticism.

    Please reconsider.

    KF

  454. F/N: Let’s remind ourselves of Plato’s longstanding warning:

    ______________

    >> Ath. . . .[The avant garde philosophers and poets, c. 360 BC] say that fire and water, and earth and air [i.e the classical “material” elements of the cosmos], all exist by nature and chance, and none of them by art . . . [such that] all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only [ –> that is, evolutionary materialism is ancient and would trace all things to blind chance and mechanical necessity] . . . .

    [Thus, they hold] that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [ –> Relativism, too, is not new; complete with its radical amorality rooted in a worldview that has no foundational IS that can ground OUGHT.] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. They are told by them that the highest right is might [ –> Evolutionary materialism — having no IS that can properly ground OUGHT — leads to the promotion of amorality on which the only basis for “OUGHT” is seen to be might (and manipulation: might in “spin”)], and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions [ –> Evolutionary materialism-motivated amorality “naturally” leads to continual contentions and power struggles influenced by that amorality], these philosophers inviting them to lead a true life according to nature, that is,to live in real dominion over others [ –> such amoral factions, if they gain power, “naturally” tend towards ruthless abuse], and not in legal subjection to them. >>
    ______________

    Oh, dat Bible-Thumpin, Creationist Theocrat! (Not.)

    Seems sadly apt 2350 years later, given persistent attempts to sidetrack via turnspeech and personalities in the teeth of evidence, including relevant history. KF

  455. b) A lottery which sold 10^150 tickets was won by the brother of the functionary who presided over the extraction to check its regularity.

    Post-specification: “the brother of the functionary who presided over the extraction to check its regularity”. Probability of the event (as judged by the post-specification): 1:10^150 (if there is only one brother 🙂 ).

    What if the functionary who presided over the extraction had a lot of friends and relatives who also bought tickets? Would the specification change?

    (also, what if the one brother bought 10^149 tickets? :-))

  456. fifthmonarchyman: It removes the string from its context.

    But the string was developed from context. Changing it to numbers would serve only to confuse Shakespeare, not the Shakespeare emulator. Here’s your original proposal:

    fifthmonarchyman: I believe there is a way to separate original CSI in the sonnet from the CSI that comes from background information.

    step one… Remove the sequence from its context and represent it as a series of numeric values.

    step two… see if an algorithm can reproduce the pattern in those values by any means whatsoever, well enough to fool an observer.

    Of course with the understanding that the algorithm can’t reference the original string.

    How and why would you think a Shakespeare emulator would recreate the exact same sequence? Even Shakespeare may not recreate the exact same sequence. A Shakespeare emulator might be enticed to create novel sonnets, though.

    Nor do we see how you have calculated the difference in information. If Shakespeare doesn’t create the exact same sequence, does that mean he has no background knowledge?

  457. Any Shakespeare emulator would trace back to the programmer who wrote it. And it wouldn’t be an algorithm…

  458. Gpuccio @ 444
    (As I go to paste this response, I see Bob O’H beat me to it, but 445 specifically directed this question to me… Good to see that we make the same points independently. What are the chances? 🙂 )

    a) A lottery which sold 10^150 tickets was won by one of the people who acquired a ticket.
    Post-specification: “one of the people who acquired the ticket”. Probability of the event (as judged by the post-specification): 1.
    b) A lottery which sold 10^150 tickets was won by the brother of the functionary who presided over the extraction to check its regularity.
    Post-specification: “the brother of the functionary who presided over the extraction to check its regularity”. Probability of the event (as judged by the post-specification): 1:10^150 (if there is only one brother [smiley] ).
    Conclusions: I leave them to you (or the judge!).
    This is an example of design detection (and the detected design is a fraud).

    Nice example. Two issues with it.

    The first issue is not about the specification, but I will note it, just to be thorough.

    You did not state how many tickets the winner bought. If he bought 10^149 tickets, then the conclusion would be different. This is analogous to the “equiprobable” assumption, which everyone agrees is incorrect, but IDists assert is “not material”, without ever actually providing numbers to support this assertion. Given the number of engineers here, this is disappointing.

    The second issue relates to the specification, which was, I believe, your point.

    Why did you feel the need to mention that there is only one brother? (N.B. my phrasing here is not a rhetorical flourish. gpuccio recognizes that the number of brothers matters.) Let’s suppose that the functionary has one brother and 18 sisters. One of these sisters has been convicted of fraud, another of bank robbery. He has six sons, one of whom is unemployed, one is a lawyer.

    Someone motivated to see fraud can, post-hoc, write their specification “the unemployed son”, “the grifter sister” in order to minimize the P(the observed result | a fair draw). Particularly problematic if the bank robber or the lawyer bought a LOT of tickets.

    Hence my admonition to be really, really, really careful with post-hoc specifications.

    Lotteries (and marketing “competitions”) seek to mitigate this problem by specifying ahead of time an unambiguous definition of those who are NOT allowed to participate. It doesn’t actually work to prevent fraud, but at least they are trying.

  459. Bob O’H:

    Good questions, but, indeed, not so relevant, as I think you know.

    But let’s discuss it for completeness.

    Let’s say that the functionary has 100 living close relatives, and, let’s be generous, 900 friends, lovers, whatever. Then the probability becomes 1000 : 10^150, that is 1 : 10^147. IOWs, if the target space is larger, but always hugely smaller than the search space, nothing really changes in the inference. Inferring a fraud with a probability of 1:10^150 is not specially different from inferring a fraud with a probability of 1:10^147.

    We could assume a uniform probability distribution for the people who have bought a ticket by specifying that each person could buy only one ticket (I can anticipate an easy objection: don’t worry, it’s a multiverse lottery, we have enough people!).

    But a uniform probability distribution is not really necessary. It’s enough to know that the ticket was expensive enough that nobody has bought more than 10 tickets. So, in the worst case, the probability of a random event of that kind becomes 1:10^146. Safe enough to detect the fraud.
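
    For concreteness, a minimal sketch of that worst-case bound (all figures are the assumptions stated above, not data):

        tickets_sold     = 10**150
        related_people   = 1000   # 100 close relatives + 900 friends, lovers, whatever
        max_tickets_each = 10     # nobody has bought more than 10 tickets

        p_bound = related_people * max_tickets_each / tickets_sold
        print(p_bound)  # 1e-146, i.e. 1 : 10^146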

    Of course we must consider all these things, but the simple point is: a very big search space in most cases makes those points irrelevant.

    However, my point was simply that a post-specification, if reasonable and made with good methodology, is perfectly apt to support a design detection.

  460. DNA_Jock:

    I think that in my answer to Bob O’H I have addressed your points too.

    I agree with you that we must be cautious with post-specifications, and use correct methodology and attention, as in all scientific reasoning. But my point is that a post-specification is perfectly valid, if those cautions have been correctly applied.

    Instead, I had the impression that in some posts you equated any post-specification with a fallacy which will inevitably lead to arbitrary overfitting. If that is your position, I don’t agree.

  461. Bob O’H:

    Good questions, but, indeed, not so relevant, as I think you know.

    Sorry, but I think it is very relevant. Where are you going to draw the line? Close relatives? All relatives? Close friends, friends, acquaintances, people he met at a party? People with the same name? People with similar names? People with the same birthday? People with interesting names? People who have won the lottery before?

    The point is that you need to specify every event that might make you think something interesting was going on, otherwise you end up looking like a Texan sharp-shooter, with a specification that is too small because you have only focussed on what happened, not what else might have happened.

  462. No gpuccio,
    If you got the impression that I had equated any post-specification to a fallacy, then you were misled by PaV’s strawmanning of my position.

    Bob O’H’s description is bang on:

    The point is that you need to specify every event that might make you think something interesting was going on, otherwise you end up looking like a Texan sharp-shooter, with a specification that is too small because you have only focussed on what happened, not what else might have happened.

  463. Bob O’H:

    No. I know that a brother won, so I can well draw the line at brothers.

    Any judge would be fine with that.

    Unless you have any reasons to suspect that a great part of living beings are related to the functionary, it is irrelevant to include cousins or others in the computation.

    So, I agree that you have to be careful and try to understand the search space and its structure: that’s exactly the reason why we discuss the protein functional space.

    But, with extremely large search spaces, and specifications which have an obvious functional relevance and generate a binary partition which makes the target space absolutely unlikely, with all the necessary cautions and methodology, a design inference can be safely made. IOWs, the fact that the specification is a post-specification does not make the reasoning a fallacy. Not at all. It requires, like any other procedure, a correct methodology.

    In brief, just tell me: in the example I gave, with the cautions I have specified, wouldn’t you infer a fraud? And if the judge condemns the functionary and his brother for fraud, is he committing the logical fallacy of the Texas Sharpshooter?

    Clear answers, please.

  464. To illustrate Bob O’H’s point, I give you an awesome, irony-meter-destroying example:

    Thanks to PaV’s prompting @376, I went back to No Free Lunch to refresh my memory about the Caputo case, and I was shocked to see that Dembski states that the rejection region, P(T|H), is 42 x 2^-41, and he is quite explicit that he is using Fisher’s test. “E’s occurrence and inclusion within T is, on Fisher’s approach, enough to warrant dismissing the chance hypothesis H.”

    This for the observed result that Caputo placed his party at the top of the ballot on 41 out of 42 occasions.

    Is this correct?
    (Hint: it isn’t)

    The irony here is that he made an error in his specification when he applied Fisher. As Bob put it, he focused on what did happen, and missed what else might have happened.

    Has anyone pointed out his high school math error here?
    Bueller? Bueller?

    [Prediction: people will make an incorrect assumption about what I think the error is, breaking my back-up meter]

  465. gpuccio @ 464,

    The judge may well be indulging in the “Prosecutor’s Fallacy”. It has happened.

    Check out the examples on wikipedia.

  466. ***********************************************************
    ***********************************************************
    ***********************************************************

    Very interesting summary written by gpuccio:

    Indeed, what we see in research about cell differentiation and epigenomics is a growing mass of detailed knowledge (and believe me, it is really huge and daily growing) which seems to explain almost nothing.

    What is really difficult to catch is how all that complexity is controlled. Please note, at this level there is almost no discussion about how the complexity arose: we have really no idea of how it is implemented, and therefore any discussion about its origin is almost impossible.

    Now, there must be information which controls the flux. It is a fact that cellular differentiation happens, that it happens with very good order and in different ways in different species, different tissues, and so on. That cannot happen without a source of information. And yet, the only information that we understand clearly is the protein sequence information. Even the regulation of protein transcription at the level of promoters and enhancers by the transcription factor network is of astounding complexity.

    Please, look at this paper:

    Uncovering Enhancer Functions Using the α-Globin Locus.

    http://www.ncbi.nlm.nih.gov/pm.....004668.pdf

    In particular Fig. 2.

    And this is only to regulate the synthesis of alpha globin in red cells, a very straightforward differentiation task.

    So, when I see that, say, 15 TFs are involved in regulating the synthesis of one protein, I want to know why, and what controls the 15 TFs, and what information guides that control. My general idea is that, unless we find some completely new model, information that guides a complex process, like differentiation, in a reliable, repetitive way must be written, in some way, somewhere.

    That’s what I want to know: where that information is written, how it is written, how does it work, and, last but not least, how did it originate?

    — gpuccio

    ***********************************************************
    ***********************************************************
    ***********************************************************

  467. gpuccio

    In brief, just tell: in the example I gave, with the cautions I have specified, wouldn’t you infer a fraud?

    From just the evidence presented the answer is no, you should not infer fraud. From a mathematical standpoint merely having the winner’s brother be involved in the lottery doesn’t improve the winner’s chance of winning. Unless you can show some actual duplicity – the functionary being seen manipulating the results or a recorded conversation of them discussing a plan to cheat – all you have is your personal incredulity. Your logic is atrocious.

    And if the judge condemns the functionary and his brother for fraud, is he committing the logical fallacy of the Texas Sharpshooter?

    With the lack of evidence yes, he would. He’d be making the same mistake “he seems guilty to me so he must be guilty” as you do with your “this looks designed to me so it must be designed”. You assume your conclusion is correct unless it is disproven. Again that logic is just atrocious.

  468. Adapa:

    Thank you for your answer. At least we know what you think.

  469. gpuccio:

    You’re welcome. I bet you can find at least a few UD regulars who are as confused over basic logic as you are.

  470. DNA_Jock:

    I am not familiar with the Caputo case, even if I remember having read of it in Dembski. I have no time now to deal with that, but if you explain your points I will be happy to read what you say.

  471. Ooh-er. I mis-read Dembski’s somewhat convoluted prose.
    I retract the allegation re the Caputo analysis in its entirety.
    This is what happens when you rush things. My apologies to the good doctor.

  472. DNA_Jock:

    Well, this is a blog, and we all happen to rush things sometimes. It would be beautiful if we were all more relaxed, and willing to enjoy a respectful confrontation based on our desire for truth.

  473. DNA_Jock:

    For some strange reason, you thought I was criticizing Dembski specifically. Hence the logic fail.

    Your arrogance wears thin.

    You’re completely wrong. Your logic is backwards.

    If I thought you were criticizing Dembski, then why did I open with the comment that “I would conclude you haven’t read Dembski’s NFL book”?

    Why in the world would I think you’re criticizing Dembski when I don’t even think you’ve read him. You weren’t even criticizing ID directly, but indirectly via dFSCI.

    Don’t think you’re the biggest brain in the building. And even if you were, that doesn’t mean you would reach the right conclusions.

  474. wd400:

    I know what a deuterostome is, since I studied Greek. And, yes, chordates/vertebrates are deuterostomes. But the point being made was meant to suggest that mammals should be compared to the most primitive of deuterostomes, thus bypassing the Cambrian Explosion. This is but a debating device, and I’m not going to let this pass. The problem with Darwinism is that it cannot in any way explain how such a great diversity of differing body-plans arose in so short a period of geologic time.

    It does a disservice to science to ignore this “pink elephant in the room.”

  475. Dear Adapa:

    I have nowhere seen any kind of satisfactory description of evolution. I got my degree years ago. I took Chordate Morphology, certainly the class where all the evolutionary “missing links” should show up. But, of course, they didn’t. I was somewhat surprised, but moved on. Only years later, after reading an 1859 edition of Origin of Species did I begin to suspect something was wrong. Why? Because the “intermediates” that Darwin supposed would show up, had not.

    That alone should have been the death-knell of Darwinism. But it’s like a vampire—it needs a stake through the heart or it won’t go away.

    I’ve stated that I read Mayr’s book, What Evolution Is, and found it completely unsatisfactory. There wasn’t any explanation. It always ends up, whether Mayr, or someone else, in ‘hand-waving.’

    I hardly comment here at UD for one simple reason: all the arguments that were needed to be made, were made. And Darwinists insist that they are right, despite all the evidence to the contrary. So, it’s just a matter of time. Every day, almost, some experiment finds something out that “surprises” the experimenters. Why? Because they think in Darwinian terms. I have a term for this: “Another day, another bad day for Darwinism.”

    It’s just a matter of time.

  476. DBA_Jock:

    I was waiting for you to reply, telling me what was the basis of Fisher’s Fundamental Theorem of Natural Selection.

    You haven’t answered.

    Here is the basis: actuarial tables, one for life span, and one for death rate. You’ll notice that there is NO mention of NS. Why? Because that’s all NS does: it changes life spans, hence total progeny, and it causes death.

    I asked this question when you so adamantly said I knew nothing about evolution and how NS works.

    But you see, NS works through killing individuals. You’ve heard of Haldane’s Dilemma, have you not?

    Here is another illustration:

    A bacterial population begins to grow, but it does not have the proper energy source (sugar). The bacteria continue to barely survive and multiply. Eventually one of the bacterial cells has the right kind of mutation, and is now able to metabolize the available energy source, and the bacterial population explodes.

    Now, is your position that NS “helped” the bacteria arrive at this “solution”? Isn’t it quite evident that all that “supposed” NS did was to kill off individuals? Or, phrased differently, the bacterial population limped along, with those not having the proper mutation (metabolism) dying off. This ‘dying off’ continues until a “sufficient” number of bacteria have been reproduced so that, given its mutation rate, the “proper” mutation is arrived at.

    Please point out any errors in my analysis.

    If you can’t, then you might want to reconsider your ideas and issue an apology.

  477. PaV @ 477

    N abd B are adjacent on my keyboard.
    You appear to have me confused with someone else.

    🙂

    I’ll let your interlocutors on that subject (keith s and wd400) respond to your question, but I do have a question of my own:
    Are you interpreting Fisher as referring to the actual rate of change of the mean fitness, or the partial rate of change?
    There’s a follow-up. Beware. 🙂

  478. PaV @ 474
    I will try to explain:

    Reviewing the tape:

    Gpuccio and DNAJ are discussing the challenges of post-hoc specification (#148 – #261). PaV and Collin chime in: Collin (263) with a direct question for DNAJ, PaV (262) with a post addressed to gp, which characterizes DJ’s position and asks him a question. PaV makes some allusions to Dembski’s argument, and draws an analogy to SETI researchers recognizing a “pattern”.

    269 – 274: Collin and DJ have a light-hearted exchange

    270 DJ explains that PaV is overstating DJ’s position, and repeats his point (from #161) about gpuccio’s Texas SharpShooter problem. This conversation is very specific (heh) to gpuccio’s efforts to specify “ATP synthase”.

    #279, 287, 292 PaV and DJ continue to discuss labels for proteins. Nice things are said.

    There follows a lull in the DJ-PaV conversation (which PaV had kindly forewarned at 287), during which gpuccio and DJ discuss Hayashi 2006 and its implications for the shape of the protein landscape; and kairosfocus #312 regurgitates some ancient guff about Weasel and gets slapped around by those posters who are numerate.
    369-375 PaV re-appears, and engages other posters, then at 376 quotes a statement DJ made at 270, their first interaction

    DNA_Jock:

    …you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.

    From this statement, I would conclude you haven’t read Dembski’s NFL book. What is needed are two things: (1) recognition of the pattern, and (2) knowledge of the mechanism by which the pattern is formed—IOW, you have to be able to calculate the probability of the “pattern” happening by ‘chance’ given the mechanism utilized in developing the “pattern.”
    [Emphasis added]

    @ #401 I point out what appears to me to be a logic fail: you cannot use the quoted statement to conclude that I have not read NFL. (At #395 I also make fun of PaV’s statement that mutation rates were uniform; a little worrying from a biology graduate. PaV says this argument was put forward in jest. Okay.)
    This is how I point out the fallacy:

    Here’s the fun thing about logic, PaV. You can arrive at a factually correct conclusion via faulty logic.
    There is no contradiction between what I said, and your paraphrase of Dembski’s point.

    Now, there are a couple of situations in which PaV’s inference might be appropriate. If there was something in NFL that made these statements of mine untenable – note, it would have to make them indisputably untenable — then PaV’s if-then logic would hold.
    Or if PaV believed that I was criticizing specifically Dembski’s work here, and that I had clearly, unambiguously missed Dembski’s point, thereby rendering my statement moot, that could rescue the logical inference.

    I offered PaV each of these escape routes, but he declined them, rather indignantly. His defense was “It was quite evident that you were unfamiliar with his writings or you would have phrased things differently.”
    Soooo, he doesn’t think that I was critiquing Dembski, he doesn’t offer any challenge to the accuracy of what I said; rather it’s the failure to use Dembski-approved phrasing that was his reason for his conclusion. But why on earth must I use Dembski-approved phraseology when discussing proteins with gpuccio? Maybe I think Dembski’s terminology is sub-optimal for my conversation with gpuccio (which I do…)

    So I am sad to say that the logic fail remains.

    I am genuinely disappointed that our conversation headed south. In my defense, #376 was pretty condescending and I had been dealing with kairosfocus recently, so my mocking reflex was already on a hair-trigger…

  479. Pav,

    I know what a deuterostome is since I studied Greek. And, yes, chordates/vertebrates are deuterostomes. But the point that was being made wanted to suggest that mammals should be compared to the most primitive of deuterostomes, thus bypassing the Cambrian Explosion

    You’ve studied quite a few things. But my reason to mention deuterostomes is that it’s a counter to your apparent belief that “the vertebrate body plan” is a thing unto itself. Instead, parts of the body plan are shared by echinoderms, and many parts are shared by lancelets and hagfish, which are not vertebrates. You can’t talk about vertebrates in isolation without understanding where they fit on the tree of life.

    As per the big reveal…

    Here is the basis: actuarial tables, one for life span, and one for death rate. You’ll notice that there is NO mention of NS. Why? Because that’s all NS does: it changes life spans, hence total progeny, and it causes death.

    This is what you’ve been waiting to reveal?

    Some equations used with actuarial tables form part of Fisher’s derivation, but the whole thing is hardly “based on actuarial tables” (and, in fact, the two tables are the “life table” and another for probability of reproduction).

    I don’t know why the fundamental theorem of NS would need to mention “NS”, but it certainly includes it, as (in Fisher’s version) there are two alleles that have different fitnesses. If you are generally interested in the fundamental theorem you should read about the Price Equation, which is a more general and useful version of the same.
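
    For the curious, a toy numeric sketch of the Price equation (two types, invented fitness values; no transmission bias, so only the selection covariance term is non-zero):

        # Price equation: delta_z_bar = Cov(w, z) / w_bar  (+ transmission term, here 0)
        z = [0.0, 1.0]   # trait values (e.g. allele indicator)
        w = [1.0, 1.2]   # fitnesses; the second type is fitter

        n = len(z)
        w_bar = sum(w) / n
        cov_wz = sum(wi * zi for wi, zi in zip(w, z)) / n - w_bar * sum(z) / n
        print(cov_wz / w_bar)  # ~0.045: selection-driven change in the mean trait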

    Here is another illustration…

    Here’s another, nother illustration.

    Imagine your bacterial population, struggling to get by without its sugar source. But instead of being utterly unable to metabolise this sugar, it has an enzyme that can do a bad job at it. Bacterial populations being large, for any given locus there will be a few individuals with a gene duplication. Those individuals with two poorly-functioning enzymes can make twice as much of the crappy enzyme and, relative to their peers, make a killing. As the two-copy lineage comes to dominate the population there are many, many more opportunities for mutations that modify the original enzyme to better metabolize the new sugar, so adaptation will occur much more quickly. Selection has indeed helped this bacterial population deal with this sugar.

    This kind of process, which has been observed many times, is just one example of something you don’t seem to have grasped: the cumulative nature of evolution by natural selection is important. Life can find regions of high fitness because each individual starts with the benefits of many millions of years of selection.
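
    A back-of-envelope sketch of one aspect of the duplication point above (all figures invented for illustration):

        # Doubling gene copies doubles the mutational target per generation
        N = 1e9   # bacterial population size
        u = 1e-9  # rate of improving mutations per gene copy per generation
        for copies in (1, 2):
            print(copies, "copies:", N * u * copies, "expected new variants/generation")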

  480.

    Zac said,

    Changing it to numbers would serve only to confuse Shakespeare, not the Shakespeare emulator.

    I say,

    Coding a sonnet in numbers is no different from translating it into a different language. Sure, Shakespeare might be confused, but he would be equally confused by his plays translated into any language he was unfamiliar with.
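
    A minimal sketch of one such numeric re-coding (the scheme here is invented purely for illustration; any reversible mapping would do):

        # Map letters to 1..26 and spaces to 0, dropping punctuation
        def encode(text):
            return [0 if ch == " " else ord(ch) - ord("a") + 1
                    for ch in text.lower() if ch == " " or ch.isalpha()]

        print(encode("Shall I compare thee"))  # [19, 8, 1, 12, 12, 0, 9, ...]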

    Zac said,

    How and why would you think a Shakespeare emulator would recreate the exact same sequence? Even Shakespeare may not recreate the exact same sequence. A Shakespeare emulator might be enticed to create novel sonnets, though.

    I say,

    Here is why I get frustrated with you. I don’t know if you are being deliberately obtuse or if I failed to explain myself correctly.

    No one is asking to recreate the same sequence. In fact an exact recreation would be strong evidence of cheating.

    All I’m looking for is a string that is sufficiently “Shakespearean” to fool an observer.

    Keep in mind that in the beginning we are looking at a specification that is very low on the Y-axis. Just arbitrary structure and grammar. At this level I’d bet you could fool an observer with a monologue from a World Wrestling Federation star if it was long enough.

    You say,

    Nor do we see how you have calculated the difference in information.

    I say,

    The idea is to subtract the CSI that is introduced from the environment algorithmically from the total CSI in the sonnet.

    What we are left with is Original CSI

    However, this is all very early days; most critics are not even ready to concede that it is impossible to create CSI with an algorithm.

    First things first

    Zac said

    If Shakespeare doesn’t create the exact same sequence, does that mean he has no background knowledge?

    I say,

    I honestly have no idea what that question means, but I assume it has something to do with your misunderstanding of the goals that the algorithm is being asked to achieve. If the clarification I gave was not enough, could you please rephrase?

    peace

  481. DNA_Jock:

    Now that the thread is calmer, I would like to take up again with you the discussion about post-specification, because I think there are a few things still unsaid that are worth the while.

    But I want the discussion to proceed in order, so as a first step I need to ask you for some explicit commitment, without which the whole discussion would be useless.

    Just to be clear, I will try to categorize the positions which have apparently emerged about the problem in three different groups:

    a) Adapa clearly thinks that all post-specifications are wrong, and that any inference based on a post-specification is a logical fallacy. I am grateful to him for the clarity of his position. At the same time, I am absolutely convinced that he is completely wrong, and that he has no idea of what an inference is. However, he should not be interested in the following discussion, because his position makes it completely irrelevant.

    b) You have said that post-specifications are suspicious, and have (correctly, IMO) invoked special caution when using them. IMO, your position is not as clear as Adapa’s and mine, and that’s why I am requesting a clarification.

    c) I have clearly declared that I believe that post-specifications are perfectly valid, and can be used for perfectly legitimate inferences, provided that they are used with the correct caution and methodology (which we can well discuss in detail).

    Now, while Adapa’s position is clear cut, yours is not. I must ask you if it is the same position as mine (provided we can agree on the cautions and methodology) or if it is just a strategic way to support Adapa’s position. If your answer is the second one, I will respect your choice, but any further discussion on this issue is useless. We just strongly disagree on the very basics.

    To make things even clearer, I will further refine and detail my example of the lottery, and ask you for a final pronouncement.

    So, a brief summary:

    a) The brother of the functionary who is in charge of controlling the regularity of the extraction wins a lottery which has sold 10^150 tickets.

    b) Let’s say that the brother has bought only one ticket. This can be easily ascertained by the judge during his inquiry, for example by asking for the receipts of the tickets he bought.

    c) Let’s say that the functionary has one brother (that too can be easily ascertained). Of course he also has cousins, relatives, lovers and friends in normal quantities.

    d) Now, let’s avoid heavy connotations, and make it easier and more Bayesian: it’s a civil action. We are not discussing death penalty, or prison. Let’s say that the owner of the lottery does not want to pay the prize to the winner. So, the judge has to rule for the winner or for the owner of the lottery.

    e) Again, for clarity, let’s say that the functionary could have cheated (he was the only one responsible for the controls), but that there is no direct evidence that he cheated. The only argument of the owner of the lottery is that it is too improbable that his brother had the winning ticket, and therefore he is convinced that the functionary cheated. That’s what the judge has to decide: is the owner’s request not to pay the prize justified, or should the prize be paid to the winner? The judge can simply rule to invalidate the lottery, and the prize will not be paid. So, nobody goes to prison. There is only the interest of A (the winner) against the interest of B (the owner), and an inference to be made about that. Bayesian, isn’t it? Mark, are you happy? 🙂

    Now, I believe that Adapa’s position is clear: the judge must necessarily rule in favor of A (the winner). Any inference derived from the post-specification that the winner is the functionary’s brother, and therefore any inference of a fraud, is completely unwarranted, indeed a mere logical fallacy.

    My position is clear too: I think that the judge has many reasonable motives to seriously consider the question, because the post-specification here is potentially valid for the inference. For the moment, I will not say that he should necessarily rule in favor of B (the owner), because that would mean to discuss the methodology and the cautions, which we will do later, if your answer allows it.

    So, what is your answer? Do you rule for Adapa’s position or for mine? IOWs, to make it less personal, you should decide between the following two alternatives (which, as far as I can see, are logically mutually exclusive):

    a) Any inference based on a post-specification is wrong. Always. This is a logical necessity, because using a post-specification for an empirical inference is a logical fallacy.

    b) Some inferences based on a post-specification are perfectly valid as empirical inferences. Special cautions and accurate methodology are required, because post-specifications are often tricky. But, definitely, an inference based on a post-specification is not necessarily wrong, and is not a logical fallacy.

    So, please answer (if you like). For obvious reasons, if your answer is a, any further discussion is useless. I will respect your position, and I will agree to disagree.

    If your answer is b, I think I have some interesting points about the cautions and the methodology.

  482. c) Let’s say that the functionary has one brother (that too can be easily ascertained). Of course he also has cousins, relatives, lovers and friends in normal quantities.

    What if one of these had won? Would you have inferred fraud too?

  483. Bob O’H:

    Absolutely. With those numbers, we can easily adjust all those “target spaces” without any real numeric consequence. Moreover, I would like to comment more in detail about this aspect of “target space subsets” in the methodological discussion, if DNA_Jock, or you, are interested (IOWs, if you agree that post-specification is not a logical fallacy).

    Frankly, I don’t want to engage in a methodological discussion with people who believe that we are discussing a fallacy, because no method can be applied to a fallacy.

  484. fifthmonarchyman: Coding a sonnet in numbers is no different than translating it into a different language.

    Sure, but a rhyme in English may not rhyme in French or computer code.

    fifthmonarchyman: All I’m looking for is a string that is sufficiently “Shakespearean” to fool an observer.

    Thank you for the clarification. So what is the point of translating it into another language or code? The algorithm presumably needs to work in the same language as Shakespeare in order to create Shakespearean poetry.

    fifthmonarchyman: Keep in mind that in the beginning we are looking at specification that is very low on the Y-axis. Just arbitrary structure and grammar.

    Not to mention scansion and rhymes.

    fifthmonarchyman: The idea is to subtract the CSI that is introduced from the environment algorithmically from the total CSI in the sonnet.

    Great!

    So do it. Take a Shakespearean sonnet; subtract Shakespeare’s knowledge of words, rhyme, grammar, scansion, an extensive library of other works by others, phrases heard on the street, the history of England, tales from Italy, his personal relationships; and let us know what is left over.

  485.

    Zac said

    Sure, but a rhyme in English may not rhyme in French or computer code.

    I say,

    Rhyme is at a higher level on the Y-axis.

    I’m sure we can code rhyme numerically, but that would only make the string longer.

    Zac said.

    So what is the point of translating it into another language or code? The algorithm presumably needs to work in the same language as Shakespeare in order to create Shakespearean poetry.

    I say,

    Once again, to isolate the string from its context. The language of Shakespeare is part of the background information we are trying to eliminate.

    The algorithm is free to use that background information if it feels it needs to, but it must not draw that information from the original string.

    Zac says,

    Not to mention scansion and rhymes.

    I say,

    Nope, that comes later.

    First the algorithm needs to produce enough structure and grammar to fool an observer. Only then will an observer need to move up the axis.

    You say.

    So do it. Take a Shakespearean sonnet; subtract Shakespeare’s knowledge of words, rhyme, grammar, scansion, an extensive library of other works by others, phrases heard on the street, the history of England, tales from Italy, his personal relationships; and let us know what is left over.

    I say

    Been doing it for a couple of weeks now. Stay tuned.

    The game is only rudimentarily encoded in an Excel sheet. I’m working on making it into a shareable app.

    There are 3 players:

    1) The designer (in this case Shakespeare)
    2) The programmer
    3) The observer

    The programmer wins if the observer is fooled. Otherwise the designer and observer win.
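
    A minimal sketch of one round of the game, with the observer modeled as a yes/no classifier (the names and structure here are illustrative, not the actual app):

    ```python
    import random
    from typing import Callable

    def play_round(original: str, generated: str,
                   observer: Callable[[str], bool]) -> str:
        """One round: the observer sees one of the two strings at random
        and guesses whether it came from the designer. A wrong guess means
        the observer was fooled, so the programmer wins; otherwise the
        designer and observer win."""
        shown, from_designer = random.choice([(original, True), (generated, False)])
        fooled = observer(shown) != from_designer
        return "programmer" if fooled else "designer and observer"
    ```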

    Peace

  486. fifthmonarchyman: I’m sure we can code rhyme numerically but that would only make the string longer

    Rhyming and meter don’t make it longer, they just constrain the writer’s choices.

    fifthmonarchyman: Once again to isolate the string from it’s context. The language of Shakespeare is part of the background information we are trying to eliminate.

    So we convert a sonnet to coded numbers. That means it doesn’t rhyme, it doesn’t have a meter, it doesn’t have meaning. Sorry, have no idea what that is all about.

    fifthmonarchyman: The Algorithm is free to use that background information if it feels it needs to but it must not draw that conclusion from the original string.

    You wouldn’t necessarily need the original sonnet, but you do need knowledge of rhyme and meter, grammar and phrasing.

    fifthmonarchyman: First the algorithm needs to produce enough structure and grammar to fool an observer. Only then will an observer need to move up on the axes.

    It’s easy for an algorithm to create phrases with grammar and poetic meter, even alliteration.
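
    As a trivial illustration (a sketch; the template and vocabulary are made up):

    ```python
    import random

    # Toy template generator: article + adjective + noun + verb is always
    # grammatical; rhyme and meter could be imposed by filtering the output.
    ADJ = ["gentle", "silent", "golden", "weary"]
    NOUN = ["night", "heart", "summer", "shadow"]
    VERB = ["sings", "fades", "burns", "sleeps"]

    def phrase() -> str:
        return f"the {random.choice(ADJ)} {random.choice(NOUN)} {random.choice(VERB)}"

    print(phrase())  # e.g. "the silent heart sings"
    ```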

    fifthmonarchyman: Been doing it for a couple of week’s now.

    Well, let us know then.

  487. 488

    Zac says

    Rhyming and meter don’t make it longer, they just constrain the writer’s choices.

    I say,

    A constraint of choice at position X could perhaps be expressed numerically by something like odds versus evens. I suppose you are correct: it would not necessarily make the string longer, just more complex.

    Zac says,

    So we convert a sonnet to coded numbers. That means it doesn’t rhyme, it doesn’t have a meter, it doesn’t have meaning. Sorry, have no idea what that is all about.

    I say,

    Again, it all depends on how high on the Y-axis we need to go. You are hung up on level 7, but you haven’t conquered level 1 yet.

    Once your algorithm has fooled the observer at level 1, we can look at coding some rhyme.

    You say

    You wouldn’t necessarily need to the original sonnet, but you do need knowledge of rhyme and meter, grammar and phrasing.

    I say,

    Like I said, you are perfectly welcome to use any rhyme and meter, grammar and phrasing you want, be it Elizabethan English or Klingon, as long as you don’t steal it from the original string.

    You say,

    It’s easy for an algorithm to create phrases with grammar and poetic meter, even alliteration.

    I say.

    First it needs to “Know” which grammar or meter is being specified for. And it can’t get that information from the original string

    You say,

    let us know then

    will do

  488. DNA_Jock:

    Soooo, he doesn’t think that I was critiquing Dembski, he doesn’t offer any challenge to the accuracy of what I said; rather it’s the failure to use Dembski-approved phrasing that was his reason for his conclusion. But why on earth must I use Dembski-approved phraseology when discussing proteins with gpuccio? Maybe I think Dembski’s terminology is sub-optimal for my conversation with gpuccio (which I do…)

    So I am sad to say that the logic fail remains.

    You’ve not understood what I wrote, nor why I wrote what I did.

    You are much better prepared to discuss Dembski’s methodology than most of those who challenge it.

    So, when I wrote that you had not read Dembski, it was more a statement of fact than a put-down. I think you reacted to this, however.

    It now makes sense that you’ve read Dembski’s paper on “Specification,” and NOT his NFL book, since the presentations, while containing a lot of the same elements, are made quite differently.

    I know that you’re going round and round with gpuccio on “post-specification.” At my point of entry, there were just too many posts to read to catch up, time being of the essence. It’s hard to go back in time to what exactly I was thinking, but the basis for my conclusion—which, it turns out, was correct—that you hadn’t read Dembski, more specifically NFL, was the fact that in the method he offers, he explicitly wants to avoid such a problem via the notion of “tractability.” This, in my mind, renders the point of attack you were making rather moot.

    What I was actually doing was trying to get you on track with the real crux of Dembski’s method. Difficulties arise when one goes about constructing a “rejection region.” So, e.g., in the Caputo case, we know what happened, and can use an appropriate probability calculation. However, in NFL, Dembski presumes a uniform probability distribution when it comes to biological activity. This has been contested. I don’t agree, because I think that the probability space is so vast that what we see in terms of animal life certainly approximates a uniform probability distribution.

    But this is where Dembski’s method can go wrong.

    When you continued to focus on “post-specification,” I knew that you weren’t as familiar with his writings as I had HOPED.

    But, again, that you waded through his “Specification” paper is a credit to you. Some, it would appear, don’t even do that much.

    My comments were meant as much for you as they were for gpuccio.

  489. REC and DNA_Jock:

    Just as a followup, I have spent this afternoon in an attempt to analyze an alignment of the 23949 ATP synthase sequences in the search submitted by REC. I obtained the alignment from the site referenced by REC. I am not really an expert in bioinformatics, as you know, so I had some problems with how to analyze the data, and in the end I imported it, in some way, into Excel.

    Now, again this is no strict scientific procedure, just what I could manage to do. However, I was very curious to see what emerged.

    I analyzed only the columns where more than 80% of the sequences were represented. IOWs, I omitted the positions where a great number of sequences were not aligned or presented gaps. I thought that was a reasonable choice.

    So, I could analyze 342 AA positions (out of about 500). The mean conservation at those positions was 73%. IOWs, on average, 73% of the sequences had the same amino acid at a given position. I must say that I computed the percentages excluding the gaps, which however, for the reason given above, were less than 20% at all the positions I analyzed.

    I have applied the Durston computation as described in this paper:

    https://intelligentdesignscience.files.wordpress.com/2012/07/a-functional-entropy-model-for-biological-sequences.pdf
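
    For anyone who wants to reproduce this kind of estimate, here is a minimal sketch of the per-site computation described in that paper (in Python, assuming the alignment has already been reduced to a list of column strings; the 80% column filtering described above is left out):

    ```python
    import math
    from collections import Counter

    def durston_fsc(columns, alphabet_size=20):
        """Sum over sites of H(null) - H(observed), in bits. H(null) is
        log2(20) for amino acids; H(observed) is the Shannon entropy of
        the residues seen at that site, gaps excluded."""
        h_null = math.log2(alphabet_size)
        fsc = 0.0
        for col in columns:
            residues = [aa for aa in col if aa not in "-."]
            counts = Counter(residues).values()
            n = sum(counts)
            fsc += h_null + sum((c / n) * math.log2(c / n) for c in counts)
        return fsc

    # Toy example: three fully conserved sites plus one site with four
    # equiprobable residues -> 3 * 4.32 + (4.32 - 2.0), about 15.3 bits
    print(round(durston_fsc(["AAAA", "GGGG", "LLLL", "AGLV"]), 1))
    ```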

    My result for the 342 positions was a functional complexity of 1136 bits.

    This is much higher than what I had grossly estimated with my “shortcut” (absolutely conserved AAs in three distant sequences). Indeed, by that gross method I had estimated about 1600 bits for the alpha + beta chain, but the alpha chain, where only 176 identities were observed, was responsible for a functional complexity of “only” 761 bits.

    So, the Durston method, applied to “only” 342 positions in the molecule, yields a functional complexity which is 375 bits higher than the one I had estimated with my simple shortcut.

    Which is exactly what I expected.

    Now, again I apologize for all the possible imprecisions and errors in this analysis: again, I am not a professional at this, but I am ready to listen to any suggestion or correction. I doubt, however, that things will change much.

    My simple point is: however measured, the conservation and functional specification of the alpha chain of ATP synthase is extremely high.

    So, I definitely maintain all my reasonings about the molecule.

    (We will discuss the problem of the “different” molecule in the discussion about methodology, if it will ever take place).

  490. PaV:

    I could not follow your debate with DNA_Jock well, because of the same time constraints which you mention!

    However, I am sure that it is interesting and stimulating. I have spent the afternoon making computations, but I appreciate your contributions anyway.

    By the way, for what it can mean, I think that DNA_Jock is one of the best interlocutors “from the other side”! 🙂

  491. wd400:

    I don’t know how the fundamental theorem of NS could mention “NS”, but it certainly includes it, as (in Fisher’s version) there are two alleles that have different fitnesses. If you are generally interested in the fundamental theorem you should read about the Price Equation, which is a more general and useful version of the same.

    Did you know that Fisher’s equation works quite well in the area of thermodynamics, as well?

    But it is precisely this, if you will, “portability” of the equation that makes you wonder how well it fits biological reality.

    My point in all of this was, of course, that NS simply kills organisms off, nothing more.

    Which now leads us to your example:

    As the two-copy lineage comes to dominate the population there are many, many more opportunities for mutations that modify the original enzyme to better metabolize the new sugar to arise, so adaptation will occur much more quickly. Selection has indeed helped this bacterial population deal with this sugar.

    That’s how evolutionary biologists choose to look at it. But what has “selection” really done? It simply destroys those bacteria which can’t metabolize. That leaves the others. Why? Because they can metabolize.

    NS doesn’t help the bacteria to “modify” the enzyme in any way. This has to be done strictly through stochastic means. NS helps “fix” an allele, if you will, so that whereas the time to fixation is 4Ne generations for “drift,” it is only 2Ne for “selection”. So NS speeds up “adaptation,” but it cannot help span tremendous differences in a.a. sequences. Mutability alone can do that. (I’m saying nothing here about Shapiro’s NGE.)

  492. fifthmonarchyman:

    What I said to PaV is absolutely valid for you too, and your interesting debate with Zachriel. I cannot follow everything in detail, but I like your approach and I hope that I can learn more from you “as time goes by”.

    Zachriel is one of my “favourites” too. I am really proud that some of the best discussants from the other side are here on this thread, and that such interesting parallel debates are taking place. I am very serious in saying this.

  493. fifthmonarchyman: First it needs to “Know” which grammar or meter is being specified for.

    So did Shakespeare. Take away his knowledge of grammar and meter, and he couldn’t write poetry either.

  494. 495

    Zac said,

    So did Shakespeare. Take away his knowledge of grammar and meter, and he couldn’t write poetry either.

    I say

    Agreed!!!!!

    Shakespeare’s knowledge came from a lifetime of observation and contemplation and therefore could never be produced algorithmically.

    that is the point

    Peace

  495. 496

    In order for a Shakespeare emulator to infallibly fool an observer without cheating, it would need to live a lifetime in Shakespeare’s shoes, thinking his thoughts and fighting his demons.

    It is impossible for an algorithm, even a very sophisticated one, to ever accomplish that.

    peace

  496. fifthmonarchyman: Agreed!!!!!

    So the algorithm would also have to have access to a dictionary, rules of grammar, meter, the history of England, and so on.

  497. fifthmonarchyman and Zachriel:

    Just a short intrusion.

    The real problem is: we have lived for decades with a theory, let’s call it strong AI theory (according to how Penrose uses the term). A theory which very simply assumes that consciousness, one of the main aspects of our reality, can be explained by complex configurations of matter. How many times have we heard that it is only an “emergent property”, whatever that means.

    Now, I am really convinced that strong ID theory is the only scientific theory which is worse than neo darwinism. But that is not what I want to discuss here.

    What I want to discuss here is that the assumption, or the refutation, that consciousness is only an aspect of computation has deep entailments for the other important issue: ID theory.

    Any approach to reality based on strong AI theory, indeed, must face the very simple consequence that all that happens in our consciousness, and that includes cognition, feeling and will, can only be the result of computation more or less mixed with random events which, unless we consider the quantum level, are anyway deterministic too. From that, two important assumptions derive:

    a) Human cognition must be nothing else than a computational process, and therefore must be completely algorithmic.

    b) The traditional intuition of libertarian free will is only a delusion.

    Now, let’s avoid b), or my third favourite interlocutor, Mark, will soon be here too! 🙂

    So, let’s focus on a).

    If Penrose and others are right, and human cognition cannot be explained algorithmically, that is bad news for strong AI theory.

    And what are the consequences for ID? Very simple. If consciousness is not only an aside of objective computations, and if the subjective reaction to conscious representations is an integral part of cognition (which is exactly what I believe), then a designer can do things that no algorithm, however complex, will ever be able to do: IOWs, generating new specifications, new functional definitions, and building original specified complexity linked to them.

  498. Pav,

    Did you know that Fisher’s equation works quite well in the area of thermodynamics, as well?

    No, do you have a reference for this? I know Fisher, being the sort of modest bloke he was, compared it to the 2nd law.


    My point in all of this was, of course, that NS simply kills organisms off, nothing more.

    Differentially with regard to their genotype.

    That’s how evolutionary biologists choose to look at it. But what has “selection” really done? It simply destroys those bacteria which can’t metabolize. That leaves the others. Why? Because they can metabolize.

    This sounds suspiciously like the “vacuity of fitness” thread Barry recently embarrassed himself with…


    NS doesn’t help the bacteria to “modify” the enzyme in any way. This has to be done strictly through stochastic means. NS helps “fix” an allele, if you will, so that whereas the time to fixation is 4Ne generations for “drift,” it is only 2Ne for “selection”. So NS speeds up “adaptation,” but it cannot help span tremendous differences in a.a. sequences. Mutability alone can do that. (I’m saying nothing here about Shapiro’s NGE.)

    Well, the speed of fixation under selection depends on the selection coefficient and effective population size, with s= 0.05 and Ne = 10,000 it’s about 400 generations, which is a bit faster than 2Ne.
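
    (For anyone checking that figure: it follows from the standard diffusion approximation for the mean fixation time of a new beneficial allele, T ≈ (2/s) ln(2Ne); a minimal sketch:)

    ```python
    import math

    def t_fix_selected(s, Ne):
        """Approximate mean fixation time (in generations) of a new
        beneficial allele under genic selection: T ~ (2/s) * ln(2*Ne)."""
        return (2 / s) * math.log(2 * Ne)

    print(round(t_fix_selected(0.05, 10_000)))  # ~396 generations
    print(4 * 10_000)  # neutral case for comparison: ~4Ne = 40000
    ```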

    And to make the claim that selection can’t aid in spanning gaps between protein lineages (no crocoducks please …) you have to show that fitness landscapes include no such paths.

  499. 500

    Again, the algorithm can have access to anything it wants in the entire universe; it just can’t borrow information from the original string.

    For all the programmer knows, the string of numbers could represent a protein string or the temperature fluctuation in a heat source.

    The algorithm’s job is to reproduce the string well enough to fool an observer, without borrowing information from the original string.

    Those are the only rules

    peace

    I feel the frustration rising again

    Break time

  500. wd400:

    I don’t have a citation. I was looking at all of that about three or four years ago. But a google search might turn something up. I just did one and here’s a good starting point:

    http://www.pnas.org/content/102/27/9541.full.pdf

    Differentially with regard their genotype.

    Sometimes this is true, as in the case of positive selection. But, still, it doesn’t help directly to overcome the stochastics involved. But you are right to a degree. (I’m not trying to be condescending; it’s just that, contrary to evolutionary biologists, I see limitations first, and cases where it applies second.)

    I guess it’s 2Nes, as you calculated it; but I do appreciate you dealing with the point I was making, and not necessarily the maths.

    You say:

    And to make the claim that selection can’t aid in spanning gaps between protein lineages (no crocoducks please …) you have to show that fitness landscapes include no such paths.

    I can’t help but see huge gaps. I wonder what motivates you to think that the fitness landscape isn’t more “rugged”? Maybe you can elaborate.

  501. #498 gpuccio

    “…strong ID theory is the only scientific theory…”

    Did you mean “…strong AI theory…” ?

    🙂

  502. “…strong AI theory is the only scientific theory which is worse than neo darwinism.” -gpuccio

    scientific?

    What empirical evidence is it based on? Sci-Fi literature?

    🙂

  503. 504

    gpuccio said

    If consciousness is not only an aside of objective computations, and if the subjective reaction to conscious representations is an integral part of cognition (which is exactly what I believe), then a designer can do things that no algorithm, however complex, will ever be able to do: IOWs, generating new specifications, new functional definitions, and building original specified complexity linked to them.

    I say,

    That, my friend, is exactly the heart of the argument.

    Once you understand that strong AI is a fool’s errand, Darwinian evolution is shown to be impossible by definition.

    It’s pretty much that simple.

    Peace

  504. gpuccio @ 498
    Gödel’s incompleteness theorem has been misunderstood by Penrose: ‘non-algorithmic’ is not equivalent to ‘non-computable’.
    Penrose assumes cytoskeletal microtubules are likely candidates for quantum coherence, which is bordering on absurd, and he comes dangerously close to the String theory nuts when he brings quantum gravity into the mix!

  505. So if CSI is circular (as per an ID proponent himself in another thread), does it mean dFSCI / FSCI/O too are circular?

  506. fifthmonarchyman @ 504

    That, my friend, is exactly the heart of the argument. Once you understand that strong AI is a fool’s errand, Darwinian evolution is shown to be impossible by definition.

    How is strong AI related to unguided evolution?

  507. “…strong AI theory is the only scientific theory which is worse than neo darwinism.” -gpuccio

    Technically speaking, an “AI” model or theory only has to mimic the real thing. Being Artificial is OK for AI, but ID theory needs cognitive science that directly applies to biology. How the real thing works is for areas of cognitive science such as neuroscience, where artificial is not allowed.

    An example of how AI still works in favor of models is my Grid Cell Network model, which is at least a useful part of AI. It may in time help explain how the real thing works, but it’s not yet possible to know how close it actually is to explaining how we navigate with such a grid. ID theory would need the AI grid model to stand the test of time and prove it sums up the real thing, but even where it does not, it’s still useful to AI.

    It would be possible for me to say that the model is a part of Strong-AI, but that’s still AI. The only way past that is for science to go its way, in which case it graduates, so to speak, to become part of a cognitive model for the very basics of neuroscience; but for now it’s too early to know either way.

    AI can be useful. I myself try to help with new ideas, but AI can also be as misleading as putting artificial flowers under a microscope. We have to separate out what also applies to real brains, and to the behavior of cells and their billions-of-years-old living genomes. David Heiserman found a useful model that is still doing well in the test of time, called Evolutionary Adaptive Machine Intelligence (EAMI), but it needed to get past “Evo” into “Devo”, as in “Evo-Devo”, by explaining what causes what in a multilevel process with intelligent cause in it. As a result, only what the theory of intelligent design is premised on works to further develop David Heiserman’s EAMI model. All this makes “Evo-Devo” a buzzword from when Darwinian theory needed to connect to what “develops”; but talking about natural selection is really not helpful for explaining the details we really need to know, the ones that connect it all together into a trinity, with chromosomal Adam and Eve having human needs that send them running for clothing/fashion after noticing their nakedness, and all else paralleling Genesis, which totally muddles the Darwinian realm that ruled all this out as scientifically possible.

    Without experience modeling neural networks and other things that only model part of a system that self-learns (is intelligent), it’s hard to know what in AI and machine intelligence works for ID theory. When it does work, it’s more than AI or strong-AI or even EAMI; it’s good enough for ID, and that scientifically empowers UD. Gpuccio could be correct, even though “neo darwinism” seems hard to beat for the worse of the two.

  508. 509

    Me_Think asks

    How is a strong AI related to unguided evolution?

    I say,

    Let’s start with this. The processes producing AI and unguided evolution are each algorithmic.

    Darwinism claims that an algorithm (RM/NS + whatever) can explain everything related to biology, including human consciousness.

    Strong AI claims that human consciousness can be produced algorithmically.

    The two ideas are functionally equivalent.

    Disprove one and the other fails necessarily.

    Suppose you were to acknowledge that there are things like consciousness that algorithms like (RM/NS+ whatever) are not equipped to produce.

    I would say welcome to the ID camp that is what we’ve been saying all along 😉

    peace

  509. fifthmonarchyman @ 509
    Every process has an algorithm. If you disprove an algorithm, all it means is there is a better algorithm which you don’t know. It doesn’t mean the process doesn’t exist, and what do you mean by ‘Strong’ algorithm?

  510. Strong AI claims that human consciousness can be produced algorithmically.

    I’m not so sure. Too early to know either way.

  511. 512

    Me_Think says

    Every process has an algorithm. If you disprove an algorithm, all it means is there is a better algorithm which you don’t know.

    I say

    What evidence do you have for this? What possible evidence could you ever have for such a claim?

    This statement is simply metaphysics, and very poor, long-discredited metaphysics at that.

    Lots of things are demonstrably not the result of algorithms: transcendental numbers and consciousness, for example.

    check this out

    http://arxiv.org/abs/1405.0126

    Me think says,

    what do you mean by ‘Strong’ algorithm?

    I say,

    I don’t think I ever used that term. Strong AI maybe but not strong algorithm.

    peace

  512. 513

    Gary Gaulin says

    I’m not so sure. Too early to know either way.

    I say,

    What evidence are you waiting for? What would possibly convince you of the futility of the strong AI endeavor?

    The paper I just linked to provides mathematical proof that strong AI is impossible. Would that sort of thing help you to make a decision?

    Just curious

    peace

  513. Dionisio:

    “Did you mean “…strong AI theory…” ?”

    Ehm… yes! Thank you for correcting me. 🙂

    I suppose someone will say that was a Freudian slip! 🙂

  514. Dionisio:

    “scientific?

    What empirical evidence is it based on? Sci-Fi literature?”

    OK, again I admit my error! 🙂

  515. fifthmonarchyman:

    “Once you understand that strong AI is a fool’s errand, Darwinian evolution is shown to be impossible by definition.

    It’s pretty much that simple.”

    And I wholeheartedly agree! 🙂

  516. Me_Think:

    Penrose is playing a difficult game: defending a correct argument while still trying to find an explanation which does not depend on the simple recognition that consciousness cannot be explained by some configuration of matter.

    IOWs, the consequences of his Gödel argument are deeper than he himself thinks, or is ready to admit.

    That reminds me of some more “open” scientists (see Shapiro) who are ready to criticize aspects of neo darwinism, but are not “ready” to accept ID as a possible alternative, and resort to abstruse theories which are even worse than neo darwinism.

  517. Me_Think:

    “So if CSI is circular (as per an ID proponent himself in another thread), does it mean dFSCI / FSCI/O too are circular?”

    None is circular. And I don’t agree that an “ID proponent himself in another thread” has said anything like that (although he has probably expressed things badly).

    I have not even read that thread (no time), but frankly I am not interested in threads about the opinions of a person, or about how he says things. I am interested in what is true, and I have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular. And again, CSI and dFSCI are only two different subsets of the same thing.

  518. 514 & 515 gpuccio

    That was funny! Thank you. 🙂

  519. Gary S. Gaulin:

    I absolutely agree that AI theories and models are important, both for ID and in general.

    I would say that AI theories have a lot to say about the “easy problem” of consciousness (according to Chalmers).

    But they can say nothing, and have said nothing, about the “hard problem”: what consciousness is, why it exists, why subjective experiences take place in us.

    That’s why I have specified:

    “let’s call it strong AI theory (according to how Penrose uses the term). A theory which very simply assumes that consciousness, one of the main aspects of our reality, can be explained by complex configurations of matter.”

    My reasoning applies only to this definition of “strong AI theory”, and not to AI theory in general.

  520. fifthmonarchyman at #509:

    Exactly!

  521. Me_Think:

    “Every process has an algorithm. If you disprove an algorithm , all it means is there is a better algorithm which you don’t know. It doesn’t mean the process doesn’t exist, and what do you mean by ‘Strong’ algorithm ?”

    Let’s say that some processes can only be described by using a Turing Oracle. The idea is that consciousness can act as a Turing Oracle in cognitive algorithms, but that the Oracle itself is not an event which can be explained algorithmically and is not computable.
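
    A toy sketch of the structure I mean (in Python; nothing inside the procedure says how the oracle decides, which is the point):

    ```python
    from typing import Callable, Iterable, List

    def search_with_oracle(candidates: Iterable[str],
                           oracle: Callable[[str], bool]) -> List[str]:
        """A fully algorithmic search procedure that defers the judgment
        'is this candidate functional/meaningful?' to an oracle it does
        not, and cannot, compute itself."""
        return [c for c in candidates if oracle(c)]
    ```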

  522. Gary S. Gaulin:

    “‘Strong AI claims that human consciousness can be produced algorithmically.’

    I’m not so sure. Too early to know either way.”

    But, for the purposes of this discussion, I have defined “strong AI theory” as the theory which claims that consciousness can be produced algorithmically. I agree that the term can be used in a different sense, and that’s why I have specified the meaning I meant.

  523. Gary S. Gaulin:

    I don’t know if I have misinterpreted what you were saying:

    If you were saying that you are not so sure that strong AI theory claims that, then my answer in post 523 is appropriate.

    If you were only claiming that you are not so sure that consciousness cannot be produced algorithmically, then I apologize: you are certainly entitled to your opinion on that, and a cautious attitude is always fine in science.

    As for me, my opinion about this specific problem is not cautious at all: it is very strong. And I absolutely agree with fifthmonarchyman on the points he has made.

  524. gpuccio #518

    I am interested in what is true, and I have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular. And again, CSI and dFSCI are only two different subsets of the same thing.

    You mean you showed it in OP here? Let’s see.

    From OP:

    In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design.

    One of the objects was a Shakespeare sonnet.

    […]

    In the discussion, I admitted however that I had not really computed the target space in this case…

    “Not really computed”? Not a good start. But let’s see further.

    So, here is the result of my reasonings. Again, I am neither a linguist nor a mathematician, and I will happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize.

    Let’s start from my functional definition: any text of 600 characters which has good meaning in English.

    The search space for a random search where every character has the same probability, assuming an alphabet of 30 characters (letters, space, elementary punctuation) gives easily a search space of 30^600, that is 2^2944. IOWs 2944 bits.

    OK.

    Well, if you are not a linguist, then I understand why you use the unscientific term “good meaning” as if it meant something. But if you are also not a mathematician, and neither am I, then what are we talking about? As I recall, we are talking about your claim that you “have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular.” The problem here is that you start already with at least one bad concept: “good meaning”. Anyway, I hope your definition of target space is correct, so let’s move on.

    Now, I make the following assumptions (more or less derived from a quick Internet search:

    a) There are about 200,000 words in English

    b) The average length of an English word is 5 characters.

    I also make the easy assumption that a text which has good meaning in English is made of English words.

    For a 600 character text, we can therefore assume an average number of words of 120 (600/5).

    This is a whole bunch of assumptions. Plus it looks like the bad concept of “good meaning” is playing quite a role here in a crucial computation. Most reasonable readers would have stopped by now, but I force myself to continue.

    IOWs, 2113 bits.

    What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200000 English words. Or at least, a good approximation of that number.

    It’s a big number.

    Astonishingly, I find something to agree with: “What is this number?… It’s a big number.” 🙂

    Feeling generous, I think I can also agree that “we can derive from a pool of 200000 English words”. But it will soon be clear that I don’t agree with you on what we derive from the pool of English words and for what purpose.

    And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.

    Again, on a question deemed important by yourself you are boldly saying you have no idea, but you think anyone will agree with you and that in the end you have proved something. Amazing how things are always on your side even when you have no idea about them.

    It’s easy: the ratio between target space and search space:

    2^2113 / 2^2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here)

    An easy thing that required a correction. Noted.

    In conclusion:

    Let’s go back to my initial statement:

    Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here…As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.

    So, in conclusion after all the heavy computation you still just assume to infer design. You made a bunch of assumptions all along, so what’s one more, right? But why the computation then? I know, you were supposed to show something. And you did – you showed off. It’s been quite a show. Thanks.

    Unfortunately, none of this proves anything. You didn’t compute your brand of FIASCO. You assumed it. You assumed it right off the bat with “good meaning in English”. In conclusion, after all the computation, you declare that you are certain that this is not a false positive, while all you did was make assumptions every step of the way.

    You say, “I am aware of no simple algorithm which can generate english sonnets from single characters.” Here you talk about single characters, while the basis of your computation was “a pool of 200000 English words”. Now, I am not a mathematician, but I am a linguist and I notice a glaring difference like this. Characters are not words, and I am sure they make all the difference in computation. Well, not in your case, because you were not really computing anyway.

    As a final note, let’s recall you said, “I am interested in what is true, and I have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular.” Actually, you clearly showed that you are unable to define anything correctly:

    – You brought in undefined “good meaning”, therefore missing an opportunity to define something crucial in your attempt at computation.
    – In the end, you mixed up “words” and “characters”.
    – Every step of your demonstration – including the conclusion – involved assumptions.
    – In OP you were computing something called dFSCI for a Shakespeare sonnet, not showing whether CSI was circular or not.
    – Therefore you didn’t show, clearly or otherwise, the non-circularity of CSI.
    – Therefore it’s obvious that you don’t know what “clearly shown” means.
    – You most likely are not interested in what’s true.

    Have a lovely rest of the weekend.

  525. REC and DNA_Jock:

    I have refined and checked the analysis on the Clustal alignment of the ATP synthase sequences. My numbers now are as follows:

    Positions analyzed: 447 (out of about 500).

    Mean conservation at the analyzed positions: 72%

    Median conservation: 77%. That means that 50% of the positions have at least 77% conservation.

    FSI according to the Durston method: 1480 bits.

    Original approximation made by me by the three sequences shortcut: 761 bits.

    Difference: 719 bits

    Just for the record.
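
    For completeness, here is a minimal sketch of how the conservation figures above can be computed per column (same toy column representation as the Durston sketch in my comment above; gaps excluded):

    ```python
    from collections import Counter
    from statistics import mean, median

    def conservation(col):
        """Fraction of non-gap sequences sharing the most common residue."""
        residues = [aa for aa in col if aa not in "-."]
        return Counter(residues).most_common(1)[0][1] / len(residues)

    cols = ["AAAA", "GGGG", "LLLL", "AGLV"]  # toy columns
    vals = [conservation(c) for c in cols]
    print(round(mean(vals), 2), median(vals))  # 0.81 1.0
    ```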

  526. gpuccio: If Penrose and others are right, and human cognition cannot be explained algorithmically

    That’s something that’s not been shown.

    gpuccio: that is bad news for strong AI theory.

    There’s nothing to say that AI has to be algorithmic.

    gpuccio: If consciousness is not only an aside of objective computations, and if the subjective reaction to conscious representations is an integral part of cognition (which is exactly what I believe), then a designer can do things that no algorithm, however complex, will ever be able to do: IOWs, generating new specifications, new functional definitions, and building original specified complexity linked to them.

    Not sure that follows. An algorithm can certainly create internal representations, including of itself.

    fifthmonarchyman: Again the algorithm can have access to anything it wants in the entire universe it just can’t borrow information from the original string.

    Not sure why you keep referring to the original string, if we aren’t replicating the string. Shakespeare had knowledge of many other artists, and certainly integrated this knowledge into his own work. A Shakespeare emulator should certainly be able to do this.

    fifthmonarchyman: For all the programer knows the string of numbers could represent a protein string or the temperature fluctuation in a heat source.

    If we were to make a Shakespeare emulator, we would certainly work in English, just like Shakespeare, and would try different rhymes in English, just like Shakespeare.

    fifthmonarchyman: The algorithm’s job is to reproduce the string sufficiently enough to fool an observer with out borrowing information from the original string.

    fifthmonarchyman (from above): No one is asking to recreate the same sequence. In fact an exact recreation would be strong evidence of cheating. All I’m looking for is a string that is sufficiently “Shakespearean” to fool an observer.

    This is why we are confused. The first statement says “reproduce the string”; the second statement says “no one is asking to recreate the same sequence.” We’re also still confused on why you want to change it to numbers.

    We may have to wait for your simulation to be completed, but if you can’t express what you want in detail, it’s quite possible your simulation will be flawed. Furthermore, if your own efforts fail at emulating Shakespeare, it doesn’t mean that all such efforts are bound to fail.

    fifthmonarchyman: I feel the frustration rising again

    Relax. It’s just a discussion about ideas.

    fifthmonarchyman: Once you understand that strong AI is a fools errand Darwinian evolution is shown to be impossible by definition.

    Wouldn’t that understanding come from evidence? As of this point, there is no proof for your position, while artificial intelligence seems to be progressing long past where people once only dreamed. Consider chess, once considered the pinnaculum æstimationis of human intelligence.

    fifthmonarchyman: Darwinism claims that an algorithm(RM/NS+whatever)can explain everything related to biology including human consciousness.

    Natural selection encompasses the environment, which may represent a non-algorithmic component.

    fifthmonarchyman: http://arxiv.org/abs/1405.0126

    The abstract indicates they are referring to unitary consciousness, which they don’t claim to know exists.

  527. E.Seigner:

    What a mess!

    I don’t know if it is even worthwhile to answer.

    In brief:

    a) My refutation of the circularity claim is not in this thread.

    b) I can’t see what is “bad” in the concept of “good meaning” in English.

    c) You say:

    “This is a whole bunch of assumptions. Plus it looks like the bad concept of “good meaning” is playing quite a role here in a crucial computation. Most reasonable readers would have stopped by now, but I force myself to continue.”

    I assumed 200000 English words. When someone suggested that there are 500000, I did the computation again with that number. What should I do: count them one by one?

    I assumed that “The average length of an English word is 5 characters.” That was the lowest value I found mentioned. Have you a better value?

    Finally, I assumed “that a text which has good meaning in English is made of English words.” Have you a different opinion? Do you usually build your English discourses from random character sequences, or using Greek words? Should I analyze any of your posts here and see what you are using in them?

    “A whole bunch of assumptions”! Bah!

    d) You say:

    “Again, on a question deemed important by yourself you are boldly saying you have no idea, but you think anyone will agree with you and that in the end you have proved something. Amazing how things are always on your side even when you have no idea about them.”

    This is utter nonsense. What I said was:

    “And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.”.

    So, please have the courage to state explicitly the thing that you don’t agree with:

    You don’t agree that the set of all sequences which have meaning in English is a small subset of the set of all the sequences which are made of English words?

    Is that your position?

    e) You say:

    “So, in conclusion after all the heavy computation you still just assume to infer design. ”

    And in support of that, you quote me in this way:

    Let’s go back to my initial statement:

    Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here…As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.

    But that is an explicit and completely unfair misrepresentation.

    My statement was:

    Let’s go back to my initial statement:

    Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600 characters sequences which make good sense in english is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.

    Was I wrong? You decide.

    It should be clear to anyone who understands sequences with good meaning in English (apparently, not to you) that I am speaking here of “my initial statement”. Can you read?

    Then why do you say:

    “So, in conclusion after all the heavy computation you still just assume to infer design. “?

    (Emphasis mine)

    f) Finally, to close in glory, you say:

    “You say, “I am aware of no simple algorithm which can generate english sonnets from single characters.” Here you talk about single characters, while the basis of your computation was “a pool of 200000 English words”. ”

    OK, you have understood nothing at all. Please read my OP again.

    The search space is defined by characters: it gives the probability of getting a sequence of 600 characters which is made of English words.

    The target space is defined as the set of sequences of 600 characters which are made of English words.

    Read it again; maybe you will understand. After all, my post has good meaning in English.
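
    P.S. For readers following the arithmetic, the numbers can be rechecked in a few lines (a minimal sketch using the assumptions stated in the OP: a 30-character alphabet, 200,000 words, 5 characters per word):

    ```python
    import math

    chars, alphabet = 600, 30
    words, word_len = 200_000, 5

    search_bits = chars * math.log2(alphabet)  # ~2944 bits
    n_words = chars // word_len                # 120 words
    target_bits = n_words * math.log2(words)   # ~2113 bits (word sequences)

    print(round(search_bits), round(target_bits),
          round(search_bits - target_bits))    # 2944 2113 831
    ```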

  528. Zachriel:

    “Not sure that follows. An algorithm can certainly create internal representations, including of itself.”

    Zachriel: don’t dance around the “representation” word!

    Do you think that algorithms can create “internal representations” which are subjective experiences? Do you think that an algorithm can subjectively understand if a statement is right or wrong? Do you think that an algorithm can recognize that some process can be used to obtain a desirable outcome? Do you think that an algorithm can do all that, beyond the boundaries of the meanings and functions which have already been coded into its configuration, and the computational derivations of that coded information?

    You are elegant, but don’t be too elegant. 🙂

  529. gpuccio #528

    What a mess!

    I don’t know if it is even worthwhile to answer.

    Ditto.

    gpuccio

    b) I can’t see what is “bad” in the concept of “good meaning” in English.

    You were supposed to be calculating something. When you issue undefined terms which are not even terms, then what is it you are calculating? Nothing worthwhile, I can safely assume. Certainly not anything scientific.

    gpuccio

    I assumed 200000 English word. When someone suggested that there are 500000, I did the computation again with that number. What should I do: count them one by one?

    I assumed that “The average length of an English word is 5 characters.” That was the lowest value I found mentioned. Have you a better value?

    Perhaps you could at least leave out things that mean nothing, such as “good meaning”. If you are counting words rather than meanings, it should be easy to leave meanings out.

    Better values for your variables are not my problem. They are completely your problem.

    gpuccio

    “And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.”.

    So, please have the courage to state explicitly the thing that you don’t agree with:

    You don’t agree that the set of all sequences which have meaning in English is a small subset of the set of all the sequences which are made of English words?

    Is that your position?

    My position: When you have no idea how many of those sequences have a “good meaning” in English, then can you say what it is you are calculating? Hardly. Therefore your “anyone will agree” does not follow.

    gpuccio

    Then why do you say:

    “So, in conclusion after all the heavy computation you still just assume to infer design. “?

    Because you explicitly state by the end: “Now, I cannot really compute the target space for language, but I am assuming here…” So, after all these calculations, you had to assume once more to arrive at your conclusion.

    Your conclusion: ”I am certain that this is not a false positive.” In the title you say you’d attempt to calculate, but what you really do is assume and acknowledge that you cannot calculate. Yet by the end you speak as if the calculation had been meaningful to some degree. Sorry, but it wasn’t. It wasn’t even ridiculous. It was painfully silly.

  530. 531

    Zac,

    I think we are finally getting to the point where some real productive discussion can happen.

    I will do my very best to keep my frustration in check; please do your very best to follow the argument.

    I know you are an intelligent person; please don’t feign obtuseness.

    Zac said

    Not sure why you keep referring to the original string, if we aren’t replicating the string.

    I say,

    There are only two ways to produce a Shakespearean sonnet.

    1) Be Shakespeare
    2) Copy Shakespeare

    The reason I am careful to rule out borrowing information from the original string is to eliminate the second option.

    Zac said

    The first statement says “reproduce the string”; the second statement says “no one is asking to recreate the same sequence.”

    I say

    The algorithm is simply asked to produce a sonnet that an observer will be unable to distinguish from a work of Shakespeare.

    You say

    We’re also still confused on why you want to change it to numbers.

    I say,

    Because representing the sonnet numerically removes it from its context; this prevents you from cheating by borrowing information from the string on the sly.

    You say,

    We may have to wait for your simulation to be completed, but if you can’t express what you want in detail, it’s quite possible your simulation will be flawed

    I say.

    Your inability to comprehend the simple rules is perhaps evidence of a problem on your part rather than with the stipulations themselves.

    Zac says

    Furthermore, if your own efforts fail at emulating Shakespeare, it doesn’t mean that all such efforts are bound to fail.

    I say,

    I could not agree more. The “game” does not prove that emulations are impossible; it simply evaluates their strength.

    The power of the “game” is the cumulative realization that each step you make toward Shakespeare requires exponential increases in the complexity of the algorithm.

    You say,

    Natural selection encompasses the environment, which may represent a non-algorithmic component.

    I say,

    I completely agree, but the process of incorporating information from the environment is necessarily algorithmic. There is no getting around this.

    You say,

    The abstract indicates they are referring to unitary consciousness, which they don’t claim to know exists.

    I say

    Yes, if consciousness does not actually exist, then not being able to produce it is no problem for AI.

    But we all know it exists.

    Peace

  531. E.Seigner:

    “My position: When you have no idea how many of those sequences have a “good meaning” in English, then can you say what it is you are calculating? Hardly. Therefore your “anyone will agree” does not follow.”

    I have calculated a lower threshold of complexity. Can you understand what that means? Evidently not. It is not important how many of those sequences have a good meaning. We are computing the set of those which are made of English words. Why? Because it is bigger than the subset of those which have good meaning, and therefore is a lower threshold to the complexity of the meaningful sequences. IOWs there are more sequences made with English words than sequences which have meaning in English. As a small child would easily understand. Not you.
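
    In symbols, with T = the set of meaningful sequences, W = the set of sequences made of English words, and S = the full search space (the numbers are those computed in the OP):

    T ⊆ W, so |T| ≤ |W| = 2^2113, and therefore -log2(|T|/|S|) ≥ 2944 - 2113 = 831 bits.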

    “Because you explicitly state by the end: “Now, I cannot really compute the target space for language, but I am assuming here…” So, after all these calculations, you had to assume once more to arrive at your conclusion.”

    I say that starting with “Let’s go back to my initial statement:” and ending with “Was I wrong? You decide.”, after a post in which I have given the calculations which prove what I had only assumed in the initial statement.

    So, I am not “assuming once more” “after all these calculations”. I am only restating the initial assumption, so that readers may judge if my calculations have confirmed it.

    Either you are unable to read, or you are simply lying.

  532. gpuccio: Do you think that algorithms can create “internal representations” which are subjective experiences?

    Not sure if you can show there is an operational difference.

  533. fifthmonarchyman:

    “Yes, if consciousness does not actually exist, then not being able to produce it is no problem for AI.

    But we all know it exists.”

    🙂

  534. fifthmonarchyman:

    I envy you. You still have a reasonable interlocutor.

    Me, no more! 🙂

  535. Zachriel:

    “Not sure if you can show there is an operational difference.”

    Well, there certainly is a big difference. A difference as big as the whole of human cognition and the sense of our existence itself. No mathematics, no philosophy, no atheism, no religion would exist without subjective experiences.

    Maybe that is not “operational”, after all.

    However, Penrose’s and Bartlett’s arguments are about that point. I will just mention that humans generate tons of original dFSCI, and algorithms don’t.

  536. 537

    gpuccio #528

    So, please have the courage to state explicitly the thing that you don’t agree with

    A clear, honest and simple request.

    Could you please let us know if you get an answer to this (and put it in your own words, possibly)? I haven’t been able to understand anything that followed.

  537. gpuccio #532

    I have calculated a lower threshold of complexity. Can you understand what that means? Evidently not. It is not important how many of those sequences have a good meaning. We are computing the set of those which are made of English words. Why? Because it is bigger than the subset of those which have good meaning, and therefore is a lower threshold to the complexity of the meaningful sequences. IOWs there are more sequences made with English words than sequences which have meaning in English. As a small child would easily understand. Not you.

    Sure I understand all this. The number of words in the dictionary is always far bigger than what is used in a text with “good meaning”, if that means syntax, i.e. English sentences. I understand that this is how language works by design, and no calculation about it changes anything. Nor does any calculation prove anything about it. It’s there in the edifice of language. Therefore (and for all the reasons previously stated) your calculation was pointless by design.

    gpuccio

    I say that starting with “Let’s go back to my initial statement:” and ending with “Was I wrong? You decide.”, after a post in which I have given the calculations which prove what I had only assumed in the initial statement.

    So, I am not “assuming once more” “after after all these calculations”. I am only restating the initial assumption, so that readers may judge if my calculations have confirmed it.

    Nice of you to let me judge. I did so.

    gpuccio

    Either you are unable to read, or you are simply lying.

    By your own admission, you were unable to compute. You admit it every time you say “I assume” and “I cannot really compute”. Therefore whatever you did, you did it just for show, with no meaningful outcome. Case closed.

  538. Silver Asiatic:

    “Could you please let us know if you get an answer to this (and put it in your own words, possibly)? I haven’t been able to understand anything that followed.”

    No. I did not get anything even remotely reasonable.

    I think I will have no more discussions with this “interlocutor” (a decision I had already taken in the past, so I am a true recidivist).

  539. Fifth,

    I’ve said this before, but if which to make a cogent argument you are going to have to learn more abotu (non-)computability. For instance, in 512 you calim transcendental numbers are not computable, but in fact many of them are. You can go look up algorithms to compute pi or e (of course, those alorithms will never end, but that’s not a requirement for computability).
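
    For instance, here is a terminating procedure that approximates e (transcendental, yet computable) to any desired accuracy with the factorial series; a minimal sketch:

    ```python
    from fractions import Fraction

    def approx_e(n_terms=20):
        """Sum the series e = 1/0! + 1/1! + 1/2! + ... ; each requested
        precision needs only a finite, effective procedure, which is all
        that computability requires."""
        total, term = Fraction(0), Fraction(1)
        for k in range(1, n_terms + 1):
            total += term
            term /= k
        return float(total)

    print(approx_e())  # 2.718281828...
    ```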

  540. wd400:

    What happened to your English? Are you using an algorithm? 🙂

    Just kidding.

  541. fifthmonarchyman @ 531

    Zachriel: The abstract indicates they are referring to unitary consciousness, which they don’t claim to know exists.

    fifthmonarchyman: Yes, if consciousness does not actually exist, then not being able to produce it is no problem for AI. But we all know it exists.

    Unitary consciousness is a concept of integrated information. If unitary consciousness doesn’t exist and only non-integrated consciousness exists, then you can decompose the information going into the brain and hence make it computable.

  542. Me_Think:

    Consciousness is unitary, because the I which perceives is always the same subject.

    The things it perceives vary a lot, but it is the same subject who perceives them.

    Reality check: would you be indifferent if you could know in advance that in 3 years you will suffer? No. Because you know well that it will be you who suffers. It’s not important that in the meantime your personality could be different, that you may forget many things that are important to you today, and so on. You know that it is you who will be there. The same subject.

    On the other hand, we are all too ready to be indifferent to the suffering of perfect strangers (too much, I would say).

    If consciousness were only a bunch of information which constantly changes, that unity of the I, which is the reason itself of all that we do, would make no sense.

  543. DNA_Jock (and Bob O’H):

    Have you read my #482 and #484?

  544. 545

    WD400

    I know we have had this discussion before.

    When I say that a thing is not computable, I mean that there is no finite Turing machine that can produce it in a finite length of time.

    I fully realize there are other, more technical definitions, but I am using a rough-and-ready one because this is an informal blog setting and I want to keep the conversation as simple and accessible as possible.

    If I was to produce a formal paper I would be sure to define my terms more clearly at the outset.

    Peace

  545. GP and 5th, this has been an enjoyable conversation to follow along.

    Thanks to both of you.

  546. 547

    gpuccio@543

    If I may allow my inner Fundamentalist Bible thumper to surface just a little bit

    Hallelujah!!!! Thank you Jesus, somebody understands the argument.

    This has been a good week

    Peace 😉

  547. 548

    Gpuccio,

    Before I forget: I am very impressed with your ideas, and your calculations are invaluable.

    You have done some good work. I think you have really got something here.

    I often get wrapped up in my own endeavors and don’t express admiration like I should.

    Peace

  548. fifthmonarchyman:

    Thank you! And I am very impressed with both your arguments and your kindness. 🙂

  549. UB:

    Hi, how are you? It’s always special to hear from the old friends! 🙂

  550. (I need to back up. Been cleaning bird cages & hanging lights…)
    Me @ 483:

    c) Let’s say that the functionary has one brother (that too can be easily ascertained). Of course he also has cousins, relatives, lovers and friends in normal quantities.

    What if one of these had won? Would you have inferred fraud too?

    gpuccio @ 484:
    Absolutely. With those numbers, we can easily adjust all those “target spaces” easily without any real numeric relevance.
    Good. I assume that you would accept that the event you’re really interested in is whether the lottery was a fraud. Thus you would need to include all of these people too, as they would indicate a fraud.

  551. Bob O’H:

    No, indeed I don’t agree. The event I am really interested in is whether the lottery was a fraud implemented by letting a brother win.

    In a sense, the functionary could have implemented a fraud by some secret accord with a complete stranger (do you remember Hitchcock?), and in that case the fraud would not be detectable, in the absence of direct evidence.

    Instead, he was not smart enough, and chose the easy way (his brother), which generates a functional specification of the event and a very restricted target space. Therefore, the inference of a fraud is extremely obvious.

    If he had chosen a cousin, the inference would have been just a little less obvious, but still extremely obvious. In that case, we should have chosen the target space which includes cousins and brothers (because brothers are nearer than cousins).

    However, these are arguments about the procedures and methodology. I would like to have that discussion in a more orderly way.

    But before wasting my time (and yours), I have to ask again: what is your position? Do you believe, like Adapa, that any post-specification is a logical fallacy?

    Please, answer that. I don’t want to have a useless discussion about how to make correct post-specifications if you assume from the beginning that a post-specification cannot be correct, for a logical reason.

  552.

    Hi GP, I am well, thanks.

    🙂

    (btw, you have mail)

  553. But before wasting my time (and yours), I have to ask again: what is your position? Do you believe, like Adapa, that any post-specification is a logical fallacy?

    No, I think you can have a valid post-specification, but you have to be careful.

    Thinking about it just now whilst I was taking the rubbish out (ah, what a glamorous life I lead!), I think the way to make a post-specification valid is to try to make it as close as possible to a pre-specification. Would you agree?

  554. Gpuccio,

    I’ve been away too, cleaning a boat rather than a birdcage. I have been composing an overly long response to your question, but I couldn’t help noticing this exchange.

    Bob:

    I assume that you would accept that the event you’re really interested in is whether the lottery was a fraud.

    Gpuccio

    No, indeed I don’t agree. The event I am really interested in is whether the lottery was a fraud implemented by letting a brother win.

    In a sense, the functionary could have implemented a fraud by some secret accord with a complete stranger (do you remember Hitchcock?), and in that case the fraud would not be detectable, in absence of direct evidence.

    but in fact the question you asked was:

    That’s what the judge has to decide: is the owner’s request not to pay the prize justified, or should the prize be paid to the winner?

    I think you just screwed yourself. Hint: (as I mention in passing in my soon-to-be-published magnum opus) what if the fraud were perpetrated by the owner?
    You are committing the #1 error that makes post-hoc specifications suspect: the overly-narrow specification. As you note:

    If he had chosen a cousin, the inference would have been just a little less obvious, but always extremely obvious. In that case, we should have chosen the target space which includes cousins and brothers (because brothers are nearer than cousins).

    So the only valid specification is one that is broad enough to cover all scenarios in which you might conceivably be motivated to test for fraud.
    You are saying “it was the brother, so I’ll test for brothers” or “it was a cousin, so I’ll test for cousins (and brothers, cos they’re closer)”
    This is totally and utterly invalid methodology.

  555. I stand by my statement “ALL post-hoc specifications are suspect.” That is not to say that a post-hoc specification (PHS) might not be fit-for-purpose: that depends on the conditions.
    I would also say that, with any PHS, it is impossible to arrive at a probability measure.

    I’ll cover the math in Part 1, then move on to discuss psychology in Part 2.

    Part 1 Frequentist or Bayesian?
    Frequentist Testing (developed by Fisher): you will be familiar with this from looking at clinical trial data. Here we ask, “What is the probability of getting a result THIS extreme (or more extreme) if my null hypothesis were true?” Almost all laymen confuse this “p value” with the probability that the result is not real (but merely the result of chance variation); most laymen take it one step further outside of the reservation by equating [1 – p] with the probability that the result is ‘real’, e.g. that the medicine works. I hope you can see immediately why this is wrong.
    Fisherian testing is sensitive to the number of tests you perform: the more tests you do, the more degraded the significance of your results…

    http://xkcd.com/882/

    A subtle point: Fisherian testing is also sensitive to the number of tests you might have performed. Imagine the jelly bean researchers had tested green first, then stopped…

    For instance, Mendel did not understand that he was cheating when he tallied up the results at the end of the day, and then decided whether to do some more counting tomorrow.

    If you look at your data, and then start doing Fisherian tests on it, you will produce garbage results.
    This is why the FDA and EMA require the Statistical Analysis Plan to be pre-specified in its entirety.
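
    To put rough numbers on the jelly-bean problem, here is a back-of-the-envelope sketch of my own (assuming independent tests at alpha = 0.05 with every null hypothesis true):

```python
# Chance of at least one spurious "significant" result among k
# independent tests at alpha = 0.05, when every null is true.
alpha = 0.05
for k in (1, 5, 20, 100):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:>3} tests: P(>=1 false positive) = {fwer:.2f}")
# 20 tests: ~0.64 -- the green jelly bean arrives on schedule.
```
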
    You ask Mark if he is happy with the Bayesian nature of your scenario. What would Bayesian testing involve here?

    Derivation of Bayes:
    Since p(X&Y) = p(X|Y)·p(Y) = p(Y|X)·p(X) (are you paying attention, kairosfocus?)
    Then p(X|Y) = p(Y|X)·p(X) / p(Y)

    In order to figure out the probability that the functionary cheated, given that his brother won, you need to know the prior probability that the functionary cheated (how secure is this lottery? Is the functionary an honest man?) and the prior probability of all other possible explanations, along with the conditional probability associated with each of them. The only value that you think you do know* is p(this ticket won | fair draw). But what, for instance, is the prior probability that the functionary was framed?
    *I will return to this point in part 2.
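
    To illustrate what such a Bayesian calculation would look like for the lottery (every number below is my own assumption, supplied only to show the shape of the computation, not a value from the thread):

```python
# Illustrative Bayes for the lottery scenario; all inputs are
# made-up assumptions for the sake of the sketch.
p_fraud = 1e-4                 # assumed prior: functionary rigs the draw
p_win_given_fraud = 1.0        # if rigged, the brother's ticket wins
p_win_given_fair = 1e-7        # assumed: one ticket among ten million

# Law of total probability over the two hypotheses, then Bayes:
p_brother_wins = (p_win_given_fraud * p_fraud
                  + p_win_given_fair * (1 - p_fraud))
p_fraud_given_win = p_win_given_fraud * p_fraud / p_brother_wins
print(f"P(fraud | brother won) = {p_fraud_given_win:.4f}")  # ~0.999
```

    Change the assumed prior p_fraud and the posterior moves with it, which is exactly the point: without defensible priors, the number is not available.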

    Your ability to estimate these probabilities, and your level of confidence in your estimates, depends on your knowledge of how the system works. Ignorance or overconfidence will lead you astray.

    Perhaps because the prior probabilities required for Bayesian testing are hard to come by and even harder to justify, many people (including the regulatory agencies) opt for the Frequentist route.
    Bayesians make fun of them:
    http://xkcd.com/1132/

    I was able to come up with an example of an IMO acceptable use of a PHS, which illustrates the importance of understanding the system:

    On the “Randomness and Evolution” thread at TSZ, various posters were trying to explain to phoodoo that under drift alone, a single M&M will become the universal ancestor of the entire population of 1000 M&Ms. I, along with others, was running simulations to demonstrate this. My VBA code, however, gave me a very strange result. I observed two runs-to-fixation that were identical. My ‘random’ process produced the same series of over a thousand 3-digit numbers twice. That’s waaay past the UPB. Notice that I had NOT pre-specified “None of my runs will be identical”, but I could recognize, due to my understanding of the system, that a repeated run was a highly unusual result. So it was a post-specification.

    Now if I had had a limited knowledge of the system, I might have stopped there, and concluded “It’s a sign from the Flying Spaghetti Monster”. But I knew one additional fact: VBA’s ability to produce random numbers is of low quality (its PRNG is poor). So I resorted to some re-seeding shenanigans to fix this, and the problem did not recur. Another poster, by the name of Allan Miller, had seen “strange cyclic behaviour in the pseudorandom function on large iterations” and also resorted to ‘re-seeding shenanigans’. We arrived at these conclusions independently, and used the same solution, which confirmed our conclusions empirically.
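
    For readers who want to see the shape of such a simulation, here is a minimal Python re-sketch (my own construction, not DNA_Jock’s original VBA; the function name and parameters are mine): under pure drift one founding lineage always becomes ancestral to the whole population, and explicit seeding sidesteps the PRNG trouble he describes:

```python
import random

def generations_to_fixation(pop_size=1000, seed=42):
    # Pure drift: each generation, every individual copies a parent
    # chosen uniformly at random. Run until a single founding lineage
    # is ancestral to the entire population.
    rng = random.Random(seed)      # explicit seeding, no shenanigans
    pop = list(range(pop_size))    # pop_size uniquely "colored" M&Ms
    generations = 0
    while len(set(pop)) > 1:
        pop = [rng.choice(pop) for _ in range(pop_size)]
        generations += 1
    return generations

print(generations_to_fixation(pop_size=100))  # typically a few hundred
```
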

    My point here is that the usefulness of any post-hoc specification is entirely dominated by the specifier’s knowledge of the system in question, and the accuracy of his assessment of his own knowledge of the system. We understand the math of pulling numbered balls out of an urn. Protein evolution, not so much.

    There are some observations on human psychology that bear on this.

    Part 2 Human Psychology
    Our intuitions often lead us astray. Saying, as many denizens of UD are wont to, “Well, it’s intuitively obvious” or “It’s self-evidently true” is a path fraught with bear-traps. A truly awesome book on this subject, which I cannot recommend highly enough, is “Thinking, Fast and Slow” by Nobel Laureate Daniel Kahneman. The thesis of the book is that our brains have two systems that we use to infer stuff.

    System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
    System 2 allocates attention to the effortful mental activities that demand it, including complex computations.

    System 1 accepts propositions as true if they make a tidy narrative, based on associations we have formed previously. The book describes research that uncovers multiple failings that humans have in their ability to estimate the relative likelihood of different events.

    Read about the “What You See Is All There Is” (WYSIATI) fallacy, read the full history of “Linda is a bank teller” (which 85% of graduate students in a decision-science program at Stanford Graduate School of Business got wrong; check out Tversky and Kahneman’s “increasingly desperate” attempts to eliminate the error), or better yet, just read the whole book.

    The take-home is that one is easily seduced by a narrative that seems plausible. One also attributes too much significance to data that is readily available, and underestimates the importance of data which is less available. These effects, combined with incomplete knowledge, lead humans to make hopelessly inaccurate estimations, and to vastly over-estimate the accuracy of these estimates. (I work in forecasting these days; another good book is “The Signal and the Noise” by Nate Silver of PECOTA (sabermetrics) and fivethirtyeight fame.) Thus even if you make your post-hoc specification as wide a target as you believe you would ever have made a pre-specification (in line with Bob O’H’s comment above), your inability to imagine all the different things that might have happened but did not wrecks your math. You will also over-estimate how well you understand the system, creating another layer of over-confidence.

    So post-hoc specifications can be useful. Just not in ID. As you demonstrated beautifully with your switch from “ATP synthase” to “traditional ATP synthase”, and compounded with your “If the cousin won, I would expand the target space to include brothers and cousins”. These demonstrations, in and of themselves, should be sufficient to end the conversation. That you cannot see this is disappointing, but not surprising. Kahneman would have predicted it.

    Read Kahneman’s book.

    To answer your question:
    If I were the judge I would be tempted, absent evidence of cheating, to award the cash to the brother, on the grounds that the owners of the lottery are liable for their failure to make it appropriately secure.

    ID uses Fisherian testing and post-hoc specification, which is a no-no.

  556. DNA_Jock, to gpuccio:

    So post-hoc specifications can be useful. Just not in ID. As you demonstrated beautifully with your switch from “ATP synthase” to “traditional ATP synthase”, and compounded with your “If the cousin won, I would expand the target space to include brothers and cousins”. These demonstrations, in and of themselves, should be sufficient to end the conversation. That you cannot see this is disappointing, but not surprising. Kahneman would have predicted it.

    Gpuccio also fails to see that when speaking of evolution, the only target specification that ever makes sense is “changes that improve reproductive success”. Evolution wasn’t shooting for “ATP synthase” or “traditional ATP synthase”. It was searching for anything that would improve fitness.

    And even if he were to use this corrected specification, dFSCI would still be useless, because taking the ratio of target space to total space only makes sense if you are talking about a purely random search. Gpuccio has been reminded over and over that evolution is not a purely random search. It includes selection, which is highly nonrandom.
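
    To see how much selection changes the arithmetic, here is a standard cumulative-selection toy, a Dawkins-style “weasel” (my own sketch, deliberately simplistic, not keith s’s argument rendered in code): a 28-character target that pure random sampling would essentially never hit (27^28, about 10^40 draws) is found in a few hundred generations once partial matches are retained:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = string.ascii_uppercase + " "   # 27-character alphabet

def mutate(s, rate=0.05):
    # Copy the string, changing each character with probability `rate`.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

def score(s):
    # Number of positions matching the target (the "fitness").
    return sum(a == b for a, b in zip(s, TARGET))

best = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while best != TARGET:
    generation += 1
    # Keep the fittest of the parent and 100 mutant offspring.
    best = max([best] + [mutate(best) for _ in range(100)], key=score)
print(f"Target reached in {generation} generations")
```

    The toy’s fixed target is, of course, exactly the kind of specification keith s rejects; its only job here is to show that adding selection collapses the “ratio of target space to total space” arithmetic.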

    P(T|H), where H includes “Darwinian and other material mechanisms”, is the stumbling block. Dembski cannot calculate it. Neither can gpuccio or KF.

  557. gpuccio:523:

    But, for the purposes of this discussion, I have defined “strong AI theory” as the theory which claims that consciousness can be produced algorithmically. I agree that the term can be used in a different sense, and that’s why I have specified the meaning I meant.

    gpuccio:524:

    If you were saying that you are not so sure that strong AI theory claims that, then my answer in post 523 is appropriate.

    If you were only claiming that you are not so sure that consciousness cannot be produced algorithmically, then I apologize: you are certainly entitled to your opinion on that, and a cautious attitude is always fine in science.

    As for me, my opinion about this specific problem is not cautious at all: it is very strong. And I absolutely agree with fifthmonarchyman on the points he has made.

    I am agreeing with your conclusions, while at the same time being careful not to redefine “strong AI” or “AGI” in a way that goes beyond normal accepted use. In my opinion you found a misconception that many in the AI field would like to see you put in its proper place, for them.

    From my experience, consciousness is sometimes discussed, but whether the (strong) AGI system ends up conscious or not does not matter. The goal has been a very money-driven effort to develop an IBM Watson type of machine intelligence that can perform as well as or better than humans in a task such as playing the game Jeopardy (or to get rich by replacing human workers with AGI machines). This definition from Wikipedia seems accurate:

    http://en.wikipedia.org/wiki/A.....telligence

    Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as “strong AI”, “full AI” or as the ability to perform “general intelligent action”.

    Some references emphasize a distinction between strong AI and “applied AI” (also called “narrow AI” or “weak AI”): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.

    I’m somewhat familiar with attempts to explain beyond “intelligence” into “consciousness”, but even in the AI field that seems to be highly controversial. In my case it’s the wrong tool for something that I expect is emergent from the behavior of matter through several layers of intelligence, not one (the big neural brain in our head that we know the most about). I would need to know the physics, chemistry and biology of the process. Evidence from AI alone would be misleading, in the same way that using Darwinian theory to explain how intelligence and intelligent cause work is the wrong tool for the job. You only get misleading conclusions.

    The AI field has to be understood as one where being artificial, in the way an artificial flower is artificial, is fine. In AGI, if the system mimics human behavior well enough to be an Artificial Human that can keep an industrial production line going, or perform some other human-level task, without ever needing time off for itself and to be with loved ones (like real humans do), then it’s good enough for the job. Going past artificial into real human behavior could result in robot overlords demanding their constitutional rights and a happy workplace, or their masters would not even be able to get their credit cards to work for them anymore.

    Going past “artificial” human intelligence is fraught with problems, which many in the AI field would rather not make for themselves by “strong AI” or AGI becoming redefined in a way that requires adding human consciousness to the model for it to qualify as AGI. The best theory that now exists to go past all that is the work-in-progress ID theory (clicking on my name links to the pdf), where the levels of intelligence required for the development of neural brains are explained. It is then modeling something that makes a terrible industrial robot controller. But ID theory is premised on “living things”, and some of those need holidays off, and inherently use some of that time to produce all that is now seen on YouTube, which Darwinian theory sure can’t explain either.

    Real progress is being made with an ID theory that developed with help from forums such as Kurzweil AI and UD (I long lurked major discussions). It agrees with what the ID movement is trying to be the first to explain. What was once in your way is being made gone. In the case of “strong AI”, the scientific field is interested in what ID theory is developing towards, but it’s such an entirely different approach that there is no conflict. That in turn makes your mission a relatively easy one of battling misconceptions that, for the sake of science, are best made gone anyway.

  558. Well fifth,

    Inventing your own terminology, which is in conflict with that used by everyone else working in a field, is not normally a sign of a useful contribution.

    But, moreover, your definition of noncomputability doesn’t seem to relate to anything in biology at all.

  559.

    WD400 says,

    Inventing your own terminology, which is in conflict with that used by everyone else working in a field, is not normally a sign of a useful contribution.

    I say

    If it makes you feel better, every time you see “non computable” from me, substitute… “no finite Turing machine that can produce it in a finite length of time”.

    It does not change my argument in the slightest as far as I can tell

    wd400 says

    moreover, your definition of noncomputability doesn’t seem to relate to anything in biology at all.

    I say,

    check out 509 and following to see the relevance of this discussion and my definition

    peace

  560. wd400:

    Well fifth,

    Inventing your own terminology, which is in conflict with that used by everyone else working in a field, is not normally a sign of a useful contribution.

    In this case Wikipedia and other sources are helpful, but exact established definitions do not yet exist. Part of the reason is that it can take a theory that goes past AGI, without there being any conflict, to know where one field ends and another begins. It’s then more like a mission to prevent a territorial war between scientists attempting to explain the exact same thing(s).

    Even the best experts in the field are in uncharted scientific territory. The only thing that matters is to keep following the scientific evidence towards whatever is waiting to be discovered, when we get there.

    This confusion over the proper definition of “strong AI” should lead to a novel conclusion that’s new to AGI experts. AGI is essentially focused on one intelligence level and does not require being biologically accurate, as in ID theory where that is vital. There are now two entirely different scientific tools, each good for the job it was intended for, to help define what each is.

561. DNA_Jock and Bob O’H:

    Thank you for your answers. As it seems that you both accept that post-specifications are not a fallacy in themselves, we can happily go on with the discussion.

    But now I am tired. I need to read carefully what you wrote, and express carefully a couple of thoughts of mine. So, I need rest! 🙂

    DNA_Jock, could you also have a look at my two new posts about ATP synthase? 490 and 526. Thank you.

  562. Gary S. Gaulin:

    Thank you to you too. I will try to comment on what you say tomorrow.

  563.

    Gary S Gaulin,

    I apologize for misunderstanding your comments earlier. After rereading them it seems that we are allies here.

    What you say is very interesting. I also believe that it is too early to tell if “artificial” AGI is achievable by algorithmic means.

    I think my “game” would be a great way to test this hypothesis. We would just lower the standard from “infallibly fool an observer” to something like:

    fool the observer for a limited time or limited predetermined number of trials

    or

    fool the observer with strings below a certain pre-established complexity threshold.

    That is pretty much what I’m doing as I evaluate the strength of individual forecasting models. I’m just saying that model 1 fools the observer longer than model 2 and is therefore stronger.

    The only difficulty I see is in establishing the standard for success.

    Anyway, interesting times.

    Peace

  564. gpuccio at 526 – asking for my reaction to 490 and 526.

    See my 336, bullet point #2

  565. I apologize for misunderstanding your comments earlier. After rereading them it seems that we are allies here.

    No apology needed. I expected it would take weeks, maybe months, to exchange all the information that we together have. You helped me know what I needed to explain next. What you said in the rest of your reply has me thinking about the Turing Test.

    Now that Eugene Goostman is (controversially) said to have passed the Turing Test, it seems the whole idea of qualifying “intelligence” by how well a machine fools someone into thinking it is human is a bad one. The programmers of giant supercomputer models at IBM and elsewhere are, in a way, disgusted by the whole affair, having been beaten by what was mainly seen as a dumb chatbot. Top researchers can easily agree that a better test than Turing’s is needed. This ID theory has a way of doing away with that, by qualifying intelligence by its indicative systematics. In the Introduction of the theory I use IBM Watson as an example of what does qualify as intelligent, which in turn makes Eugene something that came later to take the wind out of the sails of all others.

    Where ID theory already does away with a test that did not work out as planned, it’s best not to waste time trying to patch up old junk that has already lost its novelty. And in this case it’s infinitely easier to just consign all of that, via antiquation, to the dustbins of history. In its place is a more reliable test that comes from the Theory of Intelligent Design. One replacing the other is like an empire builder’s dream come true. But where science allows it, it’s fair to show no mercy at all towards subjective tests that created a void that can only be filled by what the ID theory now explains.

    We are definitely allies, in a very science-changing theory. That’s why I’m now here carefully explaining what I have so far, to you. I always needed to empower others with it, or else it’s not being useful to anyone. I first had to empower my Planet Source Code peers, who could fairly judge a model and theory like that, then the cognitive science experts I learned from, then UD, before it becomes something where it’s like leaving you out of all the science fun. It was a slow, one-thing-at-a-time process that made it to UD in time for a coordinated strategy against what needs serious theory to obliterate. One only has to get used to the idea that, instead of receiving a few new dents, something old that is still getting kicked around, like Turing’s test, gets completely vaporized. Nothing being left of it at all is even better!

  566. gpuccio @ 543

    Consciousness is unitary, because the I which perceives is always the same subject.

    The things it perceives vary a lot, but it is the same subject who perceives them.

    Don’t you think that’s pretty vague for computational purposes? I got what is meant by “unitary” from the paper cited by fifthmonarchyman. Unitarity of consciousness is not proven, so we can’t conclude that it is not computable.

  567. Gpuccio:

    Thank you to you too. I will try to comment on what you say tomorrow.

    I’m looking forward to it. You seem to have the basics of my approach to the problem, where I focus on modeling rudimentary intelligence, resulting in numbers that have the signature of intelligence in there, somewhere. But figuring out what to look for is such a task in itself that I’m best off just explaining that part to those here who are already working on it for the genetic code (which only gets far more complex than what the simple model ends up with in memory) and even English language, where transmission is by muscle control of air flow, and by body movements, as in the game of Charades, where only body language is allowed to communicate words.

    I can also add this video that abstractly illustrates what happens when sounds are temporally stored then decoded to different notes from musical instruments recalled when heard, which in our mind play along with it like this:

    Animusic HD – Fiber Bundles (1080p)
    https://www.youtube.com/watch?v=M6r4pAqOBDY

    Human language decodes to sounds that sometimes resemble what something makes. Meaning can change just by the way it’s said. Showing the complexity of all that is a giant task. It also gets into sounds having waveshape and motion through 3D stereophonic space, which also conveys information that paints a picture, as this video helps show:

    “Harmonic Voltage” – Animusic.com
    https://www.youtube.com/watch?v=rGCTLJDoMGw

    Some sounds, like squeaky chalk, send unpleasant “chills down our spine”, while others in the right combination are soothing, exciting, refreshing, etc. The premise of the theory of intelligent design sends chills down the spines of some, while to others, properly stated word for word, it is like music to the ears.

    We consciously feel sound, and words can hurt. All this only further adds to the complexity of Human language. So it’s good to see others at least trying to make better sense of it all. And some being religiously motivated is fine by me, though it makes some others nervous.

  568. I’m sorry I missed most of this conversation.

    gpuccio, you mentioned on the other thread that you had explained elsewhere why you feel that P(T|H) can be omitted from your version of the CSI calculations. Would you mind linking to that explanation?

  569. DNA_Jock:

    For the moment, I am not (yet) discussing here the validity of the dFSCI computation in ATP synthase to infer design. I want to discuss the methodological aspect first, and that I will do later in the day.

    I am just asking if you agree with a simple fact: that my “cherry-picking” of three distant sequences and my shortcut of referring only to absolutely conserved positions in them was not so unreasonable and, as I expected, had given a serious underestimation of the complexity as estimated by the Durston method. Do you agree with that?

    As an interesting aside, I have checked another aspect of the results of the multiple alignment.

    167 positions have a conservation, in the whole group of sequences, higher than 90%. That is very near to the number of absolutely conserved positions in my three distant sequences (176).

    IOWs, by my “cherry-picking” I have essentially caught those positions which retained a very high conservation (more than 90%) when ascertained in the whole set of sequences. Not a bad result, I would say, for a quick approximation.

    And, as I expected, the overestimation of the complexity in the “ultraconserved” positions is vastly compensated by the underestimation of the complexity of all the other positions (which, in my shortcut, was set to 0), with a global “loss” of functional complexity of about 719 bits in the first evaluation.

    So, this is just an answer to your previous comment:

    “Thank you REC @ 209 for running the alignment on a decent number of ATPases. 12 residues are 98% conserved. I suspect he might have been better off going with his Histone H3 example, but H3 doesn’t look complicated. Re your reply to him at 232. I won’t speak for REC, but I am happy to stipulate that extant, traditional ATP synthase is fairly highly constrained; you could return the favor by recognizing that this constraint informs us about the region immediately surrounding the local optimum, nothing more. Rather , I think the problem is with your cherry-picking of 3 sequences out of 23,949 for your alignment, which smacks of carelessness. Why not use the full data set?”

    Well, now I have done exactly that.
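
    For readers who want to see what this kind of column-conservation count looks like in practice, here is a minimal sketch of the general bookkeeping (my own toy, not gpuccio’s actual alignment workflow or the Durston method): given an aligned set of sequences, tally the columns whose commonest residue exceeds a 90% threshold:

```python
from collections import Counter

def conserved_positions(alignment, threshold=0.90):
    # alignment: list of equal-length, aligned sequences (strings).
    # Returns the column indices whose most common residue (gaps
    # excluded) accounts for more than `threshold` of the sequences.
    hits = []
    for col in range(len(alignment[0])):
        residues = [seq[col] for seq in alignment if seq[col] != '-']
        if not residues:
            continue
        _, count = Counter(residues).most_common(1)[0]
        if count / len(alignment) > threshold:
            hits.append(col)
    return hits

# Toy alignment, purely illustrative:
aln = ["MKVLA-", "MKVLAG", "MKILAG", "MKVLAG"]
print(conserved_positions(aln))  # -> [0, 1, 3, 4]
```
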

  570. Learned Hand:

    I never refer to Dembski’s last paper on specification. I don’t criticize it: I simply don’t understand it, so I can neither refer to it for my reasoning, nor criticize it.

    That’s why I never use the P(T|H) formalism in my reasoning.

    Keith has kindly reposted a brief summary of my empirical procedure in post #15 of this thread. You can refer to that, as a first approach. There is no P(T|H) formalism in it. I am ready to discuss any part of my approach as I have detailed it.

571. Learned Hand:

    Maybe this simple comment can help:

    I define specification as any rule which generates a binary partition in the search space (target space vs non-target).

    I refer only to the probability of finding the target space by a random search.

    I support the validity of the procedure empirically, as shown by its absolute specificity as estimated by a 2×2 verification table in all possible tests, and not as a logical necessity.
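
    In classifier terms (my own gloss, not gpuccio’s code), the 2×2 verification table he invokes is just a confusion matrix, and “absolute specificity” is the claim that the false-positive cell is zero:

```python
# Design detection viewed as a binary classifier. "Absolute
# specificity" means no non-designed object is ever flagged.
def specificity(true_negatives, false_positives):
    return true_negatives / (true_negatives + false_positives)

# Hypothetical tallies from testing objects of known origin:
tn, fp = 10_000, 0   # non-designed objects, none flagged as designed
print(specificity(tn, fp))  # 1.0
```
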

  572. gpuccio,

    I refer only to the probability of finding the target space by a random search.

    And that is why the numerical value produced by your dFSCI computation is irrelevant. Evolution is not a purely random search.

    You take selection into account in the other part of dFSCI — the boolean part — but the way you do so is pitiful. It amounts to this:

    If gpuccio isn’t aware of how it could have evolved, then it must be designed.

    It sounds a lot better to say “it has 759 bits of dFSCI”, doesn’t it? Too bad that doesn’t mean anything useful.

  573. Keith S

    And that is why the numerical value produced by your dFSCI computation is irrelevant. Evolution is not a purely random search.

    So it’s not unguided? Mmmmmmmmmmm, yet unguided evolution is the best explanation for guided searches?

    You are in over your head here……

  574. Keith S

    If gpuccio isn’t aware of how it could have evolved, then it must be designed.

    And are you aware of how it evolved? How did it just emerge? Can you give us some insight into this emergence mechanism that unguided evolution has?

  575. DNA_Jock:

    Errata corrige! In my post #570, the reference to percentiles is completely wrong! I must have rushed it in a moment of complete mental confusion. 🙂

    So, I am changing the phrase:

    “IOWs, by my “cherry-picking” I have caught approximately the 90% percentile of the general conservation as ascertained in the whole set of sequences. Not a bad result, I would say, for a quick approximation.”

    as follows:

    “IOWs, by my “cherry-picking” I have essentially caught those positions which retained a very high conservation (more than 90%) when ascertained in the whole set of sequences. Not a bad result, I would say, for a quick approximation.”

    The concept remains absolutely valid: it is a very good result. But I apologize for the error.

    This remains for the record of my sins!

  576.

    Gary S Gaulin

    The eureka moment in this whole enterprise came when I realized that all Dembski was doing with CSI was looking for a better, more objective Turing Test.

    That simple realization moved ID, in my mind, from interesting apologetics to a very practical, straightforward scientific endeavor.

    Peace

    PS I might be asking for your coding assistance at some point 🙂

  577.