Uncommon Descent Serving The Intelligent Design Community

An attempt at computing dFSCI for English language

Share
Facebook
Twitter
LinkedIn
Flipboard
Print
Email

In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design.

One of the objects was a Shakespeare sonnet.

My answer was the following:

A Shakespeare sonnet. Alan’s comments about that are out of order. I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in english (OK, ancient english).
Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600 characters sequences which make good sense in english is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.

In the discussion, I admitted however that I had not really computed the target space in this case:

The only point is that I have not a simple way to measure the target space for English language, so I have taken a shortcut by choosing a long enough sequence, so that I am well sure that the target space /search space ratio is above 500 bits. As I have clearly explained in my post #400.
For proteins, I have methods to approximate a lower threshold for the target space. For language I have never tried, because it is not my field, but I am sure it can be done. We need a linguist (Piotr, where are you?).
That’s why I have chosen and over-generous length. Am I wrong? Well, just offer a false positive.
For language, it is easy to show that the functional complexity is bound to increase with the length of the sequence. That is IMO true also for proteins, but it is less intuitive.

That remains true. But I have reflected, and I thought that perhaps, even if I am not a linguist and not even a amthematician, I could try to define better quantitatively the target space in this case, or at least to find a reasonable higher threshold for it.

So, here is the result of my reasonings. Again, I am neither a linguist nor a mathematician, and I will happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize.

Let’s start from my functional definition: any text of 600 characters which has good meaning in English.

The search space for a random search where every character has the same probability, assuming an alphabet of 30 characters (letters, space, elementary punctuation) gives easily a search space of 30^600, that is 2^2944. IOWs 2944 bits.

OK.

Now, I make the following assumptions (more or less derived from a quick Internet search:

a) There are about 200,000 words in English

b) The average length of an English word is 5 characters.

I also make the easy assumption that a text which has good meaning in English is made of English words.

For a 600 character text, we can therefore assume an average number of words of 120 (600/5).

Now, we compute the possible combinations (with repetition) of 120 words from a pool of 200000. The result, if I am right, is: 2^1453. IOWs 1453 bits.

Now, obviously each of these combinations can have n! permutations, therefore each of them has 120! different permutation, that is 2^660. IOWs 660 bits.

So, multiplying the total number of word combinations with repetitions by the total number of permutations for each combination, we have:

2^1453 * 2^660 = 2^2113

IOWs, 2113 bits.

What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200000 English words. Or at least, a good approximation of that number.

It’s a big number.

Now, the important concept: in that number are certainly included all the sequences of 600 characters which have good meaning in English. Indeed, it is difficult to imagine sequences that have good meaning in English and are not made of correct English words.

And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.

So, I believe that we can say that 2^2113 is a higher threshold for out target space of sequences of 600 characters which have a good meaning in English. And, certainly, a very generous higher threshold.

Well, if we take that number as a measure of our target space, what is the functional information in a sequence of 600 characters which has good meaning in English?

It’s easy: the ratio between target space and search space:

2^2113 / 2^ 2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here)

So, if we consider as a measure of our functional space a number which is certainly an extremely overestimated higher threshold for the real value, still our dFSI is over 800 bits.

Let’s go back to my initial statement:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600 characters sequences which make good sense in english is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

By the way, another important result is that if I make the same computation for a 300 character string, the dFSI value is 416 bits. That is a very clear demonstration that, in language, dFSI is bound to increase with the length of the string.

Comments
gpuccio, What do you think of this? I know you understand this much better than I do: https://uncommondesc.wpengine.com/evolution/a-third-way-of-evolution/#comment-534047 Thank you. Dionisio
ZAc said, Tests of Bell’s Inequality indicate there are no local hidden variables. I say, Bell's Inequality is definitely above my pay grade. All I can do is point out that I'm not talking about local hidden variables if I understand them correctly I'm talking about a variable that is instantaneous and universal in scope. I think that would flow necessarily if the entire universe is the computer. Oh well all of this is just an interesting rabbit trail. QM could be truly random or apparently random and it would not change much as far as science goes. IMHO peace fifthmonarchyman
Zachriel:
Me_Think: If you mean evolution of encephalization, say so. Not sure, but Gary S. Gaulin may be referring to intercellular communication networks.
I am referring to neural brain produced "intelligence" as operationally defined by the systematics of a computer model of intelligence, which is explained in its theory of operation, the accompanying Theory of Intelligent Design. Cellular communication networks are a product of cellular intelligence. Not all cellular communication networks are intelligent at the multicellular level. Prior to the very beginning of the "Cambrian Explosion" there would have been no multicellular intelligence, but cellular intelligence was already very well established by then. Gary S. Gaulin
fifthmonarchyman: the key word is appear. Tests of Bell's Inequality indicate there are no local hidden variables. Zachriel
Me_Think:
I have skimmed through his 51 pages........... There is nothing there about evolution of intelligence.
You're lost without your fuzzy crutch word (evolution)? Are you just another scientific disgrace trying as hard as you can to turn science into a scientifically lazy follow the crowd dictatorship where all is explained by using insulting generalizations that cannot even explain how intelligence or intelligent cause works? Gary S. Gaulin
zac says On the other hand, quantum effects do appear to be truly random. I say, the key word is appear. Any string with out discernible patterns will appear random until and unless you can specify an algorithm that will reproduce it. It's possible that when it comes to quantum effects the algorithm is run on the universe itself. If that is the case quantum effects will always appear random to an observer inside the universe/computer peace fifthmonarchyman
Zachriel @ 921 I have skimmed through his 51 pages disjointed Theory of Intelligent Design . He touches on Multicellular and Human Multicellular Intelligence briefly on page 33 and 34. There is nothing there about evolution of intelligence. I think he just wants to highlight his VB6 program about what he calls Intelligence Design Lab critter. Most pages are bizarre. Eg: He runs read write operation and graphs the memory usage, says that is foraging, and claims that somehow represents intelligence evolution through Cambrian explosion!!:
The familiar lines seen here are predicted to be representative of the development of multicellular intelligence just prior to and through the Cambrian Explosion.
Me_Think
Me_Think: If you mean evolution of encephalization, say so. Not sure, but Gary S. Gaulin may be referring to intercellular communication networks. Zachriel
fifthmonarchyman: 1) True randomness does not exist and apparent randomness is merely a statement about the ignorance of the observer Randomness entails a couple of different concepts. In science, it generally means variables are uncorrelated. So we might say mutation is random with respect to fitness. That doesn't suggest that mutation doesn't have a cause, just that you can't predict one from the other. Algorithms can't produced truly random numbers, and all algorithmic random number generators eventually repeat. This is fairly obvious when you realize that there are a finite number of states for a finite digital computer, so if it runs continuously, it eventually has to return to a previous state. On the other hand, quantum effects do appear to be truly random. Zachriel
Me_Think and gpuccio Hey Guys, quick recap Ive now intuited that "random" equals an undetermined irrational constant corresponding to the algorithm that can reproduce the string. I can now plug your strings into my game and I believe something amazing will happen. I haven't done this yet but my hypothesis is that with gpuccio's string I will be able to fool the observer for a very long time with any irrational constant that I choose. Only when the observer discovers precisely which irrational constant was used to produce the original string will he cease to be fooled I further hypothesize That an observer will be able to quickly pick out any of Me_Think's strings in which the characters are arranged "randomly". In this way my game can act as a "randomness" detector. I'm telling you this is cool stuff peace fifthmonarchyman
Gary S. Gaulin @ 917,
... (click on my name for link) or go away.
You are a nobody . People don't have to spend time reading your theories to understand ID. If you have to say something about your theory which no body knows about, ask for an OP, write a book, write a paper and publish in ID journals. Me_Think
If you expected someone to discuss your model of evolution of intelligence as you understand it based on your theory , I can’t help.
Then you just admitted that you do not belong in a forum for discussing ID theory or cognitive science. You should now do the honorable thing and either start studying what you missed (click on my name for link) or go away. Gary S. Gaulin
Gary S. Gaulin @ 913
- I thought you are ancient in using kiddy drag and drop VB6 for your coding... -You sure like to pride yourself, for not even knowing what the kids are into these days.
You got me there !So I modify my comment thus: I thought you are ancient in using ancient drag and drop VB6 for your coding... Me_Think
Gary S. Gaulin @ 914, If you expected someone to discuss your model of evolution of intelligence as you understand it based on your theory , I can't help. Me_Think
Me_Think:
Read the damn NASA paper @ 910 if you want to know how encephalization is linked to evolution of intelligence.
And you sure are lost. But at this point it's best that I don't bother wasting more time on your excuses for not knowing what you're talking about (in regards to cognitive science). I had enough of your "evolution of intelligence" generalizations that cannot explain how either intelligence or intelligent cause works. Just more hand-waving with big-words to make you look-smart. Gary S. Gaulin
I thought you are ancient in using kiddy drag and drop VB6 for your coding,
You sure like to pride yourself, for not even knowing what the kids are into these days. Gary S. Gaulin
Gary S. Gaulin @ 911
How many times must I clearly say “multicellular intelligence” before you or anyone else defending Darwinian theory seriously addresses the origin of “multicellular intelligence”?
You think we are talking about Encephalization of single cellular organism ?- Is it even possible for a single cellular organism ? Are you that daft ? Read the damn NASA paper @ 910 if you want to know how encephalization is linked to evolution of intelligence. Me_Think
Wow! For someone peddling his own theories, you are remarkably uninformed. Evolution of Encephalization is evolution of intelligence !! You think Brain’s function is purifying blood ? Here is a premier from NASA
How many times must I clearly say "multicellular intelligence" before you or anyone else defending Darwinian theory seriously addresses the origin of "multicellular intelligence"? Where is your model (I have one) to show what to look for in the fossil evidence? Gary S. Gaulin
Gary S. Gaulin @ 909 Wow! For someone peddling his own theories, you are remarkably uninformed. Evolution of Encephalization is evolution of intelligence !! Here is a premier from NASA I thought you are ancient in using kiddy drag and drop VB6 for your coding, but find you are ancient in your thinking too. Me_Think
And:
If you mean evolution of encephalization, say so.
Wow! You sure were desperate for a big-word, to make you look-smart.
Encephalization is defined as the amount of brain mass related to an animal's total body mass. http://en.wikipedia.org/wiki/Encephalization
LOL!!! Gary S. Gaulin
Stop promoting your own concepts (your origin of intelligence blog is not standard scientific literature for us to study and understand).
I forgot that you need science magazines to keep you informed in what's going on in all of science, and to do all your thinking for you. You have to somehow keep the groupthink going, right? What better way than to dismiss what your magazines said they will not allow to be published anyway? A perfect plan to censor science!!!!!! Gary S. Gaulin
Gary S. Gaulin @ 906 to zac
... proliferation of multicellular intelligence is one of the very.... Your attention to scientific detail is at least consistent with what is expected from a clueless political hack.
Stop promoting your own concepts (your origin of intelligence blog is not standard scientific literature for us to study and understand). If you mean evolution of encephalization, say so. You can't expect us to know your non-standard concepts. Me_Think
And Zachriel, in regards to this brush-off:
You do realize there is ample evidence of life before the Cambrian Explosion, including multicellular life?
I stated quote:
Gary S. Gaulin: Not having beforehand predicted a sudden proliferation of multicellular intelligence is one of the very serious weaknesses of Darwinian theory.
Your attention to scientific detail is at least consistent with what is expected from a clueless political hack. Gary S. Gaulin
Zachriel:
As for your link, the problem with pseudo-science is that, because it’s not constrained by observation, it fractures into as many pieces as there are advocates (adaptive radiation).
Or in other words Zachriel is another political hack who does not even bother to study what they claim to understand. But go ahead and look at all this "pseudo-science" everyone! http://www.planetsourcecode.com/vb/scripts/BrowseCategoryOrSearchResults.asp?txtCriteria=Gary+Gaulin&lngWId=1 Gary S. Gaulin
I think I need to modify my specification A undetermined Irrational constant corresponding to the algorithm that produced it. needs to be something like A undetermined Irrational constant corresponding to an algorithm that can reproduce it. Peace fifthmonarchyman
gpuccio, Well, I approached your challenge as if I was the observer playing my game. I asked myself what I can know about your string. First of all since it is finite I know there is an algorithm that corresponds to it. Next I intuit that the digits of the string don't repeat and the program doesn't halt (via Chaitin's constant). Thus I can say that the the output of the algorithm is an irrational constant. I want to say this is a tentative observation on my part I have a lot to learn about the halting problem I'm not even sure I can speak intelligently about it right now. A couple of implications that flow from this 1) True randomness does not exist and apparent randomness is merely a statement about the ignorance of the observer 2) Even the first rung on the Y-axes in not attainable by algorithmic means Does any of that make sense? Peace fifthmonarchyman
#900 fifthmonarchyman You have written the post # 900 in this thread!!! Will this thread reach the 1000 posts mark? This thread, which started with an insightful OP by gpuccio, has almost twice as many posted comments as the most popular OP thread in this site: https://uncommondesc.wpengine.com/intelligent-design/a-world-famous-chemist-tells-the-truth-theres-no-scientist-alive-today-who-understands-macroevolution/ Ok, these stats are not that important at the end of the day, but it's kind of interesting to see how discussions can turn intensive and extensive. :) Dionisio
fifthmonarchyman: Yes, could you please elaborate? I am certainly interested, but I would like to understand well your point. gpuccio
gpuccio I hope you are still checking here from time to time I've been thinking hard about the mirror image relationship between your challenge and my "game" and I think I have a specification for your string A undetermined Irrational constant corresponding to the algorithm that produced it. I can elaborate if needed. This specification is the lowest level of the Y-axes! Besides uniting our two approaches it provides a way to define "random" that even rock ribbed Calvinists like me can get behind This has been one fun and productive vacation! peace fifthmonarchyman
fifthmonarchyman:
Do you believe that finite strings are computable even if their specification is unknown? If so we are at a metaphysical impasse.
Computability is not a metaphysical concept. It's a well-defined automata theoretic concept. A string is computable iff it can be produced by a Turing-equivalent system. Every finite string can be produced by a Turing-equivalent system, so every finite string is computable. This isn't a matter of opinion or metaphysics. R0bb
Zac said, That’s actually a slightly different question, which has to do with choosing an algorithm which computes the string. While such an algorithm exists, you won’t be able to tell which one. I say, Thank you God!!! some common ground You say That’s true of a Shakespearean sonnet, but it’s also true of a random sequence, or any string for that matter. I say. Agreed. see my response to gpuccio below fifthmonarchyman
Gary S. Gaulin writes:
This is the only known Theory of Intelligent Design that provides scientifically testable predictions and models to explain the origin of intelligence and how intelligent cause works.
What is the purpose of the word "known" in the above sentence? Are there "unknown" theories of Intelligent Design that provide scientifically testable predictions? If so, how would we know? Are ID proponents impressed with Mr. Gaulin's 40 pages of somewhat difficult prose and his impressive claim that his theory has explanatory power that produces testable entailments? Some ID proponent should be fetching this man a chair. Mr Arrington! Get Mr. Gaulin to write an OP. It's what we've been waiting for! Alicia Renard
Gary S. Gaulin @ 894
Your childish answers indicate that you still don’t even know what is explained by the Theory of Intelligent Design.
Are you a famous IDer who has published papers and books and expounded your theories across the world ? When even a Math and Philosophy PhD guy is struggling to make his work recognized, why do you think your own theory and schematics in your own blog will be known, much less read and understood by anyone at all ? Me_Think
Gary S. Gaulin: Your ... answers indicate that you still don’t even know what is explained by the Theory of Intelligent Design. We were responding to your comment that we were somehow "arguing that this planet suddenly appeared, at the very start of the Cambrian Explosion." That's obviously not the case. The Earth formed long before the Cambrian Explosion, and life appeared soon after that. As for your link, the problem with pseudo-science is that, because it's not constrained by observation, it fractures into as many pieces as there are advocates (adaptive radiation). Zachriel
Gary S. Gaulin: But good luck arguing that this planet suddenly appeared, at the very start of the Cambrian Explosion.
You do realize there is ample evidence of life before the Cambrian Explosion, including multicellular life?
Your childish answers indicate that you still don't even know what is explained by the Theory of Intelligent Design. Gary S. Gaulin
fifthmonarchy: Do you believe that finite strings are computable even if their specification is unknown? Glad you gave up on Kolmogorov Complexity. All finite strings are computable, and there are an infinite number of algorithms which can compute each one. fifthmonarchy: You can’t compute a string if you don’t already know it’s specification. That's actually a slightly different question, which has to do with choosing an algorithm which computes the string. While such an algorithm exists, you won't be able to tell which one. That's true of a Shakespearean sonnet, but it's also true of a random sequence, or any string for that matter. Zachriel
Zac let's cut to the chase Do you believe that finite strings are computable even if their specification is unknown? If so we are at a metaphysical impasse. You can't compute a string if you don't already know it's specification. This is a to me a self evident truth. If you deny this obvious truth discussion is futile as far as I can tell and the only way that I can see forward is science. fifthmonarchyman
fifthmonarchyman: I ask again are you claiming that ALL finite strings are computable? Yes. fifthmonarchyman: If you say yes you have ruled out ID a-priori and there is really no reason to discuss further. It has nothing to do with ID. It's a simple fact about a specific measure called Kolmogorov complexity. fifthmonarchyman: If you somehow think that that program has anything to do with my argument then There is no point in discussing this further with you, You're the one who introduced Kolmogorov complexity without apparently understanding it. fifthmonarchyman: I know you realize that all universal Turning machines can be considered equivalent so increased technical ability will not make the challenge any easier Your claim is that such an algorithm is impossible, but that's what you have yet to show. Gary S. Gaulin: That’s another brush-off. When a new niche becomes available, there are a multitude of opportunities for adaptation, hence we will usually see a spurt of variation, followed by a winnowing process. That doesn't resolve the specifics of the Cambrian Explosion, but provides other examples of the general pattern. Gary S. Gaulin: But good luck arguing that this planet suddenly appeared, at the very start of the Cambrian Explosion. You do realize there is ample evidence of life before the Cambrian Explosion, including multicellular life? Zachriel
Gary S. Gaulin: And what did you explain by spouting a smart sounding name for something?
Adaptive radiation occurs when a new niche becomes available. The Cambrian Explosion is a case of adaptive radiation on a large scale.
That's another brush-off. But good luck arguing that this planet suddenly appeared, at the very start of the Cambrian Explosion. Gary S. Gaulin
FMM, You wrote:
since the computability resources needed to specify an IC object is infinite the Kolmogorov complexity of said object is infinite by definition.
That claim is incorrect, and I have explained why. The simple program I cited proves it, by the very definition of Kolmogorov complexity. To knowledgeable observers, you look very foolish: a guy who doesn't know what he's talking about, lashing out at the people who are trying to explain it to him. Why not crack a book or two? Does everything have to be spoon-fed to you by your critics? Show some initiative and learn about irreducible complexity, computability and Kolmogorov complexity. They're interesting topics! keith s
I know you realize that all universal Turning machines can be considered equivalent
Not really.Every universal Turning machines has different states. For Eg. The smallest universal Turning machines put forth by Yurii Rogozhin has state and color set as (24, 2), (10, 3), (7, 4), (5, 5), (4, 6), (3, 10), and (2, 18). Wolfram in 2002 discovered the Four 2-state 4-color universal Turning machines. Me_Think
Zac said, Not being able to produce such an algorithm may mean nothing more than a lack of technical ability. I say, I know you realize that all universal Turning machines can be considered equivalent so increased technical ability will not make the challenge any easier peace fifthmonarchyman
Keiths said Any finite string can be produced by a program like the one I gave you above: I say NO offense but I'm not in the mood to play games. If you somehow think that that program has anything to do with my argument then There is no point in discussing this further with you, There are other critics who while not agreeing with me at least have a clue of what I'm talking about. Why don't you go back to your ONH argument. It doesn't require you to do much actual discussion. peace fifthmonarchyman
Any finite string can be produced by a program like the one I gave you above:
What about producing an irreducibly complex structure like a bacterial flagellum? Joe
The problem with your original post is the claim that the scarcity of the targets is a problem for evolutionary search.
Evolutionary search is an oxymoron wrt biology and the mainstream version of evolution. Joe
FMM:
I make no claims to being particularly well read it’s possible I’ve misunderstood something
That's fine, but why not work on remedying that? Learn what irreducible complexity, computability, and Kolmogorov complexity actually are, and then come back and make your argument. Any finite string can be produced by a program like the one I gave you above:
string = “<insert specified string here>”; output(string);
That is a finite program. Therefore, the Kolmogorov complexity of any finite string is finite. keith s
KeithS I make no claims to being particularly well read it's possible I've misunderstood something but Ive seen nothing Ive read here or elsewhere to contradict my understanding can you provide a link? you say. The Kolmogorov complexity of any finite string is finite. I say, Kolmogorov complexity is a measure of the computability resources needed to specify an object. I ask again are you claiming that ALL finite strings are computable? This is a simple straightforward question please answer yes or no. If you say yes you have ruled out ID a-priori and there is really no reason to discuss further. If you say no then our disagreement is semantics, You are apparently saying that a string can be not computable yet it's computation only requires finite resources. And I say that such a statement is illogical and incoherent peace fifthmonarchyman
fifthmonarchyman, The Kolmogorov complexity of any finite string is finite. This is a basic and well-known fact about Kolmogorov complexity. Why not crack a book now and then? keith s
fifthmonarchyman: Agreed, if you think my experiment is not well-devised please provide constructive criticism. The problem is that your experiment merely tests human technical limitations. Not sure if there is any way to salvage it. fifthmonarchyman: an Algorithm that infallibly fools the observer would be a very specific observation would it not? If you could find an algorithm that produced quality poetry, it would show that poetry is not beyond the capabilities of an algorithm. It used to be that chess was the ultimate test of human intelligence. Not finding such an algorithm is simply not significant evidence of anything other than the limitations of human capabilities. fifthmonarchyman: Again we are not trying to support my hypothesis we are trying to falsify it. Your hypothesis is that humans can't make such an algorithm, not that such an algorithm isn't possible. fifthmonarchyman: Are you implying that only positive prediction is valid in scientific inquiry? In the case of the bending of sunlight, a negative result would have been just as decisive. What a good hypothesis does is cleave the possible world into two, with a very distinct boundary. Your experiment will almost certainly fail, showing only the limitations of the experimenter. gpuccio: What a pity that a suitable oracle is usually provided only by intelligent engineering, and that highly connected spaces are so hard to find, especially when the search space becomes really huge The oracle could actually be the physical environment, as with some newer robot programs. Huge spaces are not a problem as long as they exhibit locality. Gary S. Gaulin: And what did you explain by spouting a smart sounding name for something? Adaptive radiation occurs when a new niche becomes available. The Cambrian Explosion is a case of adaptive radiation on a large scale. Zachriel
And Zachriel, most science defenders use the phrase "punctuated equilibrium" to get out of having to give an appropriate scientific answer to that one. Gary S. Gaulin
Gary S. Gaulin: Not having beforehand predicted a sudden proliferation of multicellular intelligence is one of the very serious weaknesses of Darwinian theory.
It’s called adaptive radiation.
And what did you explain by spouting a smart sounding name for something? The only thing I see in your statement is a brush-off of what I said. Gary S. Gaulin
Zachriel: Thank you for your last comments to my statements. I find them rather balanced. I agree that the fundamental role of conscious representations in cognition is still an open problem. There are, IMO, many arguments in favor of its essential role (including Godel derived arguments, and some basic intuitions about cognition itself). And I would say that there is no evidence that the opposite view, let's call it strong AI theory, has any rationale or any empirical support. But I agree, it is an open problem, and IMO a very important one for the whole scientific paradigm. I have definitely much less faith than you have in evolutionary searches. Sometimes I am surprised of how much faith, IMO unsupported, my "skeptical" interlocutors can harbor for things that help maintain their worldview. :) However, I have no problems in admitting that an "evolutionary search" can work rather well, given a suitable oracle and a highly connected functional space. What a pity that a suitable oracle is usually provided only by intelligent engineering, and that highly connected spaces are so hard to find, especially when the search space becomes really huge... :) gpuccio
zac says, No, but you can support a hypothesis with a well-devised experiment. I say, Agreed, if you think my experiment is not well-devised please provide constructive criticism. I'm open to any modifications what so ever as long as the "key" remains concealed from the programer. You say, A good prediction will have a very specific observation that if found to be false will contradict the hypothesis. I say, an Algorithm that infallibly fools the observer would be a very specific observation would it not? You say, You are making a negative prediction. NOT producing an algorithm does nothing to support your hypothesis. I say, Again we are not trying to support my hypothesis we are trying to falsify it. The failure to falsify provides a sort of indirect support I suppose but not proof by any means. You say, Consider a famous example, such as the degree of curvature of light around the Sun predicted by General Relativity. You take the observation. If it fails, then the theory is falsified. I say, Are you implying that only positive prediction is valid in scientific inquiry? Don't tell that to my boss ;-) I use negative prediction all the time in my work. It's often not as desirable as a positive prediction but lots of real world knowledge is built upon it. A negative result in a cancer screening for example can provide valuable information as can a negative result in a e coli test in a mountain stream . peace fifthmonarchyman
fifthmonarchyman: Not being able to produce an algorithm does not “prove” my Hypothesis Science can’t “prove” anything. No, but you can support a hypothesis with a well-devised experiment. fifthmonarchyman: Producing an algorithm falsifies my Hypothesis A good prediction will have a very specific observation that if found to be false will contradict the hypothesis. You are making a negative prediction. NOT producing an algorithm does nothing to support your hypothesis. Consider a famous example, such as the degree of curvature of light around the Sun predicted by General Relativity. You take the observation. If it fails, then the theory is falsified. Zachriel
Zac says. Add the scientific method to things you don’t understand. Not being able to produce such an algorithm may mean nothing more than a lack of technical ability. I say, Not being able to produce an algorithm does not "prove" my Hypothesis Science can't "prove" anything. Producing an algorithm falsifies my Hypothesis Falsifiability is the classic demarcation between science and non-science How is that a misunderstanding of the scientific method? peace fifthmonarchyman
gpuccio: I would simply say that such an algorithm cannot exist, and that a conscious agent who understands meaning and has complex conscious representations is necessary to do that. Z: That’s your opinion, but not something you’ve shown. That’s a good description of the problem, though. The problem with your original post is the claim that the scarcity of the targets is a problem for evolutionary search. It's simply not. As long as there are selectable pathways, evolution will find them. And we can show this by creating a landscape of meaningful phrases, even if only a tiny subset of what you point out is already scarce, to see if an evolutionary algorithm can navigate the landscape. This doesn't solve the problem of meaning, though. Zachriel
fifthmonarchman: since a Shakespearean sonnet is by definition a sonnet composed by Shakespeare the “overhead required” must specify all that is Shakespeare. Clearly that is a lot of information No. The longest possible shortest description is the string itself. The overhead is just what the program requires to call the identity function. Gary S. Gaulin: Instead of the discovery of (what later became known as) the Cambrian Explosion having been predicted by Charles Darwin ... Darwin was aware of the Cambrian Explosion. Gary S. Gaulin: Not having beforehand predicted a sudden proliferation of multicellular intelligence is one of the very serious weaknesses of Darwinian theory. It's called adaptive radiation. fifthmonarchyman: Shakespeare is important because we are using Shakespearean sonnets as a typical test case to illustrate what Irreducible complexity is and how the game works. Note to Me_Think: he's using an non-standard definition of irreducible complexity, as well as information, computable, Kolmogorov Complexity, and the scientific method. gpuccio: but are you saying that we can output a string by a simple program if we already know it? Yes, that is the longest shortest program which can output a given string in Kolmogorov Complexity. gpuccio: I am not an expert in Kolmogorov complexity, but is it possible that the real utility of it is to know if we can compute a string which we don’t know in advance by an algorithm simpler than the string itself, and not if we can output a string which is already in the algorithm? Think in terms of compression. What is the shortest possible representation of the string. It can be shorter than the original string, but can't be longer (other than calling the function). gpuccio: Now, even if the term “Kolmogorov complexity” is perhaps describing both cases, I think that we are dealing with two different concepts here. It's fifthmonarchyman's confusion. gpuccio: The interesting point is: how big must an algorithm be to compute a Shakespeare sonnet (or something equivalent) without previously knowing it? Don't know. How big was Shakespeare's mind when he wrote them? Any algorithmic solution is going to have to have access to all the very same information Shakespeare did, spelling and grammar, rhyme and rhythm, the history of England, tales told by countless others that he had heard, and wisecracks he heard at the local pub. gpuccio: Maybe fifthmonarchyman’s point is that such an algorithm would have infinite complexity. That seems to be his claim. He would do better not to redefine terminology. It confuses his readers, and leaves his own thinking muddled. gpuccio: I would simply say that such an algorithm cannot exist, and that a conscious agent who understands meaning and has complex conscious representations is necessary to do that. That's your opinion, but not something you've shown. That's a good description of the problem, though. gpuccio: How complex should an algorithm be to recognize all possible contexts of that kind? Very complex no doubt, and quite possibly non-computable, but that just isn't something you've shown. fifthmonarchyman: Prove that you can produce an algorithm that will fool an observer infallibly with out cheating when it comes to IC configurations and my claim that they are non-computable will be falsified. If you can’t do that my hypothesis stands Add the scientific method to things you don't understand. Not being able to produce such an algorithm may mean nothing more than a lack of technical ability. Zachriel
Gpuccio @ 863, Your text-detector algorithm works for texts in languages known to the detector. Congratulations. Not that I doubted this for a minute, of course. However, you did mention (@728) the specification “having good meaning in any pre-existing language that we may discover in the future, on other planets, everywhere”. Now THAT would actually be useful, telling us something that we didn’t already know. THAT would be analogous to a protein-design-detector. Presumably, it would also be able to detect encrypted messages. Could you provide a working example of this design-detector? The NSA would be very interested in a steganography detector. My responses at 720 & 745 stand. DNA_Jock
gpuccio said, So, has the algorithm computed Shakespeare’s sonnet? The answer is: no. The algorithm has computed the whole list of sequences made of English words from a list of all English words. I say exactly, I think it is important to understand the concept of the Y-axes here. The meaning of the sonnet goes far beyond the words themselves. The x-axes is indeed finite but the y-axes is infinite. peace fifthmonarchyman
Everyone, I am more than willing to share my Game with anyone If they wanted to give a crack at converting it from excel to a shareable app. just let me know how to contact you peace fifthmonarchyman
KeithS said, Yes, but I’m saying something more as well: the fact that we can always write such a program places a firm upper bound on the Kolmogorov complexity of any finite string. I say, You can "always write such a program" to produce a non-computable string ???? what? Can you not see the blatant contradiction? or are you claiming that every finite string is computable? If that is your claim then we are at a metaphysical impasse. The only way forward that I can see is to do science!! Prove that you can produce an algorithm that will fool an observer infallibly with out cheating when it comes to IC configurations and my claim that they are non-computable will be falsified. If you can't do that my hypothesis stands peace fifthmonarchyman
gpuccio:
Sorry to intrude...
No need to apologize.
...but are you saying that we can output a string by a simple program if we already know it? :)
Yes, but I'm saying something more as well: the fact that we can always write such a program places a firm upper bound on the Kolmogorov complexity of any finite string. So FMM's claim is clearly wrong:
Kolmogorov_complexity of an object, such as a piece of text, is a measure of the computability resources needed to specify the object. since the computability resources needed to specify an IC object is infinite the Kolmogorov complexity of said object is infinite by definition.
The Kolmogorov complexity of a finite string is finite. Irreducible complexity has nothing to do with it. keith s
keith s and fifthmonarchyman: Just another thought. From my OP, and considering the English language as made of 500000 words, we can see that there are only about 2^2271 sequences of 600 characters made of correct English words. This is a finite number, although a very big one. About 10^684. So, an algorithm which includes a list of those 500000 words can, in a time which I will not try to compute, but finite anyway, output the whole list of all possible sequences of 600 characters made of English words. As said, the sequences which have good meaning in English will be among them. As said, all possible sonnets of that length, including my favorite from Shakespeare's, "Why is my verse so barren of new pride", would be there. So, has the algorithm computed Shakespeare's sonnet? The answer is: no. The algorithm has computed the whole list of sequences made of English words from a list of all English words. Which is a perfectly possible computational task, a rather simple one too, even is a very long task indeed. What the algorithm can output is a very long list. But in no way it can output a list of all the sequences which have meaning. Because it has no idea of what meaning is. Let's make an example. The first verse is: "Why is my verse so barren of new pride" Now, a possible similar sequence would be: "Why is one table more bestead than its fear" which has no detectable meaning, although it is syntactically correct. In particular, in the original verse, the use of the adjective "barren" for the verse is specially beautiful and evokes many complex and meaningful connotations, exactly because it is not an adjective that we would normally use in that context. We, as conscious observers, can easily understand that. Our conscious representation, evoked by the words, is immediately rich and deep. So, how could an algorithm understand that the original verse is meaningful and beautiful, while the second sequence is simply meaningless? How complex should an algorithm be to recognize all possible contexts of that kind? gpuccio
keith s: Sorry to intrude, but are you saying that we can output a string by a simple program if we already know it? :) Yes, that's what you are saying. Maybe my post #863 about not using the specific bits of a sequence to specify it could have some relationship to this discussion? I am not an expert in Kolmogorov complexity, but is it possible that the real utility of it is to know if we can compute a string which we don't know in advance by an algorithm simpler than the string itself, and not if we can output a string which is already in the algorithm? The second fact (your example) seems really trivial, and it does not seem to have anything to do with computing the string. Now, even if the term "Kolmogorov complexity" is perhaps describing both cases, I think that we are dealing with two different concepts here. The interesting point is: how big must an algorithm be to compute a Shakespeare sonnet (or something equivalent) without previously knowing it? Maybe fifthmonarchyman's point is that such an algorithm would have infinite complexity. I would simply say that such an algorithm cannot exist, and that a conscious agent who understands meaning and has complex conscious representations is necessary to do that. Again, I apologize for the intrusion. gpuccio
fifthmonarchyman:
since a Shakespearean sonnet is by definition a sonnet composed by Shakespeare the “overhead required” must specify all that is Shakespeare. Clearly that is a lot of information Since Shakespeare is a non computable function the value of C in this case is infinite this is not hard
True, it's not hard, but you're having a lot of trouble with it. C is not infinite. It isn't even large. The Kolmogorov complexity of a given string depends on the size of the program needed to produce that same given string on a specified machine. It has nothing to do with programming a Shakespeare emulator. Any specified finite string can be produced by a program that looks something like the following.
string = "<insert specified string here>"; output(string);
The program is longer than the string, obviously, but not much longer. C is a small number. keith s
DNA_Jock: My specifications. 3 for the Shakespeare sonnet: 1) A sequence of 600 characters made of English words 2) A sequence of 600 characters which has good meaning in English 3) A sequence of 600 characters which has good meaning in any known language and one for a generic enzyme: 4) Any protein which can accelerate reaction A at least x times (or more). Please, tell me where in any of those definitions am I mentioning a specific sequence with specific bits, or am I using, or showing that I am aware of, the specific bits (characters or AAs) of an observed sequence. Nowhere. Instead, let's look at your attempt: “After decryption with algorithm X, the string becomes a passage in “good English” that describes [insert arbitrarily narrow specification of the passage’s content here]” What is "algorithm X"? Did it exist before your observation of the sequence? Was it built using the specific bits of the sequence? If the answer is that it did not exist before the observation of the sequence abd that it can work only on that particular random sequence because it has been engineered for that sequence and for its specific bits after having observed them, then you are really committing a perfect TSS fallacy. Not I. If, on the other hand, you give a generic algorithm which can transform any random sequence in an English phrase, then your specification has no functional complexity, because any random sequence is in the target. Please, show your "algorithm X", ans we will see. You have met my challenge? Absolutely not! And you say: "If I am not allowed, then the analogy fails: the protein-specifier IS using the observed functionality to come up with the specification." You said it! The protein-specifier is using THE OBSERVED FUNCTIONALITY to come up with the specification. Not the AAs in the sequence. I have never said that you cannot use any functionality observed in my random string. You can. Absolutely. And again, mine is not an "analogy". It is an example of how the dFSCI procedure works and correctly detects design in language sequences. Do you admit that it works, or are you trying to meet my challenge to show how the application of the procedure is an example of TSS fallacy? gpuccio
me_Thinks asks I don’t know why you bring in Shakespeare here, I say, Shakespeare is important because we are using Shakespearean sonnets as a typical test case to illustrate what Irreducible complexity is and how the game works. If you need me to I can explain again how all this applies equally well to any IC configuration even those with no obvious designer like circles. let me know peace fifthmonarchyman
Zachriel:
Gary S. Gaulin: The contradiction is that new phyla and species suddenly emerged instead of the usual “undergone modification” that came afterward.
Not sure what you mean. Perhaps you could provide a specific example.
Instead of the discovery of (what later became known as) the Cambrian Explosion having been predicted by Charles Darwin and a relief that a major prediction of the theory was finally shown to be true, the discovery came as a surprise to scientists who were certainly not expecting exponential increase like this: https://sites.google.com/site/intelligenceprograms/Home/JoeMeertTimeline.jpg Not having beforehand predicted a sudden proliferation of multicellular intelligence is one of the very serious weaknesses of Darwinian theory. Gary S. Gaulin
Since Shakespeare is a non computable function the value of C in this case is infinite this is not hard
I don't know why you bring in Shakespeare here, unless you have to know who the designer was and his capabilities before you can decipher what ever string is given to you. Me_Think
Zac says In this case, it’s the length of the string plus whatever overhead the language requires to define a literal. I say, since a Shakespearean sonnet is by definition a sonnet composed by Shakespeare the "overhead required" must specify all that is Shakespeare. Clearly that is a lot of information Since Shakespeare is a non computable function the value of C in this case is infinite this is not hard peace fifthmonarchyman
* There are probably even shorter descriptions, as the English language is generally compressible. Zachriel
fifthmonarchyman: Yes but is there any reason it can’t be more than one bit? There are an infinite number of possible algorithms of all possible lengths, however, Kolmogorov Complexity is defined as the shortest description. In this case, it's the length of the string plus whatever overhead the language requires to define a literal. Zachriel
Zac says As low as a single bit, the bit determining whether what follows is to be read as a literal. I say, Yes but is there any reason it can't be more than one bit? Is there an upper limit to it's size? How would you determine what the upper limit was? Peace fifthmonarchyman
Gary S. Gaulin: The contradiction is that new phyla and species suddenly emerged instead of the usual “undergone modification” that came afterward. Not sure what you mean. Perhaps you could provide a specific example. Zachriel
fifthmonarchman: Can you speed the process up a little bit by giving me the value of C? As low as a single bit, the bit determining whether what follows is to be read as a literal. Zachriel
Zachriel:
Gary S. Gaulin: I’m recalling contradictions such as Charles Darwin’s predicting the opposite of “punctuated eqilibrium” would be discovered in the fossil evidence. Actually, Darwin predicted just the opposite. the periods during which species have undergone modification, though long as measured in years, have probably been short in comparison with the periods during which they retain the same form. — Darwin, “Origin of Species”
I have to agree that the Cambrian Explosion can be said to be one of the "periods during which species have undergone modification". The contradiction is that new phyla and species suddenly emerged instead of the usual "undergone modification" that came afterward. There was no mention of that, due to Darwinian theory being unable to predict these events (like the theory that I defend easily did). Gary S. Gaulin
At first glance it appears that the value of C is not computable. So the most you can say is that an algorithm can not determine if the total Kolmogorov Complexity in an IC string is infinite. With that I completely agree in fact that is my point. I'll study some more peace fifthmonarchyman
Thanks Zac, I'll study this. Can you speed the process up a little bit by giving me the value of C? peace fifthmonarchyman
fifthmonarchyman: According to the standard formal definition it is infinite if the string is IC. K(X|l(x)) is less than or equal to l(x) + C http://www.cs.princeton.edu/courses/archive/fall11/cos597D/L10.pdf Zachriel
#846 addendum FTR: Post #796 contains the post #s associated with the failed discussion. Dionisio
zac says How many more terms do you intend to redefine? I say. I have not redefined any terms. I have used a rough and ready everyday dictionary definition of "computable" to save time and facilitate discussion on an informal internet blog. If I was to present my ideas in a formal paper I would be sure to specify that I'm not using the less restrictive mathematical definition of that term at the outset just as I have done repeatedly during this very thread. As far as Kolmogorov Complexity goes I'm using the standard formal definition. you say, The Kolmogorov Complexity of a finite string is not infinite. I say, According to the standard formal definition it is infinite if the string is IC. This is just not a debatable point. I'm sorry that you have no room in your Worldview for these very simple concepts but that is your problem not mine. In the mean time lets do science. If you think the Kolmogorov Complexity of an IC object is finite prove it write an algorithm that will fool the observer. peace fifthmonarchyman
fifthmonarchyman: since the computability resources needed to specify an IC object is infinite the Kolmogorov complexity of said object is infinite by definition. The Kolmogorov Complexity of a finite string is not infinite. How many more terms do you intend to redefine? Zachriel
Gary S. Gaulin @819 I don't have time to squander it on senseless discussions with folks who get upset when someone asks them simple questions. Why did you turn to name calling and personal attacks, as you did in this thread and in the 'third way' thread? Can't you just stick to the discussed subject? Is it because you don't like discussions outside your comfort zone? One who has strong arguments can be magnanimous to others. But if we lack strong arguments, we should humbly admit it. Or ask for additional clarification if the questions are not understood well. The only positive thing out of this could be that the discerning onlookers/lurkers can read what was written and arrive at their own conclusions. I wish the best to you. :) Dionisio
zac says In any case, that supports the fact that the Kolmogorov Complexity of a finite string can’t be infinite as claimed. I say, You are missing the point. In an irreducibly complex string an algorithm will never be able to produce those finial few bits. from here http://en.wikipedia.org/wiki/Kolmogorov_complexity Kolmogorov_complexity of an object, such as a piece of text, is a measure of the computability resources needed to specify the object. since the computability resources needed to specify an IC object is infinite the Kolmogorov complexity of said object is infinite by definition. That is what we mean by irreducibly complex me_thinks said. Don’t bestow incredible powers of detecting double meaning to Kolmogorov Complexity I say, I am simply using the standard definition.Kolmogorov Complexity does not detect anything it measures something. "The computability resources needed to specify an object" Peace fifthmonarchyman
The claim wasn't wrt a string Joe
Wikipedia: It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. You do need to call the identity function or equivalent. (In a reductive language, that could be the string itself.) In any case, that supports the fact that the Kolmogorov Complexity of a finite string can't be infinite as claimed. Zachriel
Zachriel, caught again:
Kolmogorov Complexity can’t be any greater than the length of the string.
Wikipedia on Kolmogorov Complexity: It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Joe
fifthmonarchyman @ 839,
Zac: Kolmogorov Complexity can’t be any greater than the length of the string. Shakespeare’s sonnets are finite in length (14 iambic pentameter lines) 5th : You are assuming no hidden or double meaning in the string.
Don't bestow incredible powers of detecting double meaning to Kolmogorov Complexity Me_Think
fifthmonarchyman: You are assuming no hidden or double meaning in the string. Kolmogorov Complexity can’t be any greater than the length of the string. If you mean something else, then you have to define it explicitly. fifthmonarchyman: Than there is in the sum of the information of each of the characters in the string taken individually. There's that word "information" again. If you are making a qualitative claim, then sure, there is a lot to interpreting one of Shakespeare's sonnets. But if you are making a quantitative claim, then you have to be much more precise in your definitions. As for infinity, anything can have infinite (or at least vast) meaning when attached to the real world. If one says "warm", it may evoke any manner of feelings, experiences, or memories. There is a one-to-one mapping between the lotus blossom and the universe (assuming each are a continuum). So contemplate the lotus blossom. Zachriel
Zac says Kolmogorov Complexity can’t be any greater than the length of the string. Shakespeare’s sonnets are finite in length (14 iambic pentameter lines) I say, You are assuming no hidden or double meaning in the string. There is a more information in the phrase "Me thinks it's a Weasel" Than there is in the sum of the information of each of the characters in the string taken individually. That is what we mean by Irreducible Complexity peace fifthmonarchyman
fifthmonarchyman: As I demonstrated earlier there is infinite Kolmogorov Complexity in any Irreducibly Complex configuration Kolmogorov Complexity can't be any greater than the length of the string. Shakespeare's sonnets are finite in length (14 iambic pentameter lines). Zachriel
ZAc says You’re saying there is infinite information in a sonnet? I say. As I demonstrated earlier there is infinite Kolmogorov Complexity in any Irreducibly Complex configuration That implication follows necessarily from the fact that such things are not computable* *Fine print: not able to be produced by a finite Turing Machine in a finite amount of time Peace Peace fifthmonarchyman
Unguided evolution cannot account for any flagellum. So that would still be a problem Joe
fifthmonarchy: That depends on the time inclination and resources of the observer. You're saying there is infinite information in a sonnet? Gary S. Gaulin: I’m recalling contradictions such as Charles Darwin’s predicting the opposite of “punctuated eqilibrium” would be discovered in the fossil evidence. Actually, Darwin predicted just the opposite. the periods during which species have undergone modification, though long as measured in years, have probably been short in comparison with the periods during which they retain the same form. — Darwin, "Origin of Species" Mung: Q1: What are the possible values returned by that function? Q2: Who or what chooses those values? We answered this above. The function, RandomLetter, returns a random letter (or space). It’s part of Dawkins’ original Weasel algorithm, which was what you requested. Of course, you can change this parameter if you want. Weasel is just a simple instance of a larger class of evolutionary algorithms. Zachriel
fifthmonarchyman: Perhaps I need to remind everyone of the definition of IC that I’m using. In a system composed of connected “mechanisms” (nodes containing information and causally influencing other nodes), the information among them is said to be integrated if and to the extent that there is a greater amount of information in the repertoire of a whole system regarding its previous state than there is in the sum of the all the mechanisms’ considered individually. Gee whiz. When you redefine well-established terms, it just leads to confusion. You should either find the correct term, or coin a new one. What you are describing is called synergy or emergence. Irreducible means you can't remove any of its parts and still have the same thing. http://www.youtube.com/watch?v=Q_UsmvtyxEI fifthmonarchyman: The greater information in the repertoire of a whole system is the fact that these and only these sonnets are Shakespearean. Huh? They're Shakespearean by definition. You haven't actually made a distinction. fifthmonarchyman: The greater information in the repertoire of a whole system is the fact that these and only these pieces constitute a working BF. So? And? Turns out that there are more than one type of bacterial flagellum, and the parts vary also. Zachriel
Mung: Is RandomLetter a function? What are the possible values returned by that function? Who or what chooses those values? Zachriel: Yes, it returns a random letter (or space). It’s part of the Dawkins’ original algorithm. Of course, you can change this if you want. It’s called an instance of a larger class. RandomLetter is a function. It returns a random letter or a space. Q1: What are the possible values returned by that function? Q2: Who or what chooses those values? Mung
Me_thinks lets keep our eye on the prize here shall we Remember that the key is where the meat/magic is. If an Algorithm is supplied with the Key producing a string that will fool the observer is easy. With no key the program will never fool the observer. That is my hypothesis. The question of whether or not the observer can intuit the key given feedback while interesting is irreverent to my argument because Algorithms can't intuit anything by definition, peace fifthmonarchyman
Me_think asks If your ‘game’ requires the designer to send feedback on whether something is right or wrong with every guess, how can you claim that you can Inuit specification/key ? I say, the designer does not tell me the key he just tells me whether my Idea of what the key is is correct. Imagine a small child asking his parent if a triangle is a circle and his parent pointing to another shape and saying "no this is a circle" . At some point the child will intuit what it takes to be a circle. That is all the feedback the observer gets from the designer it assumes that the key ("Ideal" circles) exists Now imagine a robot producing a random shape and a quality control agent throwing out every thing that is not a circle That is the feedback the programer gets it's exactly the same feedback the observer gets except it does not assume the key. That is the only difference you say, If those could be intuited, we wouldn’t need networked computers and cryptographers to break coded messages. I say, you are confusing intuition with mind reading these are not remotely the same thing peace fifthmonarchyman
fifthmonarchyman @ 798
Now the problem is that my game right now is in the form of a excel sheet so I will need to send that to you in order for you plug your strings in.
You already have the strings in this thread. Just copy and use in your excel sheet. Get feedback from it. If your 'game' requires the designer to send feedback on whether something is right or wrong with every guess, how can you claim that you can Inuit specification/key ? If those could be intuited, we wouldn't need networked computers and cryptographers to break coded messages. Me_Think
Or in other words evolutionary biology would gloss over another failed prediction, which is in this case more the fault of those who used the theory to predict something that the theory was not actually able to predict. The same is true of forming conclusions related to whether intelligence guided our genetic level development or it was unguided. Darwinian theory is simply not for explaining how intelligence (at any level) works. Future evolutionary biologists can easily enough say that I was right all along, and it's not the fault of the theory that some went overboard with it. So no matter how well you show that an idea some have is false Darwinian (evolutionary) theory would go on. Evolutionary Creationism also exists to even help cover the theory leading to a "God did it" answer, to cover all that only the ID model is for demonstrating. Gary S. Gaulin
fifthmonarchyman:
By Darwinian evolution I mean the idea that all of biology can be explained by RM/NS plus whatever. That Idea would be falsified
I'm recalling contradictions such as Charles Darwin's predicting the opposite of "punctuated eqilibrium" would be discovered in the fossil evidence. It's easy enough to keep the theory going just by adding a new phrase to its vocabulary, which in time makes it seem like that's what the theory all along predicted. Gary S. Gaulin
GSG said, Darwinian theory is too much of a generalization for it to thrown out by just one more thing that it could not predict I say, By Darwinian evolution I mean the idea that all of biology can be explained by RM/NS plus whatever. That Idea would be falsified peace fifthmonarchyman
zac says, How long is this representation? How many points of comparison are there? I say, That depends on the time inclination and resources of the observer. There is no upper limit. peace fifthmonarchyman
fifthmonarchyman:
If IC really exists then Darwinism is mathematically false.
I do not agree. Darwinian theory is too much of a generalization for it to thrown out by just one more thing that it could not predict, having to be added into it (at least another buzz-word) for the logical construct to remain scientifically coherent enough to keep the Darwinian empire going. Gary S. Gaulin
Perhaps I need to remind everyone of the definition of IC that I'm using. quote In a system composed of connected "mechanisms" (nodes containing information and causally influencing other nodes), the information among them is said to be integrated if and to the extent that there is a greater amount of information in the repertoire of a whole system regarding its previous state than there is in the sum of the all the mechanisms' considered individually. In this way, integrated information does not increase by simply adding more mechanisms to a system if the mechanisms are independent of each other. end quote: First imagine a set of all and only Shakespearean Sonnets. The greater information in the repertoire of a whole system is the fact that these and only these sonnets are Shakespearean. Next imagine a set containing the all and only individual components of a circle (The circumference, the diameter, etc), The greater information in the repertoire of a whole system is the fact that these and only these parts constitute a circle. Next imagine a set containing all and only individual components of a bacterial flagellum. The greater information in the repertoire of a whole system is the fact that these and only these pieces constitute a working BF. I hope you see obvious equivalence? peace fifthmonarchyman
zac says, There are an infinity of algorithms that can create any finite string. However, they may be more complex than the string itself. I say. That doesn't help you. let's say that algorithm 1 "Shakespeare" is less complex than the string and algorithm 2 "Marlowe" is more complex than the string. You still have two algorithms that can produce the same string directly contrary to the stated definition of IC. You say, There are very simple evolutionary pathways to irreducible structures. The simplest way is to knock out a scaffolding. I say, I know that is the talking point but as you have just conclusively demonstrated if by evolutionary we mean algorithmic then this is impossible!!!! On the other hand if you already had the "key" for an IC structure and wanted to produce an object that approximated it knocking out the scaffolding would be a good approach. peace fifthmonarchyman
fifthmonarchyman: There is the proof fifthmonarchyman: mathematically proven that Algorithms like RM/NS can not produce Irreducibly Complex configurations!! Um, no. There are an infinity of algorithms that can create any finite string. However, they may be more complex than the string itself. fifthmonarchyman: Now we are back to fooling the observer. You are claiming that you can fake IC with out actually producing it. No, we're claiming that we can make signatures that are very similar to one another. Think about what is meant by a signature. It's a representation of sorts of the patterns in the original string. How long is this representation? How many points of comparison are there? fifthmonarchyman: If IC really exists then Darwinism is mathematically false. There are very simple evolutionary pathways to irreducible structures. The simplest way is to knock out a scaffolding. Zachriel
ZAC says, No, there are an infinite number of algorithms that can produce any given finite string. I say, There is the proof folks!!!!! How flipping cool is that Zac says, The algorithms could be made close enough that the signatures would be indistinguishable, even if the outputs were superficially different. I say, Now we are back to fooling the observer. You are claiming that you can fake IC with out actually producing it. That is what the Game is all about. However we have now established that. If IC really exists then Darwinism is mathematically false. back to studying peace fifthmonarchyman
ZAc check this out, http://research.microsoft.com/pubs/70544/tr-2008-20.pdf I'm just now starting to research this, FUN FUN I don't know how it will turn out stay tuned fifthmonarchyman
Dionisio:
BTW, I’m a student, not a scientist. My scientific credibility is none, zero, nada, null. That’s why I ask simple questions in order to learn.
You are not asking learning questions like "Can an electronic sensor bit be connected to any memory address bit of an electronic RAM or do they have to be in some order?" you're just asking snotty questions that expect me to dedicate the next four or more years to tutoring you for free, so that you can teach me a punishing lesson about some imaginary error in my ways. Dionisio:
But apparently some folks don’t like my questions. Are my simple questions really that inconvenient?
Yes it is very inconvenient for me to have to pamper to your bratty demands. But since that's what you asked for I'll first ask the appropriate teacherly question normally used for getting to better know each other: What do you Dionisio want to be when you grow up? Gary S. Gaulin
fifthmonarchyman: When you do so you have created a new algorithm!!!! That's correct. While the outputs may appear very different, they would have similar signatures. fifthmonarchyman: Now you have a set containing all the sonnets produced by algorithm 1 and all the sonnets produced by algorithm 2 That's correct. fifthmonarchyman: Marlowe can’t produce Shakespearean sonnets by definition. Only Shakespeare can produce Shakespearean sonnets!!! The algorithms could be made close enough that the signatures would be indistinguishable, even if the outputs were superficially different. fifthmonarchyman: Now the question I have is there an output that can only be produced by one algorithm? No, there are an infinite number of algorithms that can produce any given finite string. Zachriel
Wow Zac This discussion has been worth the trouble for just your latest question. I think you have stumbled on a better way of expressing the point I've been trying to make. Thank you so much zac says. We could modify the algorithm ever so slightly, so that its output would closely resemble that of the other. I say, When you do so you have created a new algorithm!!!! Now you have a set containing all the sonnets produced by algorithm 1 and all the sonnets produced by algorithm 2 That directly violates my stated definition of IC!!!!! Lets call algorithm 1 Shakespeare and algorithm 2 Marlowe Marlowe can't produce Shakespearean sonnets by definition. Only Shakespeare can produce Shakespearean sonnets!!! Now the question I have is there an output that can only be produced by one algorithm? If there is not then ZAC has just mathematically proven that Algorithms like RM/NS can not produce Irreducibly Complex configurations!! These are strange and exciting times indeed. peace fifthmonarchyman
gpuccio @797
In my post #691 I have proposed a challemge, whose aim is to show that dFSCI is not an example of the TSS fallacy. IOWs, that invoking the TSS fallacy for the dFSCI procedure is a fallacy. The challenge was aimed at DNA_Jock, but it seems that he has not taken it seriously. OK.
As I explained to you at 720 and at 745, this restriction:
You are allowed to use the text, but not the specific bits of it, in your specification.
is incoherent, since the text comprises the specific characters. Either your challenge is trivial (and I have met it), or your analogy fails. My point has always been that your attempts to define a target for a protein exhibit the TSS fallacy. Your analogizing to text strings is hopelessly flawed. DNA_Jock
gpuccio: how can the environment give any hint about how to build a protein of thousands of aminoacids that is able to use a proton gradient to synthesize a much more “feasible” energy tool like ATP? The usual. Each step in the process provides an advantage to the organism. ATP synthase appears to be an association of two subunits, each of which is similar to other protein domains in the cell. Zachriel
Zachriel: It's not a case that you try to move the discussion to metabolism, and more general functional issues. You see, the real problem that you try to evade is: how can the environment give any hint about how to build a protein of thousands of aminoacids that is able to use a proton gradient to synthesize a much more "feasible" energy tool like ATP? gpuccio
fifthmonarchyman: If such a set could possibly exist my hypotheses would be that such that no other algorithm could reproduce it sufficiently enough to fool an observer infallibly. We could modify the algorithm ever so slightly, so that its output would closely resemble that of the other. fifthmonarchyman: Distinctive patterns are precisely what distinguishes Shakespearean sonnets from other data sets. So? What does it show? Zachriel
Hey gpuccio I think you should consider starting a new thread. By now I think you understand that your challenge is the mirror image of mine. As I said before It's just these sorts of strange equivalences that tell me we are on to something here. I think that this interesting topic needs to be seen by as many people as possible on both sides and this thread is so long and I'm afraid some are missing it. peace fifthmonarchyman
zac says If we took the output of a computer algorithm that writes sonnets, then it would be irreducibly complex too because “it contains all the sonnets and nothing else”. I say, I suppose so I had not thought of that. If such a set could possibly exist my hypotheses would be that such that no other algorithm could reproduce it sufficiently enough to fool an observer infallibly. Of course The no cheating clause still applies. That is something fun to think about. Is there any output that can only be produced by one algorithm? Is there a way to know this? you say, So, as we said, it’s just a measure of the personal distinctive patterns. That’s standard forensics in the art world. Not sure what that proves. I say, ID is simply forensics on a grander scale ;-) Distinctive patterns are precisely what distinguishes Shakespearean sonnets from other data sets. In other words they are the Key/specification/platonic form. They tell you what is unique about a IC artifact. For example Distinctive patterns tell you that you are looking at circle rather than a oval. peace fifthmonarchyman
fifthmonarchyman: The set of Shakespearean sonnets is Irreducibly Complex by definition it contains all Shakespearean sonnets and nothing else That's an odd definition of irreducibly complex. If we took the output of a computer algorithm that writes sonnets, then it would be irreducibly complex too because "it contains all the sonnets and nothing else". fifthmonarchyman: exactly, now you are getting it. So, as we said, it's just a measure of the personal distinctive patterns. That's standard forensics in the art world. Not sure what that proves. Zachriel
zac says, That doesn’t imply anything about irreducible complexity, just personality. I say, I don't establish that the set of Shakespearean sonnets is Irreducibly Complex by learning form. The set of Shakespearean sonnets is Irreducibly Complex by definition it contains all Shakespearean sonnets and nothing else You say, A sonnet by Marlowe will have a different pattern. A sonnet by someone modern will probably exhibit even more differences. I say, exactly, now you are getting it. peace fifthmonarchyman
fifthmonarchman: Ive done this with several different strings each one is slightly different Okay. You seem to just be looking for distinctive patterns in Shakespeare, like distinctive brush strokes in a Van Gogh. That doesn't imply anything about irreducible complexity, just personality. A sonnet by Marlowe will have a different pattern. A sonnet by someone modern will probably exhibit even more differences. Zachriel
zac says, What patterns do you find? Then show us how you improve it. I say, Ive done this with several different strings each one is slightly different For example I might see that the graph from the real string is more spiky than the false string (level one) Then once I reproduce that feature I might see that the real string turns up slightly every ten spots or so(level two). etc etc Slowly but surely move up the Y-axes as I learn the form/key/specification of the real string peace fifthmonarchyman
fifthmonarchyman: look at the patterns in the entire string and reproduce them. What patterns do you find? Then show us how you improve it. Zachriel
zac said, Shakespeare only wrote 154 sonnets, so it would be hard to fool anyone familiar with Shakespeare. I say, Not especially hard given that the observer is only looking at a numerical string. He has no easy way of knowing what the string is representing. In other words a Shakespeare expert is at no special advantage you say, Explain how they did it. I say, look at the patterns in the entire string and reproduce them. That gets you to level one on the y-axes just were the observer is at that time. Continue to improve your string as you and the observer learn the key at higher levels. peace fifthmonarchyman
Zac said More important, the failure of your game doesn’t show that algorithms are not capable of generating Shakespearean poetry, only that your specific implementation can’t. I say, For the sake of any possible lurkers here it's important at this point to make clear that my hypothesis is about much more than Shakespearean poetry. I'm claiming that an algorithm can not reproduce any irreducibly complex configuration sufficiently enough to infallibly fool an observer. That is any algorithm including those combining Random Mutation with Natural Selection. My game is the method I use to test my hypothesis. I hope I have explained the details and stipulations of this method sufficiently in this thread any questions? peace fifthmonarchyman
fifthmonarchyman: The algorithm is not being asked to reconstruct every possible sonnet it’s being asked to construct just one that will fool an observer for that knowing the form is more than sufficient Shakespeare only wrote 154 sonnets, so it would be hard to fool anyone familiar with Shakespeare. fifthmonarchyman: I already have and they can easily. Explain how they did it. fifthmonarchyman: I never claimed my game proves that it is impossible for algorithms to produce Shakespearean sonnets that’s why my approach is scientific and not philosophical. I have presented a scientific hypothesis we can only falsify it we can’t prove it. What does your game falsify? Or support? You might want to restate your hypothesis. Zachriel
Zac says, You can specify the form, but not reconstruct every possible sonnet. I say, The algorithm is not being asked to reconstruct every possible sonnet it's being asked to construct just one that will fool an observer for that knowing the form is more than sufficient you say, so we would suggest you give the sequence of numbers to some people and see if they can generate Shakespearean sonnets based on the patterns they see in the numbers. I say, I already have and they can easily. Here is where the levels of the y-axes come in. learning the key/specification of Shakespearean sonnets is not an instantaneous process. The first level of the key is probably something like structure and grammar the other layers in the y-axes form an objective nested hierarchy upward from there. A cool thing about the game is that both the observer and the person generating Shakespearean sonnets are discovering the levels of the form at the very same time. you say Suppose you might take every Shakespearean sonnet and homogenize them to extract the basic pattern. Not sure how useful that would be. I say. That is because homogenization is what algorithms can do. Humans on the other hand discover platonic forms it is a completely different process. homogenization is bottom up discovery is top down You say, What key? You mean what Platonic form constitutes a Shakespearean sonnet? I say, The "Platonic form" the "specification" the "nonlossy data compression" the "key" All of these terms are synonymous. That is the piece of information that must be programed into the algorithm at the very beginning. you say, Instinctively, your position appears to be mush. I say, It's possible even probable that I am doing a poor job with explanation but I assure you that my position is solid and mathematically sound you say, More important, the failure of your game doesn’t show that algorithms are not capable of generating Shakespearean poetry, only that your specific implementation can’t. I say, I completely agree. I never claimed my game proves that it is impossible for algorithms to produce Shakespearean sonnets that's why my approach is scientific and not philosophical. I have presented a scientific hypothesis we can only falsify it we can't prove it. The power of the game is the personal calculative revelation that every step you take toward Shakespeare requires exponentially more complex algorithms. peace fifthmonarchyman
gpuccio: So, environment transfers information to the genome about how to use a proton gradient to build ATP? By assembling thousands of specific aminoacids? Sorry. We had thought we were discussing how information is transferred from the environment to the genome. Optimization of ATP pathways is an example of that process. Metabolism is very ancient, and its evolution is still enigmatic. But like most of evolution, it's important to recognize that large complexes evolved in stages. That means a complete answer won't be found in a single place or event. There is evidence of how the eukaryote mechanism evolved when mitochondria invaded the cell, which was initially explored by Lynn Margulis as a fundamental of her endosymbiotic theory. The original ATP complex probably evolved in anaerobic conditions. http://www.ncbi.nlm.nih.gov/books/NBK26849/ Zachriel
Mung: https://uncommondesc.wpengine.com.....ent-531960 Mung: To post the entire code consider: https://gist.github.com/ It's easily available with the download. In any case, it's the algorithm that matters, not the specific implementation. Mung: I don’t care what you call them {parameters}, their choice is by design. Their value or range of values is by design. Sure, like Fgravity ? m1*m2/d^2. In order to better understand the relationship, we might create an algorithm, and test some examples with different masses, distances, and gravitational constants. In the past, people would use pencil and paper and little diagrams of cannons on mountain tops. Today we use computer simulations. http://www.dynamical-systems.org/threebody/ Mung: I infer that this iterates over each character in the string and that x defines the maximum length of the string. Is that correct? Who or what chooses the value of x? The length of "Methinks it is like a weasel". It's part of the Dawkins' original algorithm. Of course, you can change this if you want. It's called an instance of a larger class. Mung: Is RandomLetter a function? What are the possible values returned by that function? Who or what chooses those values? Yes, it returns a random letter (or space). It's part of the Dawkins' original algorithm. Of course, you can change this if you want. It's called an instance of a larger class. Mung: Iow, you have to decide whether to mutate a specific position. No. Mutation of any particular position is random. Mung: The decision to not allow whole or partial increases in the size (length) of “the genome” is also a design decision. It's part of the Dawkins' original algorithm. Of course, you can change this if you want. It's called an instance of a larger class. Mung: If you disagree I will write a program that varies all this and we can see which program reaches the target. Population genetics is a mature field. There have been many such simulations, and the mathematics has been worked out over generations. You asked for an implementation of Dawkins' Weasel. That's what you got. Your objections didn't require anything other than the algorithmic description. Mung: To do this Dawkins had to select a target phrase. You can use a random target, if you prefer. "Methinks it is like a weasel" is part of Dawkins' original algorithm. Of course, you can change this if you want. It's called an instance of a larger class. Mung: Then he chose to limit his mutations to only certain replacement characters. You can use a different character set, if you prefer. It's part of the Dawkins' original algorithm. Of course, you can change this if you want. It's called an instance of a larger class. Of course you realize that in genetics there are just twenty or so bases? In any case, the algorithm doesn't simulate biological evolution, but does show that evolutionary search is much faster than random search. Zachriel
fifthmonarchyman: Sure it does. The same way the nonlossy compression of one circle entails every other circle in the universe. You can specify the form, but not reconstruct every possible sonnet. fifthmonarchyman: If I fully and nonlossily understand what a Shakespearean sonnet is I can recognize one anywhere. So what you think is that we should be able to capture the essence of a Shakespearean sonnet, then replicate the form. Still not sure why putting it into numbers helps. In any case, you seem to think this distinguishes algorithms from people, so we would suggest you give the sequence of numbers to some people and see if they can generate Shakespearean sonnets based on the patterns they see in the numbers. fifthmonarchyman: With out the key/specification all the background knowledge in the universe is useless. You have to know what Platonic form constitutes a Shakespearean sonnet; three stanzas of four iambic pentameter lines with rhyme, ending with a rhyming couplet. That's along with spelling, grammar, some sort of unifying message, etc. Suppose you might take every Shakespearean sonnet and homogenize them to extract the basic pattern. Not sure how useful that would be. fifthmonarchyman: The algorithm has to have the key programed into it at the start. What key? You mean what Platonic form constitutes a Shakespearean sonnet? three stanzas of four iambic pentameter lines with rhyme, ending with a rhyming couplet. fifthmonarchyman: You instinctively know it to be true. Instinctively, your position appears to be mush. The real problem isn't the structure as found in a list of numbers, but the relationship between the writer and reader, which includes shared experiences that a simple algorithm could not easily encompass. More important, the failure of your game doesn't show that algorithms are not capable of generating Shakespearean poetry, only that your specific implementation can't. Zachriel
Me_Thinks, Ive been thinking about my response to your challenge and I believe I worded it unnecessarily harshly. What I should have said was I accept !!!!!! All I need you to do is feed your strings into my game and I believe with feed back I can identify which is Shakespeare sonnet and give the “specification/key” I used to make the identification. Now the problem is that my game right now is in the form of a excel sheet so I will need to send that to you in order for you plug your strings in. Is there a way I can contact you? peace fifthmonarchyman
To all who are interested: In my post #691 I have proposed a challemge, whose aim is to show that dFSCI is not an example of the TSS fallacy. IOWs, that invoking the TSS fallacy for the dFSCI procedure is a fallacy. The challenge was aimed at DNA_Jock, but it seems that he has not taken it seriously. OK. So, I propose it again here, for all who may be interested:
So, a challenge to you, in two parts. 1) Here is a 600 character sequence, which I have just generated by an online random character generator (no idea how good, truly random, it is, but I think it will do). So, here is the sequence: l.qvff..stscilrriegakbb oprzbdfbnguio.h odjjsvamrcxly mlbtihxqotillxqtifwfyalxc,vbjckobzdrjvyo.oo ,evitbhnwhyixjmyakripxjrylxcqebyeuprpipd,.yvtfbrl,qqqcuqqsmviuonqeyx eeyumkx, igzelxs hqpyriinyflyvpvblcrvbiljnk edhcnvycmikfwa,ghwuxspycpwn.mbqrcbcr w,iiqhwsd.. wcfn wuntehhj.y.sdweze.kjosyyobnsmryvw.xgyigvng nf cskcmguvl l d.eamqet.bgs,fyrcul.nq,xjexzhed.,zbigpdwssucer,ugavop.vowwz. cqmegaylpvj,khlfubz,ptt,wjbdgtuibuytprztqewhhadjhbu mssikwkqwqucxbzzqs kbjbnikehnviqdykgmjwyllhyasivg uexccpbcyowyv.vgladhihjnytzd ujnmoypvu,,blvymbxaxpx.jaoe,y.whwmib.nbfmrcsbpm,asyqgqdegs,fejv,jtu.cl i.grn qfsicb.w Now, I ask you to: 1a) Define any “definition-target” you like for that sequence. Please remember, you must not use the specific contingency in the sequence (the specific characters). 2a) Make it arbitrarily small, so that the result has extremely high functional complexity (1000 bits will do). OK? 2) Second part of the challenge. Maybe you are not happy with my sequence. Maybe it is one of the rare sequences for which it is difficult to paint a target and make it arbitrarily small. So, I will give you complete freedom in the second part of my challenge: Please, show me any sequence generated in a truly random way, for which you can paint a definition-target and make it arbitrarily small, so that it exhibits 1000 bits of functional information according to the defined function. OK? Good luck. A final disclosure: my sequence is really random (provided that the internet generator worked well). I did not design it in any way. I took the first one which came. I just decided the set of characters, including space, comma and period, and the length of the sequence (600 characters). No other intervention. So, if you conclude by my procedure that it is a negative, it will be a true negative. For a true positive, I maintain the Shakespeare sonnet. Or any post of sufficient length in this thread.
The challenge remains open. And equally open remains the other challenge: to exhibit one false positive to my dFSCI procedure by showing a 600 character long sequence, generated randomly, which has good meaning in English. EditMore Options gpuccio
FTR: here’s a summary of my (D) brief discussion with Gary S. Gaulin (GSG) in this thread moderated by gpuccio (G): D: 648 GSG: 668 D: 690 D: 693 G: 706 GSG: 732 D: 733 D: 734 D: 735 D: 736 GSG: 737 D: 738 D: 739 GSG: 740 GSG: 741 D: 742 D: 743 FMM: 744 D: 749 GSG: 772 D: 775 GSG: 776 D: 778 D: 783 D: 785 Then GSG decided to move the discussion to the ‘third way’ thread: https://uncommondesc.wpengine.com/evolution/a-third-way-of-evolution/#comment-532256 GSG: 686 D: 687 GSG: 688 GSG: 689 Then GSG switched back to this thread: GSG: 794 Note: FMM stands for fifthmonarchyman who volunteered his comments on the discussion. The onlookers/lurkers may read the referenced posts in the indicated sequence and arrive at their own conclusions. The referenced comments by gpuccio and fifthmonarchyman can provide a hint. BTW, I'm a student, not a scientist. My scientific credibility is none, zero, nada, null. That's why I ask simple questions in order to learn. But apparently some folks don't like my questions. Are my simple questions really that inconvenient? :) Dionisio
Zachriel: "You asked about the formation of the existing proton gradient used to build ATP." It does not become you to quote things imprecisely. What I asked: So, environment transfers information to the genome about how to use a proton gradient to build ATP? By assembling thousands of specific aminoacids? Emphasis added. I did not ask about "the formation of the existing proton gradient". Which would be an interesting issue too, because it also requires very complex enzymes. And again, there is nothing, in the paper you linked, about what I asked. gpuccio
Dionisio:
But now do you understand the problem that is being at the core of most serious biology-related discussions here and out there? Do you understand that’s the same problem that apparently has made a group of respected scientists to promote a third way of evolution, hoping to get somewhere?
I answered that in a few replies to the "A third way" thread: https://uncommondesc.wpengine.com/evolution/a-third-way-of-evolution/#comment-532256 To be truthful, the problem at the core of this most serious biology-related discussion are the know-it-alls constantly changing the subject away from what the premise of the "theory of intelligent design" even says. Instead of your setting a good example to follow by being as scientifically precise as science demands, you're teaching the opposite. Ignoring all that most matters in real (not pop) science has caused you to without knowing it knock yourself right out of the scientific arena. Nothing to even be a contender with. I can only use what little time I have to let you know what you're actually up against. How much more serious damage you do to yourself and those you represent by fighting it is all up to you. Gary S. Gaulin
Zachriel: https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/#comment-531960 Mung
zac said, Um, no. The compression of one sonnet does not entail every other sonnet. That makes no sense. I say, Sure it does. The same way the nonlossy compression of one circle entails every other circle in the universe. Welcome to world out side the cave ;-) If I fully and nonlossily understand what a Shakespearean sonnet is I can recognize one anywhere. The set of Shakespearean sonnets is irreducibly complex it contains all Shakespearean sonnets and nothing else. You say, Why do that when it’s the algorithm that would be of interest? I say, I said if I wanted to.In this game we have only one objective that is to fool the observer. you say, If so, then such an algorithm would include much of the same background knowledge as Shakespeare, including knowledge of rhyme and rhythm, grammar and voice. I say With out the key/specification all the background knowledge in the universe is useless. You have admitted as much. You have better resources at you fingertips than Shakespeare could dream of. The thing is that unlike me an algorithm can never discover the key and make use of background knowledge. The algorithm has to have the key programed into it at the start. That is why you are struggling so much with this. You instinctively know it to be true. There is no way out of the cave for an algorithm. you say, If you were to create an algorithm to produce Shakespearean sonnets, you would work with rhyme and rhythm. I say, There is the admission for all to see. With out the key an algorithm is forever stuck trying build the unknown impossible one step at a time from the bottom up. Quite a contrast to what a designer can easily do once he discovers the key. You say, In any case, there would no point in turning it into numbers. I say And there is the feigned obtuseness right on cue peace fifthmonarchyman
gpuccio: There is nothing in that paper about protein sequences and how to get to the right enzymes. You asked about the formation of the existing proton gradient used to build ATP. The paper supports that the current system came about through a process of optimization. As this optimization is a clear advantage to the cell, it answers your more general question as to how environmental feedback can bring about complex adaptation. Zachriel
DNA_Jock: I disagree, but we are just repeating ourselves here, which is something that I really dislike. I had thought of other examples, but I am tired of this discussion. You have not accepted my challenge, as far as I can see, but that is your privilege. As I am the host in this thread, I will happily leave to you the last word. gpuccio
Zachriel: "As for ATP synthesis, there is evidence the current system came about through a process of optimization. See Ebenhöh & Heinrich, Evolutionary optimization of metabolic pathways. Theoretical reconstruction of the stoichiometry of ATP and NADH producing systems, Bulletin of Mathematical Biology 2001." There is nothing in that paper about protein sequences and how to get to the right enzymes. gpuccio
Any algorithm that can produce a sonnet by Shakespeare will contain the sonnet, rendering the point moot. Joe
Is there a way to approve and submit the post without waiting the 5 minutes? Dionisio
Just noticed the nice editing feature now available for a few minutes after posting comments. Thanks! Dionisio
#778 error correction
But now do you understand the problem that is being at the core of...
Dionisio
fifthmonarchman: The key is the specification!! In this case a Shakespearean sonnet. Or a sonnet similar enough to have been produced by Shakespeare or other such poet. fifthmonarchman: The key is nothing less than a nonlossy compression of the original string Um, no. The compression of one sonnet does not entail every other sonnet. That makes no sense. fifthmonarchman: If I wanted to put up the effort I could simply patriot existing sonnets and fool everyone but the experts into thinking I had produced an original work of Shakespeare. Why do that when it's the algorithm that would be of interest? Try to break it down. Are you trying to show that an algorithm can't produce a Shakespearean sonnet? If so, then such an algorithm would include much of the same background knowledge as Shakespeare, including knowledge of rhyme and rhythm, grammar and voice. In any case, there would no point in turning it into numbers. fifthmonarchyman: There are only two ways to produce a Shakespearean sonnet 1) be Shakespeare 2) copy Shakespeare I remove the string from it’s context in order to take the second option off the table You don't need the string of numbers for any purpose that we can see. If you were to create an algorithm to produce Shakespearean sonnets, you would work with rhyme and rhythm. Zachriel
#778 error correction
Can your theory explains the origin and functioning of that process? How?
Can your theory explain the origin and functioning of that process? How?
Dionisio
I'm so glad we have finally gotten to this point in the conversation it's been a long trip. Thanks for traveling it with me I feel as though there may be questions/objections and I want to make sure I am being understood so let her rip. peace fifthmonarchyman
Lest you think the feedback that the game gives me is some how cheating. Keep in mind Zac the programer gets exactly the same feedback. That is what is happening every time his algorithm fails to pass the test. peace fifthmonarchyman
Me_Think says, Well since you insist, can you identify which is Shakespeare sonnet among the following five strings? and if possible can you intuit the “specification/key” ? I say, Now you are getting the idea but your challenge is missing something however. In order to accomplish what you ask I need to begin to Lossless-ly compress data in the sonnet. My game facilitates this by giving feed back. Each time I think I have discovered the key/specification the game tells me if I am correct or not. By reflecting on this I'm able to pretty quickly tell if I'm on the right track You will notice something superficially similar in this approach to RM/NS. There is however a profound difference between what I'm doing and what an algorithm does. The difference is that my data-compression is nonlossy!!!!!!!!! Something that is beyond the abilities of algorithmic processes to accomplish. This strange magic difference between me and an algorithm is exactly what has been mathematically proven in the paper I keep referencing. You need to actually experience this spooky reality to fully grasp the weight of the insight. I've got to get the app done. Stay tuned peace fifthmonarchyman
fifthmonarchyman @ 777,
I’m also saying that since I’m not an algorithm I can “intuit/discover” the specification/key No more long fruitless discussions on internet blogs.
Well since you insist, can you identify which is Shakespeare sonnet among the following five strings? and if possible can you intuit the "specification/key" ? 1.a ghehx pyc ajya fwn sos lyogaogt ghhs, Ags ajhxhuwxh aw fwnx uyox gw lyogaogt pha; I uwngs, wx ajwntja I uwngs, fwn sos hqdhhs Tjya byxxhg ahgshx wu y lwha'p shba: Ags ajhxhuwxh jyeh I pvhla og fwnx xhlwxa, Tjya fwn fwnxphvu, bhogt hqayga, chvv zotja pjwc Hwc uyx y zwshxg knovv swaj dwzh aww pjwxa, Slhyrogt wu cwxaj, cjya cwxaj og fwn swaj txwc. Tjop povhgdh uwx zf pog fwn sos ozlnah, Wjodj pjyvv bh zwpa zf tvwxf bhogt snzb; Fwx I ozlyox gwa bhynaf bhogt znah, Wjhg wajhxp cwnvs toeh vouh, ygs bxogt y awzb. Tjhxh voehp zwxh vouh og wgh wu fwnx uyox hfhp Tjyg bwaj fwnx lwhap dyg og lxyoph sheoph. 2. cp uypa yp ajwn pjyva cygh, pw uypa ajwn txwc'pa, Ig wgh wu ajogh, uxwz ajya cjodj ajwn shlyxahpa; Ags ajya uxhpj bvwws cjodj fwngtvf ajwn bhpawc'pa, Tjwn zyfpa dyvv ajogh cjhg ajwn uxwz fwnaj dwgehxahpa, Hhxhog voehp copswz, bhynaf, ygs ogdxhyph; Woajwna ajop uwvvf, yth, ygs dwvs shdyf: Iu yvv chxh zogshs pw, ajh aozhp pjwnvs dhyph Ags ajxhhpdwxh fhyx cwnvs zyrh ajh cwxvs ycyf. Lha ajwph cjwz gyanxh jyaj gwa zysh uwx pawxh, Hyxpj, uhyanxhvhpp, ygs xnsh, byxxhgvf lhxopj: Lwwr, cjwz pjh bhpa hgswc's, pjh tyeh ajhh zwxh; Wjodj bwngahwnp toua ajwn pjwnvspa og bwngaf djhxopj: Sjh dyxe's ajhh uwx jhx phyv, ygs zhyga ajhxhbf, Tjwn pjwnvspa lxoga zwxh, gwa vha ajya dwlf soh. 3. ep oa uwx uhyx aw cha y coswc'p hfh, Tjya ajwn dwgpnz'pa ajf phvu og pogtvh vouh? Aj! ou ajwn oppnhvhpp pjyva jyl aw soh, Tjh cwxvs covv cyov ajhh vorh y zyrhvhpp couh; Tjh cwxvs covv bh ajf coswc ygs paovv chhl Tjya ajwn gw uwxz wu ajhh jypa vhua bhjogs, Wjhg hehxf lxoeyah coswc chvv zyf rhhl Bf djovsxhg'p hfhp, jhx jnpbygs'p pjylh og zogs: Lwwr! cjya yg ngajxoua og ajh cwxvs swaj plhgs Sjouap bna jop lvydh, uwx paovv ajh cwxvs hgiwfp oa; Bna bhynaf'p cypah jyaj og ajh cwxvs yg hgs, Ags rhla ngnphs ajh nphx pw shpaxwfp oa. Nw vweh awcyxs wajhxp og ajya bwpwz poap Tjya wg jozphvu pndj znxs'xwnp pjyzh dwzzoap. 4. sw op oa gwa coaj zh yp coaj ajya Mnph, Saoxx's bf y lyogahs bhynaf aw jop ehxph, Wjw jhyehg oaphvu uwx wxgyzhga swaj nph Ags hehxf uyox coaj jop uyox swaj xhjhyxph, Myrogt y dwnlvhzhga wu lxwns dwzlyxh' Woaj png ygs zwwg, coaj hyxaj ygs phy'p xodj thzp, Woaj Alxov'p uoxpa-bwxg uvwchxp, ygs yvv ajogtp xyxh, Tjya jhyehg'p yox og ajop jnth xwgsnxh jhzp. O! vha zh, axnh og vweh, bna axnvf cxoah, Ags ajhg bhvoheh zh, zf vweh op yp uyox Ap ygf zwajhx'p djovs, ajwntj gwa pw bxotja Ap ajwph twvs dygsvhp uoq's og jhyehg'p yox: Lha ajhz pyf zwxh ajya vorh wu jhyxpyf chvv; I covv gwa lxyoph ajya lnxlwph gwa aw phvv. 5. gf tvypp pjyvv gwa lhxpnysh zh I yz wvs, Sw vwgt yp fwnaj ygs ajwn yxh wu wgh syah; Bna cjhg og ajhh aozh'p unxxwcp I bhjwvs, Tjhg vwwr I shyaj zf syfp pjwnvs hqloyah. Fwx yvv ajya bhynaf ajya swaj dwehx ajhh, Ip bna ajh phhzvf xyozhga wu zf jhyxa, Wjodj og ajf bxhypa swaj voeh, yp ajogh og zh: Hwc dyg I ajhg bh hvshx ajyg ajwn yxa? O! ajhxhuwxh vweh, bh wu ajfphvu pw cyxf Ap I, gwa uwx zfphvu, bna uwx ajhh covv; Bhyxogt ajf jhyxa, cjodj I covv rhhl pw djyxf Ap ahgshx gnxph jhx bybh uxwz uyxogt ovv. Pxhpnzh gwa wg aj;jhyxa cjhg zogh op pvyog, Tjwn tye'pa zh ajogh gwa aw toeh bydr ytyog. Me_Think
#776 Gary S. Gaulin
Terry Newton has experimented with the David Heiserman model (that forms the basis of the theory I explain). Maybe they can help you understand it? A Self-Programming Autonomous Robot http://tnewton.solarbotics.net/robot1.html
As far as I can tell, the folks you mentioned are quite intelligent guys. Aren't they? Your reference points to another case where intelligent agents have created a self-programming autonomous robot. But we are discussing the elaborate cellular / molecular choreographies seen in the biological systems. The questions is, how did we get all that? At least we know it was not the result of the prolific creativity of the folks you mentioned in your post. But then what was it? Can your theory answer that question? How? Has anyone who understands it as well as you do ever tried to answer that fundamental OOL question using the precepts of your theory? Who? Definitely not the two folks you just mentioned. Who else? Are you aware of anyone out there who understands your theory as well as you do and has used it to show how the amazing biological systems scientists are discovering today got here? That's what I've been trying to say all this time, unsuccessfully, because I'm not a good communicator. But now do you understand the problem that is being at the core of most serious biology-related discussions here and out there? Do you understand that's the same problem that apparently has made a group of respected scientists to promote a third way of evolution, hoping to get somewhere? The amazing process that goes from the zygote to birth is still poorly understood, but it makes the self-programming autonomous robot look like a Lego set for toddlers. :) Can your theory explains the origin and functioning of that process? How? Thank you. Dionisio
Me_Think says, If I understand you, you are saying you can come up something similar to Shakespeare sonnet which will fool people, but an algorithm that has access to all the data that you have, can’t? I say, That is close but not exactly what I'm saying. I'm saying that an algorithm can never fool an observer unless it has the specification/key. I'm also saying that since I'm not an algorithm I can "intuit/discover" the specification/key given enough time and feed back. you say, I don’t see why not. It would depend on how good the algorithm is. I say, That is what makes my game so cool IMHO. We have a disagreement and we can test it empirically. No more long fruitless discussions on internet blogs. let's do science peace fifthmonarchyman
Terry Newton has experimented with the David Heiserman model (that forms the basis of the theory I explain). Maybe they can help you understand it? A Self-Programming Autonomous Robot http://tnewton.solarbotics.net/robot1.html Gary S. Gaulin
#772 Gary S. Gaulin
The theory is in fact applicable (by someone who understands it as well as I do) on any biological system or subsystem, in order to model their built-in mechanisms and their origin.
Do you know anyone, besides yourself, who understands your theory as well as you do? Someone in this UD blog? In another blog? I would like to talk to that person. Thank you. Dionisio
Zachriel:
You can adjust the population size and the mutation rate. Notice that we track fitness setbacks and reversions. Here’s the central bit of code (in VBA):
To post the entire code consider: https://gist.github.com/ I'm loathe to download spreadsheets off the web. Don't take it personally. :) Zachriel: "Those are called parameters." I don't care what you call them, their choice is by design. Their value or range of values is by design. Take this code: For t = 1 To tx ... I infer that this iterates over each character in the string and that x defines the maximum length of the string. Is that correct? Who or what chooses the value of x? Is RandomLetter a function? What are the possible values returned by that function? Who or what chooses those values? Iow, you have to decide whether to mutate a specific position. You have to decide whether to apply that algorithm to each character in the string. Once having decided you must decide on the replacement character. These are design decisions. The decision to not allow whole or partial increases in the size (length) of "the genome" is also a design decision. If you disagree I will write a program that varies all this and we can see which program reaches the target. There's one avoidable fact here. The program Dawkins wrote found the target phrase. To do this Dawkins had to select a target phrase. Then he chose to limit his mutations to only certain replacement characters. No Chinese allowed! Do you want to maintain he didn't do that by design? Mung
fifthmonarchyman @ 771 If I understand you, you are saying you can come up something similar to Shakespeare sonnet which will fool people, but an algorithm that has access to all the data that you have, can't? I don't see why not. It would depend on how good the algorithm is. Me_Think
[1]Even though I look at it as being pompous to expect more detail than that from someone a billion dollar system only funds the destruction of, RE: [1] I don’t quite understand what the first statement in bold characters mean. Can you explain it another way? Thank you.
The Templeton Foundation alone has given many millions to anti-ID corporations that (as in the case of BioLogos) openly promotes Theistic evolution (theistic evolutionism or evolutionary creationism). Talking about antiquating Darwinian theory is now like blasphemy against both science and religion. It's now believed that all (except minor details) has already been revealed by Charles Darwin who must forever be glorified, while all those who try to go past that are punished or ignored. Before going on into pages of detail that will take a week to write: I found out the hard way that the academic system keeps all inside academia where people like me have no connection to anyway. The money pits claiming to be defending science became science stoppers that I have no choice but to defeat.
[2]I would love to be able to already have a model to show all that you asked for. RE: [2] Apparently I misunderstood your paper. I thought your 46-page PDF document is the description of your theory, that could be applied (by someone who understands it as well as you do) on any biological system or subsystem, in order to model their built-in mechanisms and their origin. Can you explain this?
The theory is in fact applicable (by someone who understands it as well as I do) on any biological system or subsystem, in order to model their built-in mechanisms and their origin. But that's something you have to do for yourself. The reward is that it will be another scientific first that at least those who understand it will appreciate.
[3]All that had to be wasted, to suit academic politics, not science. RE: [3] I don’t quite understand what that last statement means.
Academic politics is where scientific theories are not based upon whether they make scientific sense or not. In academic politics what matters is which religious agenda it serves, the academic institution it came from, amount invested in science media publicity, etc.. Theory that simply gets around in science (by researchers of all ages knowing about it) has to fall through the cracks then be stepped on, thus the opportunity was wasted for you to now have open source models that (where science came first) would now exist. The bright side is that the model and its theory of operation (Theory of Intelligent Design) was already fairly judged by a community well able to fairly judge it at Planet Source Code, where even after protest that trashed its 5 globe rating it none the less won their superior coding award. Along with science experts who would love to have seen me get the right help the theory is none the less still making progress in science, which will from the ground-up eventually defeat the more financially and religiously motivated academic politics from those who judge theories by their title. With all said: Millions and millions of dollars were spent, to try to stop me (too). Regardless of the details for having made such unscientific exceptions I'm disgusted by how much was wasted by a system that cannot even keep up with what's new in science anymore. Gary S. Gaulin
me think says, Are you saying if I give you a sonnet scrambled as a series of numbers you can convert that back to sonnet without you knowing the scrambling code? I say, no I'm saying that if you tell me that a Shakespearean sonnet will pass the test I can easily Google Shakespearean sonnet and plagiarize a string that will fool an observer into thinking I have actually done something worthwhile. If I wanted to put up the effort I could simply patriot existing sonnets and fool everyone but the experts into thinking I had produced an original work of Shakespeare. That is the power of the key/specification. It does in one fell swoop what lossy algorithms can never do. peace fifthmonarchyman
fifthmonarchyman @ 769 Are you saying if I give you a sonnet scrambled as a series of numbers you can convert that back to sonnet without you knowing the scrambling code? Me_Think
Zac said, That’s because without the key, there’s no rhyme or rhythm, no sense or vision. I say, buckle up Now we are getting some where The key is the specification!! In this case a Shakespearean sonnet. The key is nothing less than a nonlossy compression of the original string That piece of information is the one thing that is not available to algorithms like Darwinian evolution. The "key" for the bacterial flagellum is something like "rotary motor" The "key" to 3.14159265358979...... is the ratio of an ideal circle's circumference to its diameter. The "key" to triangle is a polygon with three edges and three vertices. Once you have "the key" an IC artifact is easy to reproduce. If you don't have the key reproducing IC stuff is impossible That is the point peace fifthmonarchyman
fifthmonarchyman: 1) observe a string 2) discover all the relevant patterns in the string 3) reproduce all the relevant patterns in the string closely enough to infallibly fool an observer How are you going to discover the rhyme and rhythm of a poem from encoded numbers? Shakespeare couldn't, so not sure what you're trying to show. See you went back to "infallible", even though you had modified it to "most". fifthmonarchyman: However we can learn from our mistakes that is why your algorithm can not fool us infallibly. Sure, that means people are fallible. Not only that, without a specified criteria, they will inevitably come to different conclusions about a string of numbers. fifthmonarchyman: let Joey study the string and he can reproduce the patterns of course he won’t be able to reproduce them well enough to convince an observer he is Shakespeare. That's nonsense. Shakespeare couldn't emulate the string from just the numbers either. That's because without the key, there's no rhyme or rhythm, no sense or vision. The way it would normally be done is to provide the Shakespeare emulator with similar background knowledge to Shakespeare. It's not as if Shakespeare worked in a vacuum. Then see if the emulator can create suitable poetry. It wouldn't be necessary to be infallible because it is quite possible to find objective criteria sufficient to the task. And if the Shakespeare emulator can't properly emulate Shakespeare, it doesn't demonstrate that Shakespeare is non-computable. It may mean that the particular emulator is not sufficient. Zachriel
Zac says If you have to show someone their error, then it’s not infallible by definition. I say, I know you are not this dense Zac No one is claiming that people are infallible. We make mistakes all the time we are not algorithms after all. However we can learn from our mistakes that is why your algorithm can not fool us infallibly. You say, Joey, you have to figure out how to rhyme with a number substitute and no decoder ring. I say, let Joey study the string and he can reproduce the patterns of course he won't be able to reproduce them well enough to convince an observer he is Shakespeare. peace fifthmonarchyman
Zac says, You might try to explain the algorithm in steps. I say, 1) observe a string 2) discover all the relevant patterns in the string 3) reproduce all the relevant patterns in the string closely enough to infallibly fool an observer peace fifthmonarchyman
fifthmonarchman: I convert the string into numbers so you won’t be able to cheat by borrowing information from the original string on the sly. You still make no sense whatsoever. Shakespeare, you work in English. Joey, you have to figure out how to rhyme with a number substitute and no decoder ring. Go for it boys. The first one to write a Shakespearean sonnet wins! Zachriel
ZAc says Then what’s the point of putting it into numbers? I say, once again for probably the fourth time I convert the string into numbers so you won't be able to cheat by borrowing information from the original string on the sly. There are only two ways to produce a Shakespearean sonnet 1) be Shakespeare 2) copy Shakespeare I remove the string from it's context in order to take the second option off the table peace fifthmonarchyman
fifthmonarchyman: Infallibility means that no matter how many times the observer is shown his error he is unable to ascertain the difference in the strings. If you have to show someone their error, then it's not infallible by definition. fifthmonarchyman: My hypothesis is that you can’t fool most of the people all of the time. Okay. So instead of infallible, we have a statistically significant effect. Zachriel
Zac says, Your test required infallibility. If even one person disagrees then you have an indeterminate results. I say, Infallibility means that no matter how many times the observer is shown his error he is unable to ascertain the difference in the strings. You say, And in any reasonable population, you’ll always have a few who disagree. I say, Agreed, You can fool all of the people some of the time and some of the people all the time. My hypothesis is that you can't fool most of the people all of the time. peace fifthmonarchyman
fifthmonarchyman: If the specification for the original string is intelligible English and your algorithm produces a string that when translated yields intelligible English then it passes the test. Shakespeare can't do that unless he writes it in English first. Then what's the point of putting it into numbers? Does the algorithm work only in numbers? You're not making much sense. You might try to explain the algorithm in steps. 1. create poetry in English 2. translate it into numbers 3. people evaluate the string of numbers, everyone everywhere must agree whether the string of numbers exhibits (what?) Zachriel
zac says So you’re saying you show people a set of numbers, and they can tell, if they were translated back into English, whether they would constitute intelligible English? I say, If the specification for the original string is intelligible English and your algorithm produces a string that when translated yields intelligible English then it passes the test. you say, Is that your test I say, My test is simply an evaluation of whether your lossy algorithm is capable of producing an Irreducibly Complex artifact. Is there something here that is hard to understand? peace fifthmonarchyman
fifthmonarchyman: Actually if we chose to we can represent two Shakespearean sonnets numerically and compare your string to them both at the same time. So you're saying you show people a set of numbers, and they can tell, if they were translated back into English, whether they would constitute intelligible English? Is that your test? fifthmonarchyman: Actually if we chose to we can represent two Shakespearean sonnets numerically and compare your string to them both at the same time. Your test required infallibility. If even one person disagrees then you have an indeterminate results. And in any reasonable population, you'll always have a few who disagree. Zachriel
ZAc says He wouldn’t recognize it. I say, He would not recognize his sonnet if it was translated into Chinese or binary either. Your point is? He does not have to speak every language to be able to compose translate-able sonnets does he? I say, The infallible means that only an exact match will work because some people will certainly consider that the reasonable criteria. I say, Actually if we chose to we can represent two Shakespearean sonnets numerically and compare your string to them both at the same time. An observer would still be able to tell the fake from the originals. I've tried it and it works so far You say, Say we give you two sets of numbers. How would you compare them in terms of your test? I say. Take a look at the original paper my game is based on to get the idea. We look at what patterns are seen in the original that are not in the fake. fifthmonarchyman
Dionisio asks, By the way, can you reveal the meaning of the name you use here? 5th monarchy man? is it related to a historical fact? I say, The meaning is multifaceted. It's related to a historical fact an important text and to a famous individual among other things. I could give you an earfull if we ever were to share a beverage on a front porch but to get into all that now would derail this thread for sure. So I'll pass. peace fifthmonarchyman
fifthmonarchman: Of course he could pass it. He did pass it provided my string is a numerical representation of one of his sonnets He wouldn't recognize it. fifthmonarchman: precisely stated the algorithm is supposed to produce a string that would convince an observer infallibly that the specification observed in original string’s has been met. The infallible means that only an exact match will work because some people will certainly consider that the reasonable criteria. Your use of numbers to replace words does nothing that we can see. Say we give you two sets of numbers. How would you compare them in terms of your test? Zachriel
Zac says If you mean a structure such that removing a part causes it to lose its function, then the mammalian middle ear with a known evolutionary pathway meets the definition. I say, by Irreducible complexity I mean from here http://en.wikipedia.org/wiki/Integrated_information_theory In a system composed of connected "mechanisms" (nodes containing information and causally influencing other nodes), the information among them is said to be integrated if and to the extent that there is a greater amount of information in the repertoire of a whole system regarding its previous state than there is in the sum of the all the mechanisms' considered individually. In this way, integrated information does not increase by simply adding more mechanisms to a system if the mechanisms are independent of each other. peace fifthmonarchyman
ZAC said, Heh. Could Shakespeare pass your test? He would probably think it’s just a list of numbers. I say, Of course he could pass it. He did pass it provided my string is a numerical representation of one of his sonnets Zac said, If you want an algorithmic solution, you have to be very precise in what the algorithm is supposed to do. I say, I agree, precisely stated the algorithm is supposed to produce a string that would convince an observer infallibly that the specification observed in original string's has been met. You say, What observers? What criteria do they use? I say, Just like any Turing test any observer can judge for his self how good your algorithm is. The observer gets to subjectively choose his own criteria for deciding whether or not the second string is like the first. zac said, If we provide a random list of numbers from 1-9999, and they say it looks enough like the other list, it could be considered a positive result. Other observers will expect it to match exactly, so it will be a negative result. I say If an observer feels that a random list is the same as the original string he will gently be informed of his error. If he is unable to learn from his mistake you will have fooled him infallibly and passed the test in the subjective opinion of that observer. If you ever manged to fool a significant number of observers you would be on your way to falsifying my hypothesis That is how the game works. It usually only takes a few corrections before an observer can easily pick out the fake, In fact so far it has been laughably obvious for the most part. I'm sure that more talented programers could eventually make it a contest. But I confident in the math that the test will never be passed. peace fifthmonarchyman
Zachriel:
As for ATP synthesis, there is evidence the current system came about through a process of optimization.
Unguided evolution doesn't do optimization. And Darwin didn't know about ATP synthase.
The problem is that the definition of IC and the definition of CSI/FSCI/dFSCI/bCSI/FSCO-I/LCCSI keeps changing.
Evidence please.
If you mean a structure such that removing a part causes it to lose its function, then the mammalian middle ear with a known evolutionary pathway meets the definition.
Except there isn't any known evolutionary pathway for the mammalian inner ear. Zachriel is either lying or ignorant as it doesn't even know what genes were involved and without that there cannot be a known evolutionary pathway. Joe
fifthmonarchyman: All you have to do is write an algorithm that will mimic it closely enough to infallibly fool an observer. Your test appears to be incoherent. If you want an algorithmic solution, you have to be very precise in what the algorithm is supposed to do. Using an observer test is not necessary impossible, but what observers? What criteria do they use? If we provide a random list of numbers from 1-9999, and they say it looks enough like the other list, it could be considered a positive result. Other observers will expect it to match exactly, so it will be a negative result. At least our fanciful test was coherent; measuring the applause of Elizabethan theater audiences. Zachriel
fifthmonarchyman: The algorithm simply follows the predetermined decision of the programer it has to be that way. Yes. That's what we mean by an algorithm. fifthmonarchyman: If you or anyone else think that IC stuff can be produced algorithmically go for it produce an algorithm and put it to the test The problem is that the definition of IC and the definition of CSI/FSCI/dFSCI/bCSI/FSCO-I/LCCSI keeps changing. If you mean a structure such that removing a part causes it to lose its function, then the mammalian middle ear with a known evolutionary pathway meets the definition. fifthmonarchyman: All you have to do is write an algorithm that will mimic it closely enough to infallibly fool an observer. Heh. Could Shakespeare pass your test? He would probably think it's just a list of numbers. Zachriel
gpuccio: So, environment transfers information to the genome about how to use a proton gradient to build ATP? Yes, it does. While the evolution of metabolism is still enigmatic, there are other examples of complex structures evolving to show how the environment contributes to adaptation. It requires feedback via reproductive advantage, and a pathway to incremental improvement. As for ATP synthesis, there is evidence the current system came about through a process of optimization. See Ebenhöh & Heinrich, Evolutionary optimization of metabolic pathways. Theoretical reconstruction of the stoichiometry of ATP and NADH producing systems, Bulletin of Mathematical Biology 2001. gpuccio: Interesting indeed. The idea has been around for a while now. See Darwin 1859. Zachriel
744 fifthmonarchyman Thank you for your commentary, which points to some important issues associated with human communication. Also, you clearly highlighted a problem we have encountered when trying to understand Gary S. Gaulin's paper:
Gary S. Gaulin needs to be able to articulate his ideas in a way that we -the unwashed masses- can comprehend if he wants to get a proper hearing here. I truly sympathize with his plight. I know what it is like to believe you are on to something important only to be met with confused blank stares. It is beyond frustrating.
On top of that, to make things worse, communicating with me is not an easy task for anyone out there. I'm not an easy interlocutor for the reasons I have listed in previous posts, which might sound as joking, but are not. By the way, can you reveal the meaning of the name you use here? 5th monarchy man? is it related to a historical fact? Dionisio
Zachriel the projectionist:
You claimed you had support for your position, but it seems to be a matter of you having preconceptions, then claiming support which you can’t provide.
Exactly what we have been saying to you and your ilk for over 150 years. Go figure. Joe
Zachriel is still confused:
Indeed, if you have an oracle that can recognize meaningful phrases, then an evolutionary algorithm can find long meaningful phrases.
The alleged "evolutionary algorithm" is guided towards the solution. Real-life evolution is not supposed to be guided- it isn't supposed to have goals. Evolutionary algorithms to not simulate unguided/ blind watchmaker evolution. Zachriel is equivocating, as usual. Joe
DNA Jock is quick to criticize ID but very slow in trying to defend evolutionism. Joe
gpuccio @ 728
No. It is perfectly possible to adjust for the number of tests. For example, by adjusting our alpha level. The important point is that in design detection we are looking for extremely low probabilities. In that context, the number of possible tests is simply irrelevant.
Possible to adjust, but quite difficult; I have yet to see an IDist even try. When you assert “the number of possible tests is simply irrelevant” you are simply assuming your conclusion. No need to bother with any pesky math at all.
Is it so difficult to understand that you must give a specification (in meaning or function) which is independent from the specific bits you are observing?
The problem with your method is that the specification you give is NOT independent from the specific bits you have observed. As you have noted “sequence X maps to function Y”. You consider the specification “independent” because you don’t understand the mapping for proteins. The simple fact is that when you chose a “function”, you are indulging in TSS. You have never tested for Adenosine Pentaphosphate Synthase, because you have never observed it. As I noted in #161 The bullet holes have been in the wall since before any humans existed. Along comes John Walker: “Look at this bullet hole I found”. Others find a tight grouping of bullet holes around this one. Along comes gpuccio, paints a circle around the bullet holes and calls his circle “the functional specification for ATP synthase”. Does some calculations. Along comes Praveen Nina and others, and points to a bullet hole that is a long, long way away from Walker’s tight grouping, but still falls within gpuccio’s original specification “ATP synthase”. His calculations are destroyed. [because the sequence requirements for ATPsynthase have been blown wide open] In light of Nina et al, gpuccio does two things: 1) He re-draws his circle so that it now excludes Alveolata, and renames the Walker circle “the traditional ATP synthase” 2) he draws a brand-spanking-new circle around the Alveolata bullet hole(s) because it is a “very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution”. This is classic Texas Sharp Shooter. DNA_Jock
Dionisio, You are asking exactly the right kinds of questions. Gary S. Gaulin needs to be able to articulate his ideas in a way that we the unwashed masses can comprehend if he wants to get a proper hearing here. I truly sympathize with his plight. I know what it is like to believe you are on to something important only to be met with confused blank stares. It is beyond frustrating. But the miracle of human language is such that any concept can be shared if you are willing to go to the considerable effort that is required. Take my conversation with Zac for example it has been a long hard trek but I believe he now is beginning to understand my argument. The near future will be very illuminating in this regard. Let's see if he gets it. peace fifthmonarchyman
#741 Gary S. Gaulin
D: Has anyone ever tested your theory or model, as far as you’re aware of, on cell fate specification/determination mechanisms?
Considering how the ID Lab model has been online since 2011 the question should be: Why am I not aware of a properly funded study to determine how well the model performs for modeling cell fate specification/determination mechanisms? Is academia asleep at the wheel?
Interesting questions indeed. Maybe someone else reading this thread would like to attempt answering them? I can't. This is above my pay grade. :) Perhaps KF, gpuccio or someone else can give us a hand with all these interrogations? Their insightful comments could shed additional light on this discussion that your interesting theory has generated here? Dionisio
#740 Gary S. Gaulin
D:Perhaps your explanation will generate a few questions about missing details? Maybe someone here in UD can help us to created a separate thread dedicated to this interesting issue we’re looking into here?
[1]Even though I look at it as being pompous to expect more detail than that from someone a billion dollar system only funds the destruction of, the idea of a separate thread sounds fun. My day job has been keeping me unusually busy and I don’t have much free time but would do my best to keep up with comments. [2]I would love to be able to already have a model to show all that you asked for. But I only have so much time and cannot afford proper research. [3]All that had to be wasted, to suit academic politics, not science.
RE: [1] I don't quite understand what the first statement in bold characters mean. Can you explain it another way? Thank you. RE: [2] Apparently I misunderstood your paper. I thought your 46-page PDF document is the description of your theory, that could be applied (by someone who understands it as well as you do) on any biological system or subsystem, in order to model their built-in mechanisms and their origin. Can you explain this? RE: [3] I don't quite understand what that last statement means. Note: Please, be aware that my reading comprehension is rather poor. My communication skills are almost nonexistent. My IQ score is about the same as my age, but it changes in the opposite direction. When someone says a joke at a weekend social gathering, I usually get it by Tuesday, after my wife explains it to me. Perhaps that's one of the reasons I like to ask simple questions. But you don't have to answer them all. I understand your daily job keeps you extremely busy, leaving no spare time to write in this forum. I appreciate the time you have taken to answer my questions so far, and look forward to hearing more from you on the interesting theory you have developed. I'll try to read your 46-page PDF document at my slow pace, to see if I can understand it and later test it on some of the examples posted in the 'third way' thread. Dionisio
Dionisio:
Has anyone ever tested your theory or model, as far as you’re aware of, on cell fate specification/determination mechanisms?
Considering how the ID Lab model has been online since 2011 the question should be: Why am I not aware of a properly funded study to determine how well the model performs for modeling cell fate specification/determination mechanisms? Is academia asleep at the wheel? Gary S. Gaulin
Dionisio:
Perhaps your explanation will generate a few questions about missing details? Maybe someone here in UD can help us to created a separate thread dedicated to this interesting issue we’re looking into here?
Even though I look at it as being pompous to expect more detail than that from someone a billion dollar system only funds the destruction of, the idea of a separate thread sounds fun. My day job has been keeping me unusually busy and I don't have much free time but would do my best to keep up with comments. I would love to be able to already have a model to show all that you asked for. But I only have so much time and cannot afford proper research. All that had to be wasted, to suit academic politics, not science. Gary S. Gaulin
737 Gary S. Gaulin #738 follow-up Has anyone ever tested your theory or model, as far as you’re aware of, on cell fate specification/determination mechanisms? Is there any documentation of any of those tests? Thank you Dionisio
737 Gary S. Gaulin
I explained a model (that does not leave “intelligence” out of the equation) for atom on up origins experimentation. Expecting more than that from a scientific theory is a red-herring argument, from those who seem too scientifically lazy to even test a scientific model that was given to them to test for themselves.
Have you ever tested that model yourself? Has someone else ever tested that model, as far as you're aware of? Can you point to the documentation of some of those tests? Thank you. Dionisio
Dionisio:
Are you familiar with the mechanisms operating in the asymmetric mitosis in human development? Can you apply your theory to describe the origin of any of those mechanisms?
I explained a model (that does not leave "intelligence" out of the equation) for atom on up origins experimentation. Expecting more than that from a scientific theory is a red-herring argument, from those who seem too scientifically lazy to even test a scientific model that was given to them to test for themselves. Gary S. Gaulin
#732 Gary S. Gaulin RE: #735 follow-up Perhaps your explanation will generate a few questions about missing details? Maybe someone here in UD can help us to created a separate thread dedicated to this interesting issue we're looking into here? Thanks. Dionisio
#732 Gary S. Gaulin RE: #734 follow-up In the thread about 'the third way' there are more examples of biological mechanisms that we could use to illustrate the algorithms of your theory. But perhaps the asymmetric mitotic division is better known and much easier to describe in spatiotemporal terms. I look forward to reading the detailed description of your theory applied to the given example. Thank you. Dionisio
#732 Gary S. Gaulin RE: #733 follow-up Your 46-page document seems very interesting, But I'm having problems associating it to a real example, like the asymmetric mitotic division, in spatiotemporal terms. Please, when applying your theory to describing the origin and functioning of the asymmetric mitotic mechanisms, make sure your algorithms clearly explain how different proteins involved in those mechanisms appear in the right locations, in the right amounts, at the right time. This mechanisms include, among others, the centrioles duplications, timely centrosomes migration to the poles, spindle checkpoints, kinetochores connection to microtubules, tension detection, associated regulatory networks and signaling pathways, etc. Thank you. Dionisio
#732 Gary S. Gaulin Thank you for your post. I looked at your 46-page paper, but could not read it all yet, because it is not easy for me to read it. Perhaps a real biology example would help. Are you familiar with the mechanisms operating in the asymmetric mitosis in human development? Can you apply your theory to describe the origin of any of those mechanisms? Thank you again. :) Dionisio
Dionisio:
I still don’t understand how we got these biological systems where different proteins are in the right location, in the right amount, at the right time, for so many different situations. Do you have any example of a step-by-step description of the process that created the complex mechanism researchers observe these days? Thank you.
In a single illustration intelligent cause (from behavioral cause) goes stepwise like this: https://sites.google.com/site/intelligenceprograms/Home/Causation.jpg The online text operationally defines "intelligence", has links to models, explains the step-by-step in the Introduction then in more detail in separate sections (further subdivided into the four required parts of the circuit) for Unimolecular Intelligence, Molecular Intelligence, Cellular Intelligence, Multicellular Intelligence and Human Multicellular Intelligence. There are also self-assembly ideas to help conceptualize where cellular organelles always came from by just happening (not evolving). https://sites.google.com/site/theoryofid/home/TheoryOfIntelligentDesign.pdf Researchers regularly "observe" things happen, without knowing much about their mechanisms work. What matters the most is understanding what is being observed. Scientific evidence has for decades been showing to me that living genomes are a cognitive circuit made of molecular components. Someone else who believes that it's impossible for intelligence to exist in living genetic systems will see what they want instead, when observing the same thing. Gary S. Gaulin
Just in case there are any lurkers who are coming in late to this conversation I will reiterate the no cheating clause. Any algorithm claiming to meet the challenge be checked to verify that it does not in any way reference the original string. peace fifthmonarchyman
zac says, One or none, depending on the algorithm. I say, no depending on the arbitrary decision of the programer. The algorithm simply follows the predetermined decision of the programer it has to be that way. It is this sort of silly confusion that lead me to define non-computable as I did in the first place. Zac said, You claimed you had support for your position, but it seems to be a matter of you having preconceptions, then claiming support which you can’t provide. I say, The support I will provide is not philosophical it is empirical. To paraphrase the wisest philosopher who ever lived. "If he did not believe Plato and he did not believe Godel he would not believe a even a slam dunk mathematical Proof." Here is the deal Zac finally after 2500 years we have moved from the time of philosophical argument to the time of empirical demonstration. If you or anyone else think that IC stuff can be produced algorithmically go for it produce an algorithm and put it to the test let me get you started beow is a numerical representation of an Irreducibly Complex artifact. All you have to do is write an algorithm that will mimic it closely enough to infallibly fool an observer. You have your challenge now Put up or shut up 3972 1401 2424 5547 5547 2424 2822 652 873 1519 2098 2316 3739 4279 1223 3588 4396 846 1722 1124 40 4012 3588 5207 846 3588 1096 1537 4690 1138 2596 4561 3588 3724 5207 2316 1722 1411 2596 4151 42 1537 386 3498 2424 5547 3588 2962 1473 1223 3739 2983 5485 2532 651 1519 5218 4102 1722 3956 3588 1484 5228 3588 3686 846 1722 3739 1510 2532 4771 2822 2424 5547 3956 3588 1510 1722 4516 27 2316 2822 40 5485 4771 2532 1515 575 2316 3962 1473 2532 2962 2424 5547 4412 3303 2168 3588 2316 1473 1223 2424 5547 4012 3956 4771 3754 3588 4102 4771 2605 1722 3650 3588 2889 3144 1063 1473 1510 3588 2316 3739 3000 1722 3588 4151 1473 1722 3588 2098 3588 5207 846 5613 89 3417 4771 4160 3842 1909 3588 4335 1473 1189 1487 4459 2235 4218 2137 1722 3588 2307 3588 2316 2064 4314 4771 1971 1063 3708 4771 2443 5100 3330 5486 4771 4152 1519 3588 4741 4771 3778 1161 1510 4837 1722 4771 1223 3588 1722 240 4102 4771 5037 2532 1473 3756 2309 1161 4771 50 80 3754 4178 4388 2064 4314 4771 2532 2967 891 3588 1204 2443 5100 2590 2192 2532 1722 4771 2531 2532 1722 3588 3062 1484 1672 5037 2532 915 4680 5037 2532 453 3781 1510 3779 3588 2596 1473 4289 2532 4771 3588 1722 4516 2097 3756 1063 1870 3588 2098 1411 1063 2316 1510 1519 4991 3956 3588 4100 2453 5037 4412 2532 1672 1343 1063 2316 1722 846 601 1537 4102 1519 4102 2531 1125 3588 1161 1342 1537 4102 2198 601 1063 3709 2316 575 3772 1909 3956 763 2532 93 444 3588 3588 225 1722 93 2722 1360 3588 1110 3588 2316 5318 1510 1722 846 3588 2316 1063 846 5319 3588 1063 517 846 575 2937 2889 3588 4161 3539 2529 3588 5037 126 846 1411 2532 2305 1519 3588 1510 846 379 1360 1097 3588 2316 3756 5613 2316 1392 2228 2168 235 235 3154 1722 3588 1261 846 2532 4654 3588 801 846 2588 5335 1510 4680 3471 2532 236 3588 1391 3588 862 2316 1722 3667 1504 5349 444 2532 4071 2532 5074 2532 2062 1352 3860 846 3588 2316 1722 3588 1939 3588 2588 846 1519 167 3588 818 3588 4983 846 1722 846 3748 3337 3588 225 3588 2316 1722 3588 5579 2532 4573 2532 3000 3588 2937 3844 3588 2936 3739 1510 2128 1519 3588 165 281 1223 3778 3860 846 3588 2316 1519 3806 819 3588 5037 1063 2338 846 3337 3588 5446 5540 1519 3588 3844 5449 3668 5037 2532 3588 730 863 3588 5446 5540 3588 2338 1572 1722 3588 3715 846 1519 240 730 1722 730 3588 808 2716 2532 3588 489 3739 1163 3588 4106 846 1722 848 1438 618 2532 2531 3756 1381 3588 2316 2192 1722 1922 3860 846 3588 2316 1519 96 3563 4160 3588 3361 2520 4137 3956 93 4189 4124 2549 3324 5355 5408 2054 1388 2550 5588 2637 2319 5197 5244 213 2182 2556 1118 545 801 802 794 415 3748 3588 1345 3588 2316 1921 3754 3588 3588 5108 4238 514 2288 1510 3756 3440 846 4160 235 2532 4909 3588 4238 3972 1401 2424 5547 5547 2424 2822 652 873 1519 2098 2316 3739 4279 1223 3588 4396 846 1722 1124 40 4012 3588 5207 846 3588 1096 1537 4690 1138 2596 4561 3588 3724 5207 2316 1722 1411 2596 4151 42 1537 386 3498 2424 5547 3588 2962 1473 1223 3739 2983 5485 peace fifthmonarchyman
Zachriel: "Of course information has to be transferred from the environment to the genome. That’s how it works." So, environment transfers information to the genome about how to use a proton gradient to build ATP? By assembling thousands of specific aminoacids? Interesting indeed. gpuccio
DNA_Jock: "I agree that the location(s) of the bullet hole(s) constrains where we apply paint, but it does not determine where we apply paint, as you admit here (“We can choose”) and you illustrate below." That is simply because we can use any functional definition. The procedure infers design for any functional definition which implies an extremely high complexity. It has nothing to do with TSS. It derives from the simple observation that in huge search space the functional spaces which are really small are completely beyond the range of random search. It's the same reason why the molecules in a gas never assume an ordered configuration (you know, it's called second law of thermodynamics, and it is a law of information). "Multiple issues here: 1 Never mind “concentric circles”, the target is only contiguous if we are painting in “function space”, not in “sequence-space”. You have no clue about its shape in either case." No. The idea of concentric circles was about levels of the same function. If I define the level as "at least x", I am excluding the lower levels (the outer circles). IOWs, I am making the target smaller. My point was, and is, that we cannot make it "arbitrarily smaller", because otherwise it will no more include the observed hit. I am speaking of levels of the defined function. What is your problem? If my enzyme accelerates the reaction of n times, I cannot define the target as "accelerating the reaction 2n times or more", because in that way I certainly make it smaller, but so small that my hit is no more in it. "2 As you freely admit, you are selecting the size of the target based on the observed bullet holes. This is TSS." No. This is simply using the observed effect size to define a rejection region. It has nothing to do with TSS. "Thereby dramatically reducing the degree of constraint…" No. As said, we can choose any function. The only constraint here is that the function must really have existed before we describe it. We cannot "invent" a function. We can only choose what to describe among things that really exist. "If you are using Fisherian testing, that “complete liberty” is a death-knell for the validity of your test. As I have explained previously.(#556) when I said “Fisherian testing is also sensitive to the number of tests you might have performed. Imagine the jelly bean researchers had tested green first, then stopped…” and “If you look at your data, and then start doing Fisherian tests on it, you will produce garbage results.”" No. It is perfectly possible to adjust for the number of tests. For example, by adjusting our alpha level. The important point is that in design detection we are looking for extremely low probabilities. In that context, the number of possible tests is simply irrelevant. That's why my example of a 600 character sequence hold true for my specification "having good meaning in English". And it holds true for the specification "having good meaning in any known language". And it holds true for the specification "having good meaning in any pre-existing language that we may discover in the future, on other planets, everywhere". Why? Because my lower level for the functional complexity of that sequence in English is 637 bits, with Roy's correction. The true functional complexity is certainly much higher. So, let's say that there are 100000 known languages with a good structure. Let's say that that means about 17 bits more in the target space. The functional complexity's lower threshold remains 620 bits. Now, how many languages can exist in the whole universe? 500 bits is the number of quantum states from the Big Bang to now. The number of languages that really existed anywhere, any time, in this universe is certainly much lower. Let's say 2^200? 200 bits? That would leave us still with 420 bits of functional information. What we cannot do is to build a new language based on the characters in the observed string. So, I cannot say: From now on, I define a new language where l.qvff..stscilrriegakbb means "My" oprzbdfbnguio.h means "dog" odjjsvamrcxly means "is eating". And so on. That's what I mean by saying: "Please remember, you must not use the specific contingency in the sequence (the specific characters)." You say: "3 You CAN make the target arbitrarily small, subject to certain constraints, as you illustrate below." No. Not arbitrarily small, as I have explained. You say: "And so long as I can continue to generate “explicit functional motivations” (which is fairly easy), I can keep making my target smaller and smaller and smaller. " No. I can only choose functions with a smaller target among those which exist. I cannot make a function that does not exist. As you cannot invent a meaning for my random sequence. And about the level of the function, I can make it only as small as the observed object allows. You say: "As you know, I have never accepted that texts are a relevant analogy. " They are not an analogy. They are a true example of design inference. No analogy at all. You say: "I do not understand your “specific contingency in the sequence” requirement here, and it strikes me as problematic." I have explained it very clearly. The only problematic thing is that you insist in not wanting to understand it. As shown by what follows. You say: "Am I allowed to use the text to come up with my specification? If I am not allowed, then the analogy fails: the protein-specifier IS using the observed functionality to come up with the specification." You are allowed to use the text, but not the specific bits of it, in your specification. The same as for proteins. "A sequence which has good meaning in English" is valid" "A sequence which starts with "Why" and has n "a", m "b", and where "Why" is followed by "is", and so on, is not valid. IOWs, you cannot describe the sequence in detail while specifying it. Nor use the information about the sequence itself. So, "a proteins which accelerates reaction x" is valid. A proteins which starts with starts with "MLDDRARMEA", and has n glycines and m alanines, and so on, is not good. You say: "If I am allowed, then the challenge becomes trivial. My specification takes the form “After decryption with algorithm X, the string becomes a passage in “good English” that describes [insert arbitrarily narrow specification of the passage’s content here]”" It is as trivial as it is wrong. That's exactly what you cannot do. how would you build the algorithm X, if not by inventing it explicitly for my random sequence? I am good at it too! Let's try. An algorithm which says: The first character, whatever it is, becomes "W" The second becomes "h". The third becomes "y" The fourth becomes a space. The fifth becomes "i" The sixth becomes "s". And so on. And, miracle! we have the Shakespeare's sonnet. See, I am even better than you are: I don't even need to know the details of the random sequence. And I am better than Dawkins: I have got a whole sonnet. Is it so difficult to understand that you must give a specification (in meaning or function) which is independent from the specific bits you are observing? And that you cannot input the necessary functional information in your specification? But I have no hope that I will ever convince you. Maybe we should stop here, or wherever you want. gpuccio
Zachriel to fifthmonarchyman:
You claimed you had support for your position, but it seems to be a matter of you having preconceptions, then claiming support which you can’t provide.
That's a good capsule summary of ID. keith s
gpuccio: An oracle in this case means intelligent selection, added information. Of course information has to be transferred from the environment to the genome. That's how it works. gpuccio: The abilities of an evolutionary algorithm are severely limited by the pre-defined “meanings” and procedures in the algorithm. It's the same way people learn. Babies smack their lips, and their parents reward them with affection for having said "Papa!" gpuccio: How can you compare that with the conscious appreciation of meaning, which can recognize tons of new original meaning, and generate it? Meaningful phrases is YOUR criteria! Your claim was that the sparseness precludes random search, which is correct, but it doesn't preclude evolutionary search. Zachriel
Zachriel: "Random search, but not necessarily for evolutionary search. Indeed, if you have an oracle that can recognize meaningful phrases, then an evolutionary algorithm can find long meaningful phrases. That would only be true if they were connected within sequence space, no matter how rare they might be." Yes, and so? An oracle in this case means intelligent selection, added information. And the functional space must be connected. The oracle would be complex, and aware of which phrases are meaningful. The abilities of an evolutionary algorithm are severely limited by the pre-defined "meanings" and procedures in the algorithm. How can you compare that with the conscious appreciation of meaning, which can recognize tons of new original meaning, and generate it? Whatever you can say, the inner experience: "This means something" is the true driver of all our cognition. An algorithm, however complex, never has that experience. An algorithm simply has no idea of what "meaning" means. gpuccio
fifthmonarchyman: How many repeating decimals does it take to determine they will always repeat? One or none, depending on the algorithm. In the case of one, it expands the fraction keeping track of the state of each result. When a state repeats, then the algorithm stops, (unless it ends otherwise). In the case of none, it converts the fraction to lowest terms, then calculates the prime factors of the denominator. If there is a prime factor for the denominator that is not also a prime factor of the digital base, then it is a repeating expansion. These are the same standard algorithms people use. Repeating numbers are computable. That means an algorithm can calculate an arbitrary nth digit in finite time. Zachriel
Zac said. That is incorrect. There are algorithms that can recognize repeating decimals. They stop. I say, How many repeating decimals does it take to determine they will always repeat? How does the algorithm know that the next one will not vary? Peace fifthmonarchyman
fifthmonarchyman: It just calculates blindly forever. That is incorrect. There are algorithms that can recognize repeating decimals. They stop. fifthmonarchyman: Your worldview just has not equipped you to see what is obvious to folks on my side of the divide. You claimed you had support for your position, but it seems to be a matter of you having preconceptions, then claiming support which you can't provide. Zachriel
zac said, And a simple algorithm can calculate one-third to an arbitrary numbers of decimal places—just like Shakespeare. i say, yes an algorithm is an algorithm nothing special about who does the calculating You say, Indeed, while your Shakespeare intuits, an algorithm can determine that it is a repeating decimal, which means it knows the value of any arbitrary digit. I say No!!!!! an algorithm does not know anything!! It just calculates blindly forever. It takes Shakespeare to know. To stand with hands out and say stop I've seen enough. Zac this is getting to be repetitive. I know I will never convince you of this stuff. Your worldview just has not equipped you to see what is obvious to folks on my side of the divide. lets just agree to disagree and get to the science. That is unless you have some genuine questions peace fifthmonarchyman
gpuccio @ 691 :
So, you are admitting that the functions exist before. How we define them is a human construct. Well, that is exactly my point. You prefer to call “target” the human definition. I prefer to call “target” the existing function. What’s the difference?
The difference is whether the application of paint is an example of the TSS fallacy or not. IOW the subject under discussion…
The important point is: The target which already existed (the function” limits and constrains the human target (the definition of the function and of its level). That is my point. And that is where you are wrong. We can choose different “function-targets”. We can define different functions which already existed. But we cannot invent a function.
I agree that the location(s) of the bullet hole(s) constrains where we apply paint, but it does not determine where we apply paint, as you admit here (“We can choose”) and you illustrate below.
IOWs, if we are defining a “definition-target” for a “function-target”, and if the “function-target” is made of concentric circles, we can choose the threshold as “the three innermost circles” only if the hit is in them, otherwise we must extend out target so that it includes the hit, and therefore we make it bigger. That’s why your point that we can make our target “arbitrarily small” is simply wrong.
Multiple issues here: 1 Never mind “concentric circles”, the target is only contiguous if we are painting in “function space”, not in “sequence-space”. You have no clue about its shape in either case. 2 As you freely admit, you are selecting the size of the target based on the observed bullet holes. This is TSS. 3 You CAN make the target arbitrarily small, subject to certain constraints, as you illustrate below.
IOWs, we can choose different “function-targets” for our “definition-target”, but we are constrained by the existing “function-targets”. And we can choose arbitrary thresholds in our definition, but we are constrained by the observed effect size.
Constrained, not determined. Thus TSS.
1) How do we choose the function? It’s simple. Provided that a) is satisfied (the function was already existing and was definable even before the observation), we can define ant function we like (obviously, we will chose some function which is implemented by the observed object). There is no problem here. Why? Because in my dFSCI procedure I have explicitly said that any observer can define any function for any object, including many different functions for the same object.
Thereby dramatically reducing the degree of constraint…
IOWs, it is sufficient to be able to find one function which, explicitly defined, implies dFSCI to infer design. So, we have complete liberty in choosing the target, because the target itself is not so important: what is important is the complexity linked to it.
Yes. And [my emphasis] this is very problematic.
2) How do we define the level of function necessary to consider it present? That is simple too. It’s what we always do in Fisherian inference. The level is: “at least the level of function observed in the object”.
If you are using Fisherian testing, that “complete liberty” is a death-knell for the validity of your test. As I have explained previously.(#556) when I said “Fisherian testing is also sensitive to the number of tests you might have performed. Imagine the jelly bean researchers had tested green first, then stopped…” and “If you look at your data, and then start doing Fisherian tests on it, you will produce garbage results.”
And, if we use the general principle of defining the threshold as “at least x”, where x is the observed effect size, we have no arbitrary choice in the definition of the threshold: we are completely constrained.
Which would be all fine and dandy if you had defined the function before you saw the data. Regarding my adenyl kinase example, you note:
The second definition is obviously a subset of the first, and it has explicit functional motivations
And so long as I can continue to generate “explicit functional motivations” (which is fairly easy), I can keep making my target smaller and smaller and smaller. Note that your willingness to move from definition A to definition B is an example of re-painting the target more narrowly in light of new information. Likewise your wonderful move from “ATP synthase” to “traditional ATP synthase”.
Now, the final challenge. Let’s go back to my original example of language. After all, your objections are methodological, and if they are true they must apply to language too. So, a challenge to you, in two parts. 1) Here is a 600 character sequence, which I have just generated by an online random character generator (no idea how good, truly random, it is, but I think it will do). So, here is the sequence: [seq snipped] Now, I ask you to: 1a) Define any “definition-target” you like for that sequence. Please remember, you must not use the specific contingency in the sequence (the specific characters).
As you know, I have never accepted that texts are a relevant analogy. I do not understand your “specific contingency in the sequence” requirement here, and it strikes me as problematic. Am I allowed to use the text to come up with my specification? If I am not allowed, then the analogy fails: the protein-specifier IS using the observed functionality to come up with the specification. If I am allowed, then the challenge becomes trivial. My specification takes the form “After decryption with algorithm X, the string becomes a passage in “good English” that describes [insert arbitrarily narrow specification of the passage’s content here]” Like I have always maintained, the meaning of texts is a useless analogy. The evolution of languages might be a more fruitful analogy. My take-home: because of the multi-dimensional nature of the “function-space”, attempts to apply paint after observing activities is fraught with problems of post-hoc invalidity. Additionally, you have absolutely no clue about the size of the target in “function-space”, nor the shape & size of the target in sequence-space. Finally, using the threshold “effect size at least as great as the observed effect size” does not get you away from the TSS problem (since you chose your function), but it leads to a whole new challenge in constructing your null. You would be better off using a “minimum selectable” threshold, but I quite understand your unwillingness to go there. DNA_Jock
fifthmonarchyman: No Shakespeare has not computed 1/3 he has “intuited” it after a calculation lasting long enough to satisfy his curiosity And a simple algorithm can calculate one-third to an arbitrary numbers of decimal places—just like Shakespeare. Indeed, while your Shakespeare intuits, an algorithm can determine that it is a repeating decimal, which means it knows the value of any arbitrary digit. Mung: Since you insist on being pedantic, where can I find a copy of Dawkins’ original program? Don't think it was ever published, however, the algorithm —which is what counts — was published by Dawkins in "The Blind Watchmaker". Mung: List of designed features: Population Size Mutation Rate “Genome” Length The character set The “fitness” function Those are called parameters. We can vary those to determine their relationship, and then we can actually measure those for real populations. Mung: Posting your version of the program would accomplish that, would it not? As we said, the algorithm is what matters, not the implementation. In any case, Weasel Evolution http://www.zachriel.com/weasel/ You can adjust the population size and the mutation rate. Notice that we track fitness setbacks and reversions. Here's the central bit of code (in VBA): Do While Best is not equal to Target Mother = Best ... For c = 1 To cx ... ... Child = Mother ... ... For t = 1 To tx ... ... ... If Rnd is less than MutRate Then ... ... ... ... Child = Left(Child, t - 1) & RandomLetter & Right(Child, tx - t) ... ... ... End If ... ... Next t ... If Fitness(Child) > Fitness(Best) Then Best = Child ... Next c ... collect data ... Loop (Had to change a few characters so it would display properly.) If the mutation rate is high or the population size is low, it will take longer to reach the target, if at all. The escape key will stop the program if it doesn't look like it is able to reach the target. gpuccio: I am absolutely convinced that the nature of search spaces and of functions is such that, with the growth of the search space, the complex functional spaces become inevitably so sparse that they evade any random search. That can be already demonstrated for language. Random search, but not necessarily for evolutionary search. Indeed, if you have an oracle that can recognize meaningful phrases, then an evolutionary algorithm can find long meaningful phrases. That would only be true if they were connected within sequence space, no matter how rare they might be. Zachriel
Learned Hand: "Just to clarify, I think my underlying point was that your version of CSI differs from Dembski’s (inter alia) in that you admit the possibility of false positives." I certainly admit the logical possibility of false positives. But I am rather confident that they will never be empirically found. I am not sure that Dembski considers false positives logically excluded. If that is the case, I disagree. My argument is empirical, not logical. "it’s a smorgasbord" It is a fundamental concept, which can be approached in different ways. It's called science. gpuccio
Mung: Perhaps you’d like to post your own version of the program that eliminates the designed aspects? Zachriel: Be happy to, but first let’s resolve you misunderstanding of how the program works. Since you insist on being pedantic, where can I find a copy of Dawkins' original program? I'm attempting to make what I think are reasonable assumptions about the program. If there's a specific version of it out there somewhere you think I ought to be looking at I'd be more than happy to do so. You also as much as admitted that the population size was designed as well. Let's not let that one slip by either. Let's review: Mung: Dawkins designed a program in which the outcome was inevitable. Zachriel: Actually, it’s not inevitable. It depends on population size and mutation rate. And I'm the one playing word games? Are you claiming that in different runs of Dawkins program the mutation rate and population sizes were changed in such a way that the target phrase might never have been attained and Dawkins only chose to publish the one in which it did actually work? List of designed features: Population Size Mutation Rate "Genome" Length The character set The "fitness" function And probably more, but that's a start. Mung: Perhaps you’d like to post your own version of the program that eliminates the designed aspects? Zachriel: Be happy to, but first let’s resolve you misunderstanding of how the program works. Posting your version of the program would accomplish that, would it not? You could post it in increments. Mung
Also, you seem to subscribe to Dembski’s claim that CSI (whether of the Dembski, KF, gpuccio, or now Arrington flavors) cannot generate a false positive. But if the calculation is based explicitly on the state of your knowledge, won’t it generate a false positive whenever you incorrectly believe that no non-design origin is feasible?
The fact that the theory could be falsified by some false positive in the future (for example, by discovering hidden algorithms, at present completely unknown and unforeseeable, which can generate functional complexity) is indeed one of its merits: it simply means that it is a scientific theory, fully falsifiable empirically. Popper would be happy of that.
Just to clarify, I think my underlying point was that your version of CSI differs from Dembski's (inter alia) in that you admit the possibility of false positives. I think that makes it more realistic, if no more useful. And I think it's another strike against the concept of CSI as a coherent, cohesive, strong idea--the more we discuss it, the more it fragments into various different subspecies with different applications. Some are (supposedly) immune to false positives, some aren't, some are useful for design detection, some aren't, some include P(T|H), some don't.... it's a smorgasbord. Learned Hand
fifthmonarchyman: You have given a lot of interesting material, up to your post #711 where you give some detail of you game. I need some time to digest it all. Please, go on. It's very interesting. gpuccio
Zac said, On the one hand you say one-third is non-computable, then you say Shakespeare has computed it when he is subjectively satisfied with the precision. I say, No Shakespeare has not computed 1/3 he has "intuited" it after a calculation lasting long enough to satisfy his curiosity peace fifthmonarchyman
fifthmonarchyman at #651: Another interesting link! Thank you. :) gpuccio
fifthmonarchyman at #643: "When the Kolmogorov complexity in the algorithm exceeds the complexity in the original string you can infer a design." Exactly. In my dFSCI procedure, I say that we must use the Kolmogorov complexity only if the outcome we observe can be explained by an algorithm which is simpler than the outcome itself. Then, the algorithm can be considered as a "compression" of the outcome: IOWs, it is easier for the algorithm to be generated by a random search than for the outcome. But, if the algorithm has still sufficient functional complexity, we can infer design just the same: for the algorithm, if not for the outcome. I usually illustrate that with the example of an algorithm which can compute the decimal digits of pi. Now, the algorithm is certainly more complex than, say, 3.14. But, if we have the first n digits of pi, and n is very big, maybe the algorithm then is simpler than the output. In that case, the true complexity of the output becomes the complexity of the algorithm, IOWs we take the Kolmogorov complexity instead of the "direct" complexity. gpuccio
Me_think says What is the ‘game’ you keep referring to ? I say I know it's an extremely long thread and no one could be expected at this late date to go back through it all. so here goes At a very high level I modified the game described here http://arxiv.org/abs/1002.4592 to take any data string we choose to plug in to it. the process is as follows 1) represent a designed artifact numerically 2) Write an algorithm to reproduce that numeric representation sufficiently enough to fool an observer infallibly. With out borrowing information from the original string 3) plug it in and test to see if you succeeded Right now the game exists only as a crudely coded excel sheet. But repeated trials with different data strings shows it works and supports my position so far. I hoping that a programer friend will help me to convert it to a shareable app. stay tuned peace fifthmonarchyman
fifthmonarchyman at #639: Interesting link! gpuccio
fifthmonarchymanNovember at #631:
gpuccio, I have a practical question about function as a specification. I run a string that represents production throughputs from an assembly line through my game. I discover a complex pattern with a low probability. Later I discover that the patten is caused by the ebb and flow of production surrounding break times. Production goes up at times when folks are focused on the machine and down when they are not. Leaving aside the question of whether the pattern meets the complexity threshold. Is the specification of fluctuations related to attention functional? Is it post hoc? peace
Not sure that I understand the details of your game. I will see if you give them in the following posts. :) gpuccio
fifthmonarchyman: Yes and the things I forget are not a part of me. In other words I’m still the exact same unitary me after I forget and before I had the experience in the first place. Again that is contrary to your "proof" which is based on lossless integration of information. fifthmonarchyman: Are you really so sure that the authors are so blatantly stupid? We never said they were stupid. We said they made assumptions contrary to known understanding of human cognition. fifthmonarchyman: Is it even possible that you are misunderstanding the paper? We know how to read a proof, but sure, we could be mistaken. But you haven't shown how. Indeed, your comments seem in contradiction to the paper. fifthmonarchyman: My definition is merely a more constrained version of the mathematical definition. No, it's not more constrained, but less constrained. Furthermore, it's not the same usage as in the very paper you cite as justification. Misusing an existing term just leads to confusion. fifthmonarchyman: The standard definition leaves room for the oracle as part of the algorithm. For clarity I simply remove that possibility No, you do far more than that. You make one-third "non-computable", which has nothing to do with hypercomputation. fifthmonarchyman: If you assume immortality he can Then so can a Turing machine. fifthmonarchyman: Seriously Shakespeare precisely because he is not-computable can decide that he has calculated long enough for his own subjective purposes. Computable means we can calculate an arbitrary nth digit in finite time. One-third and pi are both computable numbers. (There's an interesting way to calculate arbitrary digits of pi without calculating the full expansion of pi based on chaos theory. See Bailey, Borwein & Plouffe, On the rapid computation of various polylogarithmic constants, Mathematics of Computation 1997.) fifthmonarchyman: If you think you can produce these things algorithmically have at it I’ll plug it into my game and see if it fools the observer infallibly. You haven't provided a coherent test. On the one hand you say one-third is non-computable, then you say Shakespeare has computed it when he is subjectively satisfied with the precision. Zachriel
Learned Hand at #603:
Also, you seem to subscribe to Dembski’s claim that CSI (whether of the Dembski, KF, gpuccio, or now Arrington flavors) cannot generate a false positive. But if the calculation is based explicitly on the state of your knowledge, won’t it generate a false positive whenever you incorrectly believe that no non-design origin is feasible?
The fact that the theory could be falsified by some false positive in the future (for example, by discovering hidden algorithms, at present completely unknown and unforeseeable, which can generate functional complexity) is indeed one of its merits: it simply means that it is a scientific theory, fully falsifiable empirically. Popper would be happy of that. For now, as far as no such falsification has ever been offered, it remains the best explanation in town. Moreover, I am absolutely convinced that the nature of search spaces and of functions is such that, with the growth of the search space, the complex functional spaces become inevitably so sparse that they evade any random search. That can be already demonstrated for language. For the protein space, there are still difficulties because there are many things about which we have not enough information, but I am confident that the situation is the same. Moreover, I am absolutely convinced that no algorithm (simpler than the output) can generate both descriptive information (language) and prescriptive information (function) which is really complex. I am confident that this too will be demonstrated for all search spaces and functional spaces. In the meantime, in the absence of any falsification, the empirical validity of the procedure to detect design wherever it can be validated remains true. IOWs, for complex functional information design remains the best explanation in town. gpuccio
Dionisio: I am still not sure that I understand exactly Gary S. Gaulin's position. I have downloaded his 40+ pages document from his site, but I could not yet read it all (it is not an easy read). However, while I find some of his concepts interesting, for example the concepts of different "levels" of intelligence, you know what my position is: no consciousness = no intelligence and no design. gpuccio
I’m fairly certain we will never come to an agreement on this philosophically, You just don’t have a place in your worldview for the Platonic forms that exist outside the cave.
If I may butt in, Yes, It is philosophical, so I don't see what you mean by computing something philosophical. What is the 'game' you keep referring to ? Me_Think
fifthmonarchyman, Dionisio and all the others: I apologize, but the discussion about post-specifications, and my little free time, have prevented me from following with the due attention the parallel debate here, which is extremely interesting. I will try to catch up, as well as I can. :) gpuccio
ZAc said, Yet, you keep forgetting things, even things from your long term memory. I say, Yes and the things I forget are not a part of me. In other words I'm still the exact same unitary me after I forget and before I had the experience in the first place. You say, Everything we know about cognition indicates that the assumptions in the paper you cite are faulty. I say, Are you really so sure that the authors are so blatantly stupid? Is it even possible that you are misunderstanding the paper? You say, And your misuse of the term non-computable only adds to your confusion. I say, My definition is merely a more constrained version of the mathematical definition. The standard definition leaves room for the oracle as part of the algorithm. For clarity I simply remove that possibility you say Shakespeare can’t calculate one-third to the last decimal place either. I say, If you assume immortality he can ;-) Seriously Shakespeare precisely because he is not-computable can decide that he has calculated long enough for his own subjective purposes. Zac I'm fairly certain we will never come to an agreement on this philosophically, You just don't have a place in your worldview for the Platonic forms that exist outside the cave. The good news is we now after 2500 years have a way forward. If you think you can produce these things algorithmically have at it I'll plug it into my game and see if it fools the observer infallibly. Or if you like have a go at gpuccio's mirror image challenge. let the science begin peace fifthmonarchyman
fifthmonarchyman: Yes, and the experience that is lost is not integrated into the information that is the unitary consciousness known as Fifthmonarchyman!!!! Yet, you keep forgetting things, even things from your long term memory. Everything we know about cognition indicates that the assumptions in the paper you cite are faulty. And your misuse of the term non-computable only adds to your confusion. Zachriel
Yes, and the experience that is lost is not integrated into the information that is the unitary consciousness known as Fifthmonarchyman!!!!
There is short term and long term memory, so even memories which are stored need not be stored for long. Is there a short term integrated information and long term integrated information ? Me_Think
ZAc says. There is a huge amount of loss between your experience and your consciousness. I say, Yes, and the experience that is lost is not integrated into the information that is the unitary consciousness known as Fifthmonarchyman!!!! More later, peace fifthmonarchyman
fifthmonarchyman: exactly! an algorithm can not compute 1/3 exactly. You can build an algorithm to approximate 1/3 but it will grind on forever with out a non-computable Turing Oracle directly or indirectly yelling STOP here. So most of the rational numbers are not "computable". That's not what the word means, and it's not how it is used in the paper you cited. fifthmonarchyman: That is exactly the reason why your Shakespeare emulator is doomed. Shakespeare can't calculate one-third to the last decimal place either. fifthmonarchyman: No the paper assumes lossless information ingratiation. My memory may be lossy my conscious unitary self is lossless. Lossless information integration means lossless information. The input is preserved in its entirety. fifthmonarchyman: Do you really think the paper would assume something that is obviously false or is it possible that you have misunderstood it? They are making a claim that is demonstrably contrary to how human consciousness works. fifthmonarchyman: The paper is not about how people learn it is about how consciousness arises (information is integrated). The paper is clear. When they say “the knowledge of m(z) does not help to describe m(z’), when z and z’ are close”, they are making a claim that is demonstrably contrary to how human consciousness works. fifthmonarchyman: Actually they define it as lossless integrated information to be precise. Which is demonstrably contrary to how human consciousness works. fifthmonarchyman: According to the paper I am simply the sum total of all my experiences and proclivities compressed losslessly into a unified conscious whole. Which is clearly not the case. There is a huge amount of loss between your experience and your consciousness. Zachriel
Zac Maybe another illustration will help. Suppose I build a Zachriel emulator that has all your memories and proclivities combined in a grand but lossy algorithm. Would I be ethically justified if I decided to deactivate the supposedly redundant original with a bullet to the brain? Think about it peace fifthmonarchyman
ZAC says By your definition, one-third is non-computable. I say, exactly! an algorithm can not compute 1/3 exactly. You can build an algorithm to approximate 1/3 but it will grind on forever with out a non-computable Turing Oracle directly or indirectly yelling STOP here. That is exactly the reason why your Shakespeare emulator is doomed. you say, The paper assumes lossless memory, which is contrary to fact. I say, No the paper assumes lossless information ingratiation. My memory may be lossy my conscious unitary self is lossless. That description precisely conform to my own subjective experience. You may have a different experience but I doubt it. Think about it for a minute Zac. Do you really think the paper would assume something that is obviously false or is it possible that you have misunderstood it? YOu say, It also assume “the knowledge of m(z) does not help to describe m(z’), when z and z’ are close”, which is directly contrary to how people learn. I say The paper is not about how people learn it is about how consciousness arises (information is integrated). Please try to grasp this it is a powerful insight. zac says, They define consciousness as integrated information. I say, Actually they define it as lossless integrated information to be precise. According to the paper I am simply the sum total of all my experiences and proclivities compressed losslessly into a unified conscious whole. Can you grasp that? peace fifthmonarchyman
A targeted search is an intelligent design mechanism. Zachriel doesn't understand this and that causes problems with the discussion. Joe
Z: Selections are made from the random mutations. To be more precise, selection is made on the entire string only, with strings more closely resembling the target making it to the next generation. Zachriel
fifthmonarchyman: by “not computable” I mean not able to be produced by finite Turing machine in a finite time. That's not what it means. It means you can't compute an arbitrary nth digit in finite time. By your definition, one-third is non-computable. fifthmonarchyman: please don’t feign obtuseness. Please directly address the points we raised. The paper assumes lossless memory, which is contrary to fact. It also assume "the knowledge of m(z) does not help to describe m(z’), when z and z’ are close", which is directly contrary to how people learn. fifthmonarchyman: the title of the paper is “Is Consciousness Computable” not “Is human memory computable?” They define consciousness as integrated information. Zachriel: The mutations are not guided, but random. Mung: The two are mutually exclusive? By the usual definition. Mung: The mutations are guided, and you already admitted as such. Are you playing with words? The mutations are random. Selections are made from the random mutations. Mung: the mutations are guided in that they are selected from a specified set and inserted as a replacement at a specified location. That's not quite correct. Mutations are not inserted at specified locations, rather mutations can occur at any position, even those positions which already have the correct letter. Mung: Perhaps you’d like to post your own version of the program that eliminates the designed aspects? Be happy to, but first let's resolve you misunderstanding of how the program works. Zachriel
gpuccio, Please, can you help me to understand the interesting concepts Gary S. Gaulin wrote to me? See post #690. Thank you. I noticed you have had a very interesting discussion with him, so perhaps you understand his point? I looked for the term "molecular intelligence" as per Gary's suggestion, but did not find the answer to my questions in post #690. However, maybe I looked in the wrong literature? You seem to have developed very practical skills to explain difficult things in easy to understand terms. Can you explain how this concept of 'molecular intelligence' explains how so many proteins are made available in the right places, in the required amounts, at the right time, for so many biological mechanisms, like the asymmetric mitotic (for example)? How are those mechanisms established and implemented to begin with? How did they evolve? Mile grazie mio caro Dottore! :) Dionisio
gpuccio, Wow, Your challenge is the mirror image of my "game"! I'm not a professional mathematician but this sort of weird equivalence tells me we are on to something. peace fifthmonarchyman
DNA_Jock at #646: I apologize for not answering earlier. I was rather busy. You say: “No. The wall (i.e. the set of all possible sequences, and their corresponding functions) existed before the bullet struck. ALL targets are human constructs.” Emphasis mine. So, you are admitting that the functions exist before. How we define them is a human construct. Well, that is exactly my point. You prefer to call "target" the human definition. I prefer to call "target" the existing function. What's the difference? The important point is: The target which already existed (the function" limits and constrains the human target (the definition of the function and of its level). That is my point. And that is where you are wrong. We can choose different "function-targets". We can define different functions which already existed. But we cannot invent a function. And we can define different thresholds for a function. But, as I said, we are limited by the observed effect size. We cannot make out threshold so high that the observed effect size is no more in the target. That is the foundation of the common practice of defining the rejection region for an effect size "at least x", where x is the observed effect size. IOWs, if we are defining a "definition-target" for a "function-target", and if the "function-target" is made of concentric circles, we can choose the threshold as "the three innermost circles" only if the hit is in them, otherwise we must extend out target so that it includes the hit, and therefore we make it bigger. That's why your point that we can make our target "arbitrarily small" is simply wrong. IOWs, we can choose different "function-targets" for our "definition-target", but we are constrained by the existing "function-targets". And we can choose arbitrary thresholds in our definition, but we are constrained by the observed effect size. Now, please look at my post #641:
1) How do we choose the function? It’s simple. Provided that a) is satisfied (the function was already existing and was definable even before the observation), we can define ant function we like (obviously, we will chose some function which is implemented by the observed object). There is no problem here. Why? Because in my dFSCI procedure I have explicitly said that any observer can define any function for any object, including many different functions for the same object. dFSCI is computed for the individual defined function. Any function implying dFSCI allows to infer design, if observed in an object. IOWs, it is sufficient to be able to find one function which, explicitly defined, implies dFSCI to infer design. So, we have complete liberty in choosing the target, because the target itself is not so important: what is important is the complexity linked to it. 2) How do we define the level of function necessary to consider it present? That is simple too. It’s what we always do in Fisherian inference. The level is: “at least the level of function observed in the object”.
So, it is true that we can choose different "function-targets", but it is equally true that what we are looking for is any target which has extremely high functional complexity. Any such target is fine. The important points are: a) It must correctly describe an existing "function-target". b) It must have extremely high functional complexity. And, if we use the general principle of defining the threshold as "at least x", where x is the observed effect size, we have no arbitrary choice in the definition of the threshold: we are completely constrained. So, your point that we can make any target and make it arbitrarily small is simply a fallacy. I will let to you to find a good name for it. :) If it were not a fallacy, it should be extremely easy to find false positives for my procedure: just paint a target on a random outcome, and make it arbitrarily small (see my final challenge). You say: "Example # 2 is a killer: Adenyl kinase catalyzes the reaction : 2ADP AMP + ATP It does NOT catalyze the reaction : 2 ADP 2 AMP + 2P If it did, that would be fatal. Literally." OK, so you have two different "definition-targets": a) Any enzyme which catalyzes the reaction : 2ADP AMP + ATP b) Any enzyme which catalyzes the reaction : 2ADP AMP + ATP and does NOT catalyze the reaction : 2 ADP 2 AMP + 2P The second definition is obviously a subset of the first, and it has explicit functional motivations (as you say, in a biological context, a) would include fatal objects). Please, note that in neither definition do we need any "range of Km’s" derived from the contingency of a specific molecule. They are independent functional definitions, both perfectly valid. The correct questions are: 1) Which is more useful in our model? The easy answer is b), for the reasons you yourself described. 2) Where is our hit? It is certainly in a), but is it also in b)? Yes, it is. So, we can certainly use b) as a good definition. But even if we had used a), and computed an extremely high dFSCI, a design inference could have been made just the same. Because, to be part of b), a molecule must be part of a). Just as a sequence has to be made of English words in order to have good meaning in English. The problem of ATP synthase requires a longer discussion, and I will not discuss it in detail in this post for brevity. Just a hint: my original argument was about two chains which are part of a bigger molecule. Those two chains, in themselves, have no ATP synthase activity. So, I had not detailed really well my "function definition", because at that level of discussion my focus was mainly on the "conservation/complexity" aspect. There is a problem of Irreducible Complexity here. Now, the final challenge. Let's go back to my original example of language. After all, your objections are methodological, and if they are true they must apply to language too. So, a challenge to you, in two parts. 1) Here is a 600 character sequence, which I have just generated by an online random character generator (no idea how good, truly random, it is, but I think it will do). So, here is the sequence: l.qvff..stscilrriegakbb oprzbdfbnguio.h odjjsvamrcxly mlbtihxqotillxqtifwfyalxc,vbjckobzdrjvyo.oo ,evitbhnwhyixjmyakripxjrylxcqebyeuprpipd,.yvtfbrl,qqqcuqqsmviuonqeyx eeyumkx, igzelxs hqpyriinyflyvpvblcrvbiljnk edhcnvycmikfwa,ghwuxspycpwn.mbqrcbcr w,iiqhwsd.. wcfn wuntehhj.y.sdweze.kjosyyobnsmryvw.xgyigvng nf cskcmguvl l d.eamqet.bgs,fyrcul.nq,xjexzhed.,zbigpdwssucer,ugavop.vowwz. cqmegaylpvj,khlfubz,ptt,wjbdgtuibuytprztqewhhadjhbu mssikwkqwqucxbzzqs kbjbnikehnviqdykgmjwyllhyasivg uexccpbcyowyv.vgladhihjnytzd ujnmoypvu,,blvymbxaxpx.jaoe,y.whwmib.nbfmrcsbpm,asyqgqdegs,fejv,jtu.cl i.grn qfsicb.w Now, I ask you to: 1a) Define any "definition-target" you like for that sequence. Please remember, you must not use the specific contingency in the sequence (the specific characters). 2a) Make it arbitrarily small, so that the result has extremely high functional complexity (1000 bits will do). OK? 2) Second part of the challenge. Maybe you are not happy with my sequence. Maybe it is one of the rare sequences for which it is difficult to paint a target and make it arbitrarily small. So, I will give you complete freedom in the second part of my challenge: Please, show me any sequence generated in a truly random way, for which you can paint a definition-target and make it arbitrarily small, so that it exhibits 1000 bits of functional information according to the defined function. OK? Good luck. A final disclosure: my sequence is really random (provided that the internet generator worked well). I did not design it in any way. I took the first one which came. I just decided the set of characters, including space, comma and period, and the length of the sequence (600 characters). No other intervention. So, if you conclude by my procedure that it is a negative, it will be a true negative. For a true positive, I maintain the Shakespeare sonnet. Or any post of sufficient length in this thread. gpuccio
668 Gary S. Gaulin I apologize for misspelling your name. My mistake. Thank you for the explanation. You wrote:
That is all a part of memory Addressing, which in a cell nucleus is arranged in (look up online and see theory for Molecular Intelligence) “chromosome territories” that connect from one to another in a circuit where sensory molecules flow from place to place (not diffuse through entire cell as was once thought). This is caused by complex self-organization, from the way individual molecules always self-assemble according to surrounding conditions.
I still don't understand how we got these biological systems where different proteins are in the right location, in the right amount, at the right time, for so many different situations. Do you have any example of a step-by-step description of the process that created the complex mechanism researchers observe these days? Thank you. Dionisio
Zachriel:
The mutations are not guided, but random.
The two are mutually exclusive? Zachriel:
The mutations are not guided, but random.
The mutations are guided, and you already admitted as such. Here's what you wrote:
Actually, it’s not inevitable. It depends on population size and mutation rate.
In addition to mutation rate, the mutations are guided in that they are selected from a specified set and inserted as a replacement at a specified location. Perhaps you'd like to post your own version of the program that eliminates the designed aspects? Mung
Zac says That is not how human memory works. I say the title of the paper is "Is Consciousness Computable" not "Is human memory computable?" Here comes the frustration again. Break time Peace fifthmonarchyman
Zac says Pi is computable. Again for probably the 7th time by "not computable" I mean not able to be produced by finite Turing machine in a finite time. You can use an algorithm to calculate Pi but the program will never halt. You said, Shakespeare yesterday has different memories than Shakespeare today. I say, Yes but he is the very same Shakespeare yesterday and today. To quote gpuccio yet again. Consciousness is unitary, because the I which perceives is always the same subject. The things it perceives vary a lot, but it is the same subject who perceives them. You say, That is directly contrary to the proof you cited. Did you read the paper? I say, please don't feign obtuseness. What I'm saying is perfectly consistent with the paper's proof that you don't see that might be due to my poor explanation skills but I doubt it peace fifthmonarchyman
Er. Pi is computable. Zachriel
fifthmonarchyman: Circles are real things and they are not computable Huh? Do you mean pi? Pi is computable? Zachriel
fifthmonarchyman: Suppose you developed a emulator that has all of Shakespeare’s memories except one. Would the emulator be Shakespeare? Of course not that is the point the authors are trying to make. Shakespeare yesterday has different memories than Shakespeare today. We won't say fewer because human memory is not lossless, as assumed in the paper you cited which defined an integrating function as “the knowledge of m(z) does not help to describe m(z'), when z and z' are close”. That is not how human memory works. fifthmonarchyman: Quite the contrary it is the proof!!!! That is directly contrary to the proof you cited. Did you read the paper? Zachriel
Zac said, They define integrating function such that “the knowledge of m(z) does not help to describe m(z?), when z and z? are close”, which is exactly contrary to how people learn and developing understanding. I say, The paper is discussing unitary consciousness, the "I" at the center of it all. Suppose you developed a emulator that has all of Shakespeare's memories except one. Would the emulator be Shakespeare? Of course not that is the point the authors are trying to make. This contention is very intuitive We all know it to be true!! You say, When people look at something circular, they don’t memorize every detail. They note it is a circle, and take note of a few distinctions. They generalize it, which defeats the proof the paper provides. I say, Quite the contrary it is the proof!!!! If you show me an object that is generally circular and claim it is an "ideal" circle I will laugh and point out all the things in the object that don't conform to what a circle is. "Ideal" Circles are real things and they are not computable, That is the point peace fifthmonarchyman
Mung: It shows that guided mutation plus guided selection is not equivalent to a random search. The mutations are not guided, but random. Not sure that adding "guided" to selection adds anything. The strings are selected based on their fit to the target. If we scrambled all the letters each time, we would also be looking for a fit to the target. The difference between the two algorithms is what is of interest. Zachriel
Zachriel, It shows that guided mutation plus guided selection is not equivalent to a random search. Whoever thought otherwise? Mung
fifthmonarchyman: If a thing is not computable it necessarily contains infinite Kolmogorov complexity. We already pointed out several problems with their proofs. They define integrating function such that "the knowledge of m(z) does not help to describe m(z?), when z and z? are close", which is exactly contrary to how people learn and developing understanding. "An integrating function’s output is such that the information of its two (or more) inputs is completely integrated." But we know from simple observation that people integrate information incompletely. In other words, information is always lost during the process of learning. People integrate new knowledge within the parameters of what they already know. fifthmonarchyman: It is a safe bet that we will never agree philosophically after all Plato wrote about this almost 2500 years ago and there are still people in the cave who aren’t convinced that the forms actually exist. Good example. When people look at something circular, they don't memorize every detail. They note it is a circle, and take note of a few distinctions. They generalize it, which defeats the proof the paper provides. Zachriel
Zac says, Nor do we see where it claims infinite information. I say If a thing is not computable it necessarily contains infinite Kolmogorov complexity. This is not rocket science. Zac says, integration does not imply perfect integration. I say, It is a safe bet that we will never agree philosophically after all Plato wrote about this almost 2500 years ago and there are still people in the cave who aren't convinced that the forms actually exist. The cool thing is that ID and IIT have provided a scientific way forward. If you believe that IC can be computed all you have to do is program an algorithm plug it into my "game" and see if it infallibly fools the observer. let the science begin peace fifthmonarchyman
fifthmonarchyman: I concede that if unitary consciousnesses (or any IC) does not in fact exist then it is no problem for algorithms. That wasn't the point we raised. Nor do we see where it claims infinite information. fifthmonarchyman: But we all know that unitary consciousness (IC)exists!!!! Neurologists are not so sure as to the degree of integration. fifthmonarchyman: Consciousness is unitary, because the I which perceives is always the same subject. 'No man ever steps in the same river twice, for it's not the same river and he's not the same man.' — Heraclitus In any case, integration does not imply perfect integration. The proofs provided assume perfection in integration and memory, both of which are contradicted by the evidence. Zachriel
ZAC, I think we have already covered this. I concede that if unitary consciousnesses (or any IC) does not in fact exist then it is no problem for algorithms. But we all know that unitary consciousness (IC)exists!!!! gpuccio had a knock out argument way back at 543 it is worth re-posting in full. quote: Consciousness is unitary, because the I which perceives is always the same subject. The things it perceives vary a lot, but it is the same subject who perceives them. Reality check: would you be indifferent if you could know in advance that in 3 years you will suffer? No. Because you know well that it will be you to suffer. It’s not important that in the meantime your personality could be different, that you can forget many things that are important for you today, and so on. You know that it is you who will be ther. The same subject. On the other hand, we are all too ready to be indifferent to the suffering of perfect strangers (too much, I would say). If consciousness were only a bunch of information which constantly changes, that unity of the I, which is the reason itself of all that we do, would make no sense. end quote: If that does not convince you I have other arguments let me know peace fifthmonarchyman
fifthmonarchyman: http://arxiv.org/abs/1405.0126 Thanks. Several problems with the assumptions in the paper. First, they assume memory is lossless, which, gee whiz, is obviously not the case. Second, while there is certainly integration in how the brain works, it's not perfect integration. With regards to compression; even given perfect compression, it doesn't mean new information can't be integrated. You don't use surgery, you use learning. Furthermore, any finite digital system is computable as there is a limit to the number of possible states. However, an analog neural net can exhibit non-computable behavior if its parameters are given infinite precision. Nothing magical about that whatsoever. Here's a simple model of the brain that is computable. 1. Think about a problem. 2. If you find a solution, utter it. 3. If you don't find a solution, make an educated guess. 4. If the guess turns out wrong, say "Oops!" End. Zachriel
zac asks Huh? How do you prove that? If the proof is somewhere in the 600+ comments, a link will do. Thanks. I answer, http://arxiv.org/abs/1405.0126 Your welcome ;-) Peace fifthmonarchyman
fifthmonarchyman: Of course all the while I would be content with the notion that no mater how low the Kolmogorov complexity in the individual parts of an IC system turn out to be the complexity of the whole is still mathematically proven to be infinite. Huh? How do you prove that? If the proof is somewhere in the 600+ comments, a link will do. Thanks. Mung: Dawkins designed a program in which the outcome was inevitable. Actually, it's not inevitable. It depends on population size and mutation rate. Mung: But it was not designed to show how real evolution works. So what was the point? It shows that mutation plus selection is not equivalent to a random search. Zachriel
Me_Think:
He [Dawkins] wrote a program to generate the sentence from alphabets and space. It took 40 generations to get the sentence by ‘Natural Selection’ algorithm.
Dawkins designed a program in which the outcome was inevitable. Whether the algorithm he employed was "a natural selection algorithm" is disputed. Why did he not choose a more complex phrase? Me_Think:
He [Dawkins] also noted that the “experiment is not intended to show how real evolution works, but does illustrate the advantages gained by a selection mechanism in an evolutionary process”. He didn’t say his program detects design.
The experiment was intended. It was designed. But it was not designed to show how real evolution works. So what was the point? Me_Think:
He [Dawkins] also noted that the “experiment is not intended to show how real evolution works, but does illustrate the advantages gained by a DESIGNED selection mechanism in a DESIGNED evolutionary process”.
Fixed it for you. Me_Think:
He [Dawkins] also noted that the “experiment is not intended to show how real evolution works, but does illustrate the advantages gained by a selection mechanism in an evolutionary process”. He didn’t say his program detects design.
The point Dawkins is trying to make is that "unguided evolution" can produce "the appearance of design." It follows that there is a way to detect design. Dawkins set the "design threshold" incredibly low. Not credible. Mung
Joe:
CSI is Shannon-like.
Since the memory system of the model I use is based on a binary RAM chip: Shannon works fine for final numbers. One thing to remember though is RAM Data actions are amino acid sequences, not 2 bit per letter A,C,G & T bases that can be (sensory molecule) controlled to produce a variety of products. Also the codons may encode Confidence levels this way. Larry Moran conveniently provided the data needed to help account for each DNA-RAM Data location needing to store both a gene region confidence level and protein sequence, otherwise the system would not be intelligent and control when guesses are needed and taken (as in the Darwinian model): http://sandwalk.blogspot.com/2014/08/another-stupid-prediction-by.html?showComment=1409173698672#c7730844542336163086 Gary S. Gaulin
DNA_Jock said which makes your subtracting one Kolmogorov complexity from another challenging, to say the least. I say, Not sure what you are getting at here. It must be my "slim connection to reality" ;-). I would think that the contingency of my measure would make my approach more scientific than other methods. I could posit a hypothesis that the bacterial flagellum exhibited at least (X bits) + Phi information. You could counter that it actually contained (X bits-1) + Phi. We could then test our various claims. To falsify my hypothesis you would just have to produce a simpler algorithm, If you were feeling really bold you could challenge the claim that the BF contained integrated information (Phi) at all. Both sides would be happy. Bring on the science!!!! Of course all the while I would be content with the notion that no mater how low the Kolmogorov complexity in the individual parts of an IC system turn out to be the complexity of the whole is still mathematically proven to be infinite. ;-) The main reason that measuring the information in the individual pieces is important is because it's possible that very simple IC structures could come about by chance. If the individual parts have enough complexity we can rule out random processes. This is where gpuccio's metric might be a better way forward. Kolmogorov complexity in the individual parts or Functional complexity seem to be mysteriously equivalent. either one can serve as the C in CSI, On the other hand Phi is a better S. I'm open to correction though. Peace fifthmonarchyman
CSI is Shannon-like. Joe
fifthmonarchyman:
my calculation is forvever contingent on whether I have actually found the shortest algorithm for each part when the fact is we may never know if there is a shorter one out there somewhere
That is my understanding too, which makes your subtracting one Kolmogorov complexity from another challenging, to say the least. Phi looks to me like a Shannon-like measure of information, while CSI is, err...Bueller? Bueller? DNA_Jock
Dionisio:
Gary S. Gatlin and fifthmonarchyman Have read some of your interesting comments. Thank you for writing them. When you take a break from your ongoing discussion on the measuring of the dFSCI of individual strings, could you comment on how to explain that the different proteins have to be at the right time, in the appropriate location, in the correct amounts? And that there are gazillion different combinations of those protein stories, scenarios, choreographies and orchestrations? And the proven fact that minor variations of the arrangements may cause defects and problems? How can we explain all that? How can we measure it? Any idea? Thank you! :)
Thanks for the compliment, Dionisio. Close enough on the spelling! That is all a part of memory Addressing, which in a cell nucleus is arranged in (look up online and see theory for Molecular Intelligence) "chromosome territories" that connect from one to another in a circuit where sensory molecules flow from place to place (not diffuse through entire cell as was once thought). This is caused by complex self-organization, from the way individual molecules always self-assemble according to surrounding conditions. To go into more detail than that right now I will need a break from my day job. But I will have more time this weekend, in case you have more questions. That was a good one. Gary S. Gaulin
Dang I keep forgetting stuff my calculation is forvever contingent on whether I have actually found the shortest algorithm for each part when the fact is we may never know if there is a shorter one out there somewhere What we do know however is that the total Phi/CSI/Kolmogorov complexity is infinite no matter what the complexity of the individual parts turns out to be peace fifthmonarchyman
Me-thinks says Interesting can you show how you calculate the flagellum complexity ? Do you convert CSI to Kolmogorov complexity ? I say, That is what I'm here for. I'm trying to get the calculation straight in my head. So far I 1) Represent a artifact numerically 2) build an algorithm for each individual part 3) measure the Kolmogorov complexity in each of those algorithms. That only gives me a low ball number X. Now I can say the Phi/CSI/Kolmogorov complexity in the artifact is greater than X but that is as far as I can take it right now. I'm definitely enjoying the challenge though. It's been tons of fun peace fifthmonarchyman
Let me give another example Take the conscious individual known as Shakespeare. Suppose I deconstruct all of his memories and proclivities and measure the information content in each of those separately I will have a finite amount of information. Next suppose I add all those numbers together. The information in Shakespeare is greater still because His consciousness is unitary. It turns out that that additional information is infinite when measured as Kolmogorov complexity. Shakespeare Is IC/He contains integrated information Phi. That is why Zac's Shakespeare emulator is doomed. You can't compute Shakespeare or the bacterial flagellum algorithmically. Now in one sense Shakespeare has more integrated information than the bacterial flagellum but this simply because he contains more parts. In another profound sense both have the same Phi (infinity plus 7) is equivalent to (infinity plus 7 million). I know I'm belaboring this point but this is a big deal and I'm hoping you will understand me Peace fifthmonarchyman
fifthmonarchyman @ 663
If I disassemble each of it’s various parts and measure the complexity of each part individually I find I have a finite amount of information present in each one. If I add up the total information in each of those parts the bacterial flagellum as a whole has more information.That is what we mean when we say that it is IC It turns out that this quantity of additional information (phi) is infinite when measured as Kolmogorov complexity.
Interesting can you show how you calculate the flagellum complexity ? Do you convert CSI to Kolmogorov complexity ? Me_Think
Me thinks said I am lost. How can a reminder be infinite if you are subtracting from a finite number? I say who said the complexity in the original string was finite? The whole point of my "game" is to show that the original string contains infinite Kolmogorov complexity. Each of the components of the string on the other hand contain a finite amount of Kolmogorov complexity. If you subtract these finite amounts of complexity from the infinite complexity in the whole you are still left with infinite complexity. That is precisely what I mean when I say the string in noncomputable I hope that helps. Let me use a practical example to illustrate Take the bacterial flagellum. If I disassemble each of it's various parts and measure the complexity of each part individually I find I have a finite amount of information present in each one. If I add up the total information in each of those parts the bacterial flagellum as a whole has more information.That is what we mean when we say that it is IC It turns out that this quantity of additional information (phi) is infinite when measured as Kolmogorov complexity. This is a profound insight. Please think it over and let me know if I need to clarify further Peace fifthmonarchyman
5th @ 661 I am lost. How can a reminder be infinite if you are subtracting from a finite number? Me_Think
DNA_Jock said I’m sorry 5mm, but your connection to reality is slim, at best. I say Understood, Sorry you feel that way. Hope you find the sanity of others here more to your liking I appreciate your patience. Me_Think says, If Phi doesn’t differ with components of a system, what is the point of measuring it in different systems? I say, You don't measure Phi in different systems. You measure the Kolmogorov complexity in the individual parts of a system and subtract that from the total. If the remainder is infinite you have an IC system (Phi). At least that is what I'm attempting to do. peace fifthmonarchyman
"Let's call the whole thing off" I'm sorry 5mm, but your connection to reality is slim, at best. :) DNA_Jock
fifthmonarchyman @ 655,
[this is for context] Phi is a simply a constant that corresponds to the amount of information that exists in a system over and above the information contained in the sum of it’s components. [end context] I say, Yes apparently!!!! it equals infinite Kolmogorov complexity in all system that are IC.
If Phi doesn't differ with components of a system, what is the point of measuring it in different systems? Me_Think
probably just me fifthmonarchyman
keiths I personally find block quotes distracting and a hassle.
fifthmonarchyman
fifthmonarchyman, Why not use blockquote tags? It would make your comments easier to read. keith s
DNA_Jock pi isn’t a measurement of the ratio. It is the ratio. I say, When you phrase it like that Phi (integrated information) isn’t a measurement of the additional information. It is the additional information. you say, e isn’t a measurement, it’s the solution to the equation d/dx(e^x) = e^x I say, Phi isn’t a measurement it’s the solution to the equation Total Information -information in individual components= Phi You say, Making it a metric. It’s a constant? For all systems? Weird. I say, Yes apparently!!!! it equals infinite Kolmogorov complexity in all system that are IC. you say, As to what is and is not Turing-computable, color me uninterested. I say, Yet you are right now posting on a blog decated to the exact proposition that some things in nature are not reducible to algorithms like RM/NS. That seems odd peace fifthmonarchyman
fifthmonarchyman
It is a metric in the same sense as Pi is a measurement of the ratio of a circle’s circumference to its diameter and e measures the base rate of growth shared by all continually growing processes.
pi isn't a measurement of the ratio. It is the ratio. e isn't a measurement, it's the solution to the equation d/dx(e^x) = e^x
Phi is a simply a constant that corresponds to the amount of information that exists in a system over and above the information contained in the sum of it’s components.
Making it a metric. It's a constant? For all systems? Weird. As to what is and is not Turing-computable, color me uninterested. DNA_Jock
Hey DNA_Jock. Lets start with the fact that the paper that I have repeatedly linked to demonstrates that Phi is non-computable just like Pi and e in the sense that it can not be produced by a finite Turing machine ruining a finite length of time. you said, My understanding of Integrated Information is that it is a metric, so any specification of Phi would require a human-determined threshold. I say, It is a metric in the same sense as Pi is a measurement of the ratio of a circle's circumference to its diameter and e measures the base rate of growth shared by all continually growing processes. Phi is a simply a constant that corresponds to the amount of information that exists in a system over and above the information contained in the sum of it's components. Just a word of caution I'm only an interested observer not a participant in the discussions surrounding this developing theory peace fifthmonarchyman
fifthmonarchyman, My understanding of Integrated Information is that it is a metric, so any specification of Phi would require a human-determined threshold. But my understanding may be defective. Please make your argument that "Phi is exactly the same sort of thing as pi and e", and I'll do my best to assess the merits of the argument... DNA_Jock
from here http://en.wikipedia.org/wiki/Integrated_information_theory quote: In a system composed of connected "mechanisms" (nodes containing information and causally influencing other nodes), the information among them is said to be integrated if and to the extent that there is a greater amount of information in the repertoire of a whole system regarding its previous state than there is in the sum of the all the mechanisms' considered individually. In this way, integrated information does not increase by simply adding more mechanisms to a system if the mechanisms are independent of each other. end quote: We live in interesting times indeed peace fifthmonarchyman
When will I learn that you can't cut and paste Greek letters on UD? ;-) fifthmonarchyman
DNA_Jock The fact remains that ALL specifications are human constructs. As you have demonstrated. (As I noted before, I might make an exception for pi and e.) I say, Would you make an exception for Integrated Information Phi (?)? If not why not? There are good arguments to be made that Phi (?) is exactly the same sort of thing as pi and e. Phi (?), when it is not lossy is synonymous with Irreducible Complexity. You remember IC don't you it is the grandaddy of all specifications according to ID peace fifthmonarchyman
Gary S. Gatlin and fifthmonarchyman Have read some of your interesting comments. Thank you for writing them. When you take a break from your ongoing discussion on the measuring of the dFSCI of individual strings, could you comment on how to explain that the different proteins have to be at the right time, in the appropriate location, in the correct amounts? And that there are gazillion different combinations of those protein stories, scenarios, choreographies and orchestrations? And the proven fact that minor variations of the arrangements may cause defects and problems? How can we explain all that? How can we measure it? Any idea? Thank you! :) Dionisio
DNA Jock- unguided evolution cannot account for any ATP synthase. It cannot account for any complex protein structure. And we understand that bothers you. Joe
gpuccio wrote:
DNA_Jock:
“No. The wall (i.e. the set of all possible functions) existed before the bullet struck. ALL targets are human constructs.”
No. The wall is the set of all possible sequences (the search space). Each bullet is a new tested state. The targets are the functions in the search space.
No. You are making a distinction without a difference. As you have already noted (@627), “It is objectively true that a protein whith a certain AA sequence can act as an enzyme, It was true even before such a protein existed, because it is a necessary consequence of the laws of biochemistry.” So there is a fixed sequence-to-function mapping. And you have gone to great pains to make it clear that your specification refers to the functionality of the protein (as it must), and NOT its sequence (which we have agreed would be obviously invalid). So you are painting on the surface of functionality, you are defining your target in terms of function, NOT sequence. As you must. If you cannot see this, then I will happily re-state my objection to your original statement:
My point is that both the bullethole (the object we observe) and the target (its function) existed before our observation. The target (function) has not been “painted” around a bullethole which hit a homogeneous wall, where no target (no possible function) was present. The target existed before the bullethole hit it.
Thus:
“No. The wall (i.e. the set of all possible sequences, and their corresponding functions) existed before the bullet struck. ALL targets are human constructs.”
We agree that the section of wall existed before the bullet struck. You continue to claim, despite your own demonstration otherwise (“traditional” ATP synthase, “if the cousin won, I would expand the target”), that the target was pre-existing. This is obviously wrong. Biochemistry: You are calling foul on my specification for ATP synthase (the traditional one, btw) because I specified a range, rather than a simple inequality. But the reactions that an enzyme does NOT catalyze are quite as important as the reactions that it does catalyze: I will give two examples off the top of my head. Hexokinases exist in low-Km and high-Km forms (the high-Km form is referred to as “glucokinase”). Glucokinase’s “inefficiency” is essential to the regulation of catabolism in mammals. Any specification of glucokinase’s function MUST define a range of Km’s. Example # 2 is a killer: Adenyl kinase catalyzes the reaction : 2ADP < --- > AMP + ATP It does NOT catalyze the reaction : 2 ADP < --- > 2 AMP + 2P If it did, that would be fatal. Literally. I chose these examples because of their obviousness, but the same logic applies to the specification for every enzyme I can think of. The fact remains that ALL specifications are human constructs. As you have demonstrated. (As I noted before, I might make an exception for pi and e.)
But that is exactly what must not be done. He is using the contingency in a specific observed ATP synthase to define the function, which is eactly what we showed cannot be done, when we discussed a).
And it is also exactly what you did when you changed your specification from “ATP synthase” to “traditional ATP synthase”. The sad thing is that I don’t think you realize the equivalence. It is an unavoidable problem with all of your specifications. Bob is going to try to explain this to you. Gook luck, Bob. DNA_Jock
Me Think:
CSI is circular and is of little use as a design detector.
CSI is NOT circular and it works very well as a design detector. Only willfully ignorant people say that CSI is circular.
You need to eliminate all Darwinian and Natural procedures (The ‘H’ in CSI formula) before calculation
That is not the CSI formula, it is the specification formula. Also it is up to YOU and your ilk to provide H and you have failed, miserably and you want to blame us. That is beyond pathetic. Joe
fifthmonarchyman:
When the Kolmogorov complexity in the algorithm exceeds the complexity in the original string you can infer a design.
This Kolmogorov complexity is very similar to what I was thinking. Fascinating... Gary S. Gaulin
Me_think said Kolmogorov complexity is just descriptive complexity and more to do with computation resource. It would be difficult to equate CSI with it. I say, Remember my "game" is all about the length of algorithm required to mimic a particular string closely enough to fool an observer. That sort of thing is where Kolmogorov complexity shines. When the Kolmogorov complexity in the algorithm exceeds the complexity in the original string you can infer a design. Peace fifthmonarchyman
fifthmonarchyman @ 638
I measure the complexity in a different way (Kolmogorov complexity) but I am convinced that the two methods are equivalent for our purposes
Kolmogorov complexity is just descriptive complexity and more to do with computation resource. It would be difficult to equate CSI with it. Me_Think
Bob O'H: "I agree that the bullet didn’t hit a homogeneous wall, rather it struck a wall with lots of possible targets. The specification is the process of picking a target from the multitude of possible targets. The key issue is how you chose the target (i.e. the specification) that you do." Correct. So, now we can patt to b): the methodology. I will give at first only a brief summary, so you can offer your thoughts (and expected objections) abd we can go into more detail. The methodology I suggest is the following, and is specific of design detection by dFSCI. Its validity will be checked by empirical validation. 1) How do we choose the function? It's simple. Provided that a) is satisfied (the function was already existing and was definable even before the observation), we can define ant function we like (obviously, we will chose some function which is implemented by the observed object). There is no problem here. Why? Because in my dFSCI procedure I have explicitly said that any observer can define any function for any object, including many different functions for the same object. dFSCI is computed for the individual defined function. Any function implying dFSCI allows to infer design, if observed in an object. IOWs, it is sufficient to be able to find one function which, explicitly defined, implies dFSCI to infer design. So, we have complete liberty in choosing the target, because the target itself is not so important: what is important is the complexity linked to it. 2) How do we define the level of function necessary to consider it present? That is simple too. It's what we always do in Fisherian inference. The level is: "at least the level of function observed in the object". Now, DNA_Jock has siggested that we can make the target space "arbitrarily small" by manipulating the function definition. That is not true. He gave as an example: "ATP synthase having Km for Mg.ATP between 0.9e-4 and 1.1e-4 Ki for ADP between 2.8e-4 and 3.1e-4 Ks for Mg2+ having the following pH dependence: pH Ks 7.2 1e-4 7.3 0.9e-4 7.4 0.6e-4 7.5 0.4e-4 7.6 0.2e-4 These values at 25 C in 0.1M KCl." But that is exactly what must not be done. He is using the contingency in a specific observed ATP synthase to define the function, which is eactly what we showed cannot be done, when we discussed a). IOWs, if you observe an enzyme with a level of activit x, you can define the function as that activity, and the level as "at least x". You cannot define it as "from x-1 to x+1", because that is methodologically incorrect. The correct definition is "x or more", like in all the definitions of the effect size for a rejection region. Therefore, we cannot make the target space "arbitrarily small". Its lower threshold is already defined by the definition of the function and by the level observed in thje object: it is "at least x". If we want a smalle target space, we have to define the level as "x+y", and then our object would no more be part of it. This is just to begin. I will discuss the problem of "how many targets exist" later. gpuccio
DNA_Jock: "No. The wall (i.e. the set of all possible functions) existed before the bullet struck. ALL targets are human constructs." No. The wall is the set of all possible sequences (the search space). Each bullet is a new tested state. The targets are the functions in the search space. I suppose it's useless to go on with you, if you disagree with this. I will go on with Bob O'H, who seems to understand my point. gpuccio
Here is some math for the I if we were to move this calculation from words to audio tones http://plus.maths.org/content/how-many-melodies-are-there Now suppose we heard the theme from ET in a transmission from a distant planet we could calculate the CSI. On the other hand if the transmission from a distant planet matched a preexisting audio password we would be calculating the dFSCI peace fifthmonarchyman
Don't be sorry Silver Asiatic I'm just trying to get this all clear in my head. Now that gpuccio has shown a way forward with measuring the I in CSI It seems that the action moves to the S. I really like his emphasis on functionality as a way to specify but I have not given a lot of thought about how it plays out practically. That is the reason for the questions. We are all trying to feel our way forward here PS I measure the complexity in a different way (Kolmogorov complexity) but I am convinced that the two methods are equivalent for our purposes fifthmonarchyman
I have been using a non-technical (incorrect) notion of what CSI is, taking the term literally and not as it was intended. Sorry about that. Silver Asiatic
So a CSI calculation is always tentative?
CSI is circular and is of little use as a design detector. You need to eliminate all Darwinian and Natural procedures (The 'H' in CSI formula) before calculation. So it is not even tentative.
Suppose we were to find a Grammar gene that nudged utterances in a way that made “good English” more likely would that render gpuccio’s specification invalid?
Language and it's grammar are concepts created by us. There can't be a gene for concepts. gpuccio has calculated the probabilities. Although IDer claim Log of probability detects design,scientists are not convinced ( you certainly won't see CSI, dFSCI measures in any research paper). Me_Think
fifthmonarchyman
So a CSI calculation is always tentative? Suppose we were to find a Grammar gene that nudged utterances in a way that made “good English” more likely would that render gpuccio’s specification invalid?
I really don't know the answers there and I hope gpuccio will respond. Actually, I'm not sure how your questions followed what I said. I can only go back to that. Your assembly line example showed a non-random correlation between throughputs and break times. My response was that a non-random correlation between two data points is not a specification, as I see it. We can discover a non-random correlation between rocks, geographic location and actual speed of moving rocks. This can be predictive. We discover that certain sized boulders roll down a hill and end up at the bottom during an avalanche. There is a non-random correlation. But there is no specification. Most importantly, the process exhibits no information. As I argued elsewhere, we do not need CSI measurements to observe the existence of information because information has certain characteristics and properties that we can merely observe. The "I" in CSI means that it is a metric used for information. If a string does not exhibit any characgteristics of information, then you can't do an information meaurement on it. The function of information is to communicate, organize, coordinate, or signal (among other things). In your case, I don't know if there is any real information in the data you analyzed. The work-stoppage is not a signal - although it could be, perhaps. If workers were boycotting and all decided to stop at 3 pm every, there is some information of a very non-complex kind. If your assembly line stopped every day at 3 pm, it would be a non-random occurence. Is it design or not? It could be caused by the machines running out of oil every day at the same time because of a leak, so workers had to stop and refill. That would be non-design. I don't think a simple correlation has specificity and in most cases, it has very little information. As I see it, specificity requires some pattern or reference matching. It's a measure that is used only after information has been observed. (sender, signal, code, transmission, medium, translation, receiver, operation). But I don't know how all I said fits with your questions on CSI so I hope someone else can answer them. That wouldn't Silver Asiatic
Silver Asiatic said, But you also have to look for natural law or law-like processes as the cause. I say, So a CSI calculation is always tentative? Suppose we were to find a Grammar gene that nudged utterances in a way that made "good English" more likely would that render gpuccio's specification invalid? fifthmonarchyman
Is the specification of fluctuations related to attention functional? Is it post hoc?
You're looking at statistical correlation of two events. You can discover that it is very likely non-random. But you also have to look for natural law or law-like processes as the cause. I don't think a simple correlation is specified. Silver Asiatic
gpuccio @627 -
My point is that both the bullethole (the object we observe) and the target (its function) existed before our observation. The target (function) has not been “painted” around a bullethole which hit a homogeneous wall, where no target (no possible function) was present. The target existed before the bullethole hit it.
I agree that the bullet didn't hit a homogeneous wall, rather it struck a wall with lots of possible targets. The specification is the process of picking a target from the multitude of possible targets. The key issue is how you chose the target (i.e. the specification) that you do. Bob O'H
gpuccio, I have a practical question about function as a specification. I run a string that represents production throughputs from an assembly line through my game. I discover a complex pattern with a low probability. Later I discover that the patten is caused by the ebb and flow of production surrounding break times. Production goes up at times when folks are focused on the machine and down when they are not. Leaving aside the question of whether the pattern meets the complexity threshold. Is the specification of fluctuations related to attention functional? Is it post hoc? peace fifthmonarchyman
ZAC says, Can you provide a definition of how you are using the term algorithmic? The basic definition is a list of procedures, so neural nets are non-algorithmic, but can still be based on physical principles. I say, by algorithm I mean a step-by-step procedure. I'm not saying a physical can't be non-algorithmic. I'm only asking if a physical cause can act on matter in a non-algorithmic way. I just don't know enough to say one way or another right now. Peace fifthmonarchyman
gpuccio,
But do you agree that referring to a function which existed before our observation is a logical pre-requisite for a post-specification?
Actually, no, I do not. "An algorithm that solves NP-complete problems in less than exponential time" would be a reasonable post-specification, if it had not been formulated pre-event. Likewise "A proof of Fermat's Last Theorem that is less than two pages" would be reasonable.
My point is that both the bullethole (the object we observe) and the target (its function) existed before our observation. The target (function) has not been “painted” around a bullethole which hit a homogeneous wall, where no target (no possible function) was present. The target existed before the bullethole hit it.
No. The wall (i.e. the set of all possible functions) existed before the bullet struck. ALL targets are human constructs. DNA_Jock
gpuccio But they have nothing to do with the simple fact that the target existed before our observation. Then demonstrate that a specific target existed before our observation, don't just assert it. Show us the a priori specification for any extant protein which demonstrates that exact configuration is the only one possible to support life, any sort of biological life. Adapa
Bob O'H: "Be that as it may, it doesn’t address the problem of Texan sharpshooting: in fact a Texan sharpshooter needs the same condition, because they need the bullet-hole to exist before they paint their target around it!" Why do you say that? My point is that both the bullethole (the object we observe) and the target (its function) existed before our observation. The target (function) has not been "painted" around a bullethole which hit a homogeneous wall, where no target (no possible function) was present. The target existed before the bullethole hit it. It is objectively true that a protein whith a certain AA sequence can act as an enzyme, It was true even before such a protein existed, because it is a necessary consequence of the laws of biochemistry. It was certainly true and implemented after the first snazyme of that type emerged, and well before we observed it. What we add after our observation is only a description of the target as a specification for the bullethole. We have some liberty here, which is what worries DNA_Jock. We must ascertain how big is the target, how many targets exist, at what level of "hit" a target must be cosnidered as "hit", and so on. All those considerations are important for our methodology as soon as we use our post-specification for an inference based on probability. As I have said, those things are important to make our inference reliable. But they have nothing to do with the simple fact that the target existed before our observation. Our post-specification is in reality a specification made after observing a functional object, about a function which existed before our observation. In no way we are "painting" the function. The function is objectively there. I think this should be simple. Why such reluctance to admit what is obviously true? gpuccio
DNA_Jock (and Bob O'H): You seem not to follow my reasoning. My statement is: a) A post-specification is valid only if it refers to something that was equally true before the event as it is after the event. This is true. I am not saying that this criterion is sufficient to make a post-specification methodologically reliable. I am saying that this criterion makes a post-specification logicallt consinstent, and not a logical fallcy. Which is necessary to be methodologically reliable. I have not yet discussed what methodologt can make a post-specification which refers correctly to a function which was there bfore our observation reliable. That is a problem of methodology, not of logical consistency. You have suggested your methodology. I don't agree. I will suggest mine. But do you agree that referring to a function which existed before our observation is a logical pre-requisite for a post-specification? Do you agree that all post-specifications which refer to a function which was generated after the observation (see my example of the random sequence used as a pssword after it has been generated) are logical fallacy? While a post.specification which refers to a function which was there before our observation is not a logical fallacy? This is the only point I have discussed up to now. If you agree, I will discuss how we can describe the target (the function) which was there before our observation, what target we should choose, and what level of function we can use as threshold, as a general methodology, and then you can say why that procedure should not work. gpuccio
Pav: @ 598 :
However, what about the “frame of reference” of the cell itself? From the point of view of an “observer” on a ‘ribosome,’ the m-RNA looks like a ‘formula’ for producing a certain “specified” protein, and enzyme, which, as it turns out, will have a certain type of activity within the cell. But, as far as the “observer” in/on the ‘ribosome,’ it sees nucleotide bases calling for the positioning of particular a.a. in a particular sequence of linkages, which forms an a.a. string that ‘we,’ “observers” in the external “frame of reference” refer to as an “enzyme with kcat > 1000 /s.”
I think you make an interesting point about 'frames of reference'. We have a very parochial, anthropocentric FoR. For example, many seem to view humans as the acme of evolution. We are the best at certain cognitive functions, cool. All sorts of other things we don't do so well - hear, smell, see, tolerate temperature extremes or radiation or immersion in water... Your ribosome-dweller might be parochial too; he might view the proximal goal as being getting rid of this bloody mRNA so that they can get back to the excitement of the 30-50-Svedberg disco, which is what floats his boat. From his perspective, the goal is to drag the bumpy tape through the mechanism until it reaches the point where it falls off, thereby freeing him to go hunting for a nubile fresh 30S partner. Fortunately, there are these chemical cogs that show up to help drag the tape through: they bind to the bumpy tape and translocate it from the A site to the P site. The energy to drive this translocation comes from an energetically favorable peptidyl transfer reaction that occurs at the other other of the cog. An unavoidable waste product of this reaction is a floppy polypeptide chain that wibbles off into the distance. Luckily, we lose that garbage when we lose the mRNA. Wooot! It's time for the disco! DNA_Jock
gpucci @ 589 - OK, I think I see your point now, but I'm not sure it's that important. I have a sneaking suspicion someone could find a counter-example. Be that as it may, it doesn't address the problem of Texan sharpshooting: in fact a Texan sharpshooter needs the same condition, because they need the bullet-hole to exist before they paint their target around it! BTW it might be close to time to start a new thread... Bob O'H
Umm, algorithms are defined as more than just a list of procedures. More like an unambiguous step-by-step process that has to be followed to solve some specific problem or problems. Joe
fifthmonarchyman: Random mutation is not algorithmic in itself but couple it with natural selection and you have an algorithm. Can you provide a definition of how you are using the term algorithmic? The basic definition is a list of procedures, so neural nets are non-algorithmic, but can still be based on physical principles. Zachriel
Gpuccio at 615 re-worked his criterion for a valid post-hoc specification thus:
[a] “It is a specification which, although formulated after having observed a result (so, after the “event” which originated the result), describes a function which was already present before that event.”
No. Whether the function (or pattern, etc.) existed beforehand is NOT the criterion. What I am saying (and I think Bob O’H agrees with me) is that the specification, including all threshold values, is the same as the specification that one would have made before observing any results. One has to imagine that one is totally ignorant of the actual data, and formulate a specification that one might have made pre-hoc. This is rather difficult. Humans are notoriously bad at doing this, and notoriously over-estimate their abilities. Do read Kahneman’s book; it is awesome. Your validity criterion, “a function which was already present before that event” does not do it. Consider your lottery: suppose the functionary has 10^148 brothers, but the brother who wins the lottery has only one leg. “Aha! You cry, the probability than the lottery would be won by a one-legged brother is…[insert math here].” Meets your criterion? Check. Valid? I don’t think so. The correct procedure is as follows: I observe that a one-legged brother won the lottery. I try my hardest to imagine that I did not know this, and ask the question “What is the total set of results that I would find suspicious enough to warrant investigation?” I include all employees of the lottery and their immediate families, I include the cousins and high school friends of the functionary and of various other people highly placed at the lottery. I include all employees of the equipment manufacturer and their immediate families. This is my specification. (There is still the enormous caveat that humans are terrible at imagining and estimating the probabilities of counter-factuals, but we are at least making the effort. Maybe have a friend who does not know the result come up with a specification. No dropping hints, now. If you want other people to give any credence to your post-hoc spec., you better make it broader than anyone would ever make a pre-hoc spec. Even so, ignorance about the system may still be leading us astray...) By way of illustration for proteins, my set of post-hoc specifications for “ATP synthase” (at #174) meets your criterion for being a valid post-hoc spec.:
ATP synthase having Km for Mg.ATP between 0.9e-4 and 1.1e-4 Ki for ADP between 2.8e-4 and 3.1e-4 Ks for Mg2+ having the following pH dependence: pH Ks 7.2 1e-4 7.3 0.9e-4 7.4 0.6e-4 7.5 0.4e-4 7.6 0.2e-4 These values at 25 C in 0.1M KCl. At 0.11M KCl, the values should be…
I hope you can see why this specification is not valid post-hoc. DNA_Jock
Me Think:
ID is about ID agent who/which intervenes in Life processes.
That is incorrect. Perhaps you should actually try to learn about what you are attempting to criticize. Joe
Me thinks say, What do you think is algorithmic about ID agent ? If you are so much into algorithms and you are on a side with no algorithm, I say, This debate has two sides one side that thinks that everything can be accounted for algorithmically and another that posits that algorithms are not up to the task, My side (Godels' side) has no problem with algorithms it just recognizes that there are some things that algorithms are ill-equipped to do. You say obviously you are on the wrong side of the debate! I say, Not mathematically, that question has been answered since 1931 and my side won. We are just waiting for that victory to filter down to the apposing troops ;-) peace fifthmonarchyman
Me-thinks says In sufficient strength, it can even kill, and coupled with external ‘algorithm’, it can do whatever it wants I say I could not agree more. The point is once you connect it to an algorithm the process becomes algorithmic. Random mutation is not algorithmic in itself but couple it with natural selection and you have an algorithm. Subject to the same mathematical laws as all algorithms. That is what Ive been saying all along. Aiguy say, I suspect that this is true, but I’m not sure we have a convincing body of empirical evidence upon which to base a conclusion just yet. I say, The fact is we can collect empirical evidence. That is what Ive been doing with my "game". That makes this question scientific rather than a philosophical. Aiguy says, Here you attempt to establish your particular hypothesis as a default truth, but you have no rationale for doing so. The truth is that we don’t understand the relationship between mind and matter, and we don’t know if anything is non-algorithmic. I say That close to a valid point. Except we have established that somethings are non-algorithmic (think transcendental numbers) and the most complex of those things unitary consciousnesses definitely exhibits the properties we are talking about. In the end this discussion will come down to whether we can conclude that complex non-algorithmic things are conscious. This is the problem of other minds. I can never say for sure if I'm not the only mind in existence. But that conclusion is a lonely one and I'm not going to go there. To each his own though. peace fifthmonarchyman
If this works then the Darwinian camp can be expected to claim that that’s what their theory predicted but since Darwinian theory cannot model intelligence it’s worthless for such comparisons. Best they can do is argue that there is intelligence guiding evolution, from somewhere inside Darwin’s well stuffed Black-Box of natural selection. Adversaries are trapped, exactly where we want them to be. They would be stuck with a Darwinian model that does not sort out Address and Data into a cognitive circuit. Only ID theory has the proper logical construct for this job.
Don't put the cart before the horse. Let's see if you can start off first. Me_Think
fifthmonarchyman @ 607
Can radioactive decay for example trigger a switch with out an algorithm?
In sufficient strength, it can even kill, and coupled with external 'algorithm', it can do whatever it wants - just connect a gamma detector to an Arduino and you can easily trigger a switch. You don't need a inherent algorithm. Me_Think
DNA_Jock (and Bob O'H): No. there is hope. What you say in your #597 allows us to clarify our terms. So, let's see. "The function exists (or not), whether you describe it or not." Your words. That is exactly my point. But you are right, I have not been precise enough in my words. So, let's change my initial definition of a valid post-specification from: "It is a specification which, although formulated after having observed a result (so, after the “event” which originated the result), was equally true before that event." to: "It is a specification which, although formulated after having observed a result (so, after the “event” which originated the result), describes a function which was already present before that event." This is really what I mean by "being equally true, but in the light of your comments I see that it was not clear enough. So, thank you for helping me clarify. However, if you look at my initial statement in post #581, you will see that what I said is: "a) A post-specification is valid only if it refers to something that was equally true before the event as it is after the event." So, if we say "a function which was already present" instead of "something that was equally true", we have maybe found common ground. It is perfectly true that we can give different specifications for the same object. That is an essential part of my dFSCI procedure. We can accept any functional specification for an object. There are two reasons why different functional specifications can be given for an object: 1) The same object can be used for different purposes (so, it can have different functionalities) 2) The definition of a functionality, as said in my dFSCI procedure, must include a threshold for it, and we can choose different thresholds. But this is equally true for pre-specifications and for post-specifications. It is part of the problems in b): what functions can we choose to infer design by post-specifications, and how can we correctly define a threshold for the defined function? This has nothing to do with a). So, I state again a) in the new form, hoping ( :) ) that it can find your agreement: a) A post-specification is valid only if it refers to a function that was already present before the event as it is after the event. IOWs, any specification which does not satisfy a) is not a valid post-specification, and cannot be used, because that would be a logical fallacy. If a specification satisfies a), using it as a post-specification is not a logical fallacy. That does not guarantee that using it is methodologically correct and that it gives reliable results. That is the issue of b): what methodology must we use? But first we must agree on a) gpuccio
Hi FMM,
I never said all physical processes are algorithmic I asked if all physical causes operated algorithmically.
To me these two statements mean the same thing.
Can radioactive decay for example trigger a switch with out an algorithm?
According to modern physics, there is no algorithm that determines when a particle decays.
If consciousnesses is non non-algorithmic then those things [consciousness, learning, etc] are implied.
And I say no, they're not. This is the crux of our argument.
That is unless you have evidence of causes that are non-algorithmic and at the same time can be demonstrated to be none of those things.
Here you attempt to establish your particular hypothesis as a default truth, but you have no rationale for doing so. The truth is that we don't understand the relationship between mind and matter, and we don't know if anything is non-algorithmic.
Do you honestly believe the claim that unitary consciousness is non algorithmic is merely philosophy?
I wouldn't say "it is philosophy" - I would say that this particular claim cannot at this time be subjected to empirical test and verification, nor can it be inferred from our experience. We do not know what consciousness is - we simply experience it - and so we can't begin to speculate about whether it is "algorithmic" or not. Anyway, I think you are not exactly asking if unitary consciousness is algorithmic. Rather, I think you are asking if conscious thought can solve problems that can't be solved algorithmically. I suspect that this is true, but I'm not sure we have a convincing body of empirical evidence upon which to base a conclusion just yet. Even if it is true, that doesn't illuminate the connection between conscious awareness and non-algorithmic thought. Why is there a subjective feeling - a phenomenal experience - to thought? This is the ancient question, and science hasn't made any inroads on that problem, and so we can't say that anything that operates non-algorithmically must therefore be a conscious being with general, human-like intelligence. Cheers, RDFish/AIGuy RDFish
fifthmonarchyman @ 611 Having established that you haven't bothered to be scientific, let's see your answers:
It not about how information is integrated. It’s about the the result of that integration. An Alzheimer patient simply has less nonlossy integrated information than a healthy adult. Nothing problematic here
Scientific unitary consciousness is about integration of information. We are not talking about 'nonlossiness', we are talking about measuring signals as they are sensed in real time by the sensory organ and by the conducting nerves.
Why do you think wavelets are remarkably similar for a large set of memory triggered brain signals? I say, The brain is a physical thing. Physical things tend to have similar properties given similar inputs. No Surprise here
So according to you memories don't differ in the way they are triggered by sensory signals? Everybody has the same memory triggered by similar sensory input ? Me_Think
fifthmonarchyman @ 611
Have you really read scientific literature on consciousness or are you intuiting? I’m barely familiar. Does the scientific literature somehow say mathematics is not binding in this area?
Huh? Of course mathematics is binding in this area. That's the whole point of consciousness computation controversy. If you have not read any paper on consciousness how can you even talk about unitary consciousness in scientific terms. If your debate is philosophical then please don't masquerade your comments as if they are scientific.
If you think it is about algorithms, then you are on the wrong side of the debate! I say, How so?
What do you think is algorithmic about ID agent ? If you are so much into algorithms and you are on a side with no algorithm, obviously you are on the wrong side of the debate! Me_Think
Me_Think said, Why do you think the sensory signals get encoded differently in healthy and Alzheimer brain (deemed less conscious) ? I say, It not about how information is integrated. It's about the the result of that integration. An Alzheimer patient simply has less nonlossy integrated information than a healthy adult. Nothing problematic here you say, Why do you think wavelets are remarkably similar for a large set of memory triggered brain signals? I say, The brain is a physical thing. Physical things tend to have similar properties given similar inputs. No Surprise here You say, What do you know about controversy in decomposing signals? I say, Not much. Does the controversy change the laws of mathematics? you say, Have you really read scientific literature on consciousness or are you intuiting? I say, I'm barely familiar. Does the scientific literature somehow say mathematics is not binding in this area? you say, If you think it is about algorithms, then you are on the wrong side of the debate! I say, How so? peace fifthmonarchyman
fifthmonarchyman @ 605
That is what you mean – other people (like gpuccio) mean something else. I say. Are you so sure????
ID is about ID agent who/which intervenes in Life processes. No one knows why ,how or when. The definition of ID agent - like all definitions in ID - is sufficiently vague to give enough wiggle room to ID proponents. If you think it is about algorithms, then you are on the wrong side of the debate! Me_Think
fifthmonarchyman @ 596
That is just restating the original point. If human consciousness can be computed algorithmically then human consciousness is not unitary. But we all know it is!!!!
Why do you think the sensory signals get encoded differently in healthy and Alzheimer brain (deemed less conscious) ? Why do you think wavelets are remarkably similar for a large set of memory triggered brain signals? What do you know about controversy in decomposing signals? Have you really read scientific literature on consciousness or are you intuiting? Me_Think
fifthmonarchyman:
The eureka moment in this whole enterprise came when I realized that all Dembski was doing with CSI was looking for a better more objective Turing Test. That simple realization moved ID from interesting apologetics to a very practical straight forward scientific endeavor in my mind. Peace PS I might be asking for your coding assistance at some point :-)
I sense that I will soon need to get to work on a CSI-ID Lab, designed to allow comparison of the model's memory contents to genetic memory contents! The hard part (at least on my end) would be for each chromosome sort out Data coding elements (genes) from sensory pathways that Address (selectively control transcription of) motor/muscle action of self-powered molecules. Whatever string is used to represent all the transcription controls at a given Data location is then used as the unique Address at which that gene Data (protein sequence manufactured and/or its behavior) is stored in memory. Where coding region Data has multiple product possibilities it's also in memory multiple times for each possible protein sequence, but normally only one Data location would be Addressed (active) at a time. It would then be possible to compare memory contents of the computer model to genomes of living things. Each species should have their own signature memory distribution. As in the ID Lab: as more bits are added to the Address bus (complexity increases) less of the memory space is actually used, is never Addressed therefore has no Data stored. A number of times I tried to solve the problem of the chart line for total memory used only a pixel or two up from the zero line, but I kept finding out that's just what happens even where being careful to make good use of sensory bits. When for example when the motor/muscles are accelerating to make it spin in a given direction the sensory from antenna always indicates opposite deflection, never the other possibilities. Most of addressing space never being used has been an annoying problem (like the program was not working right) but it now makes sense that this also happens in the model too. All indications are that the two would be very comparable. If this works then the Darwinian camp can be expected to claim that that's what their theory predicted but since Darwinian theory cannot model intelligence it's worthless for such comparisons. Best they can do is argue that there is intelligence guiding evolution, from somewhere inside Darwin's well stuffed Black-Box of natural selection. Adversaries are trapped, exactly where we want them to be. They would be stuck with a Darwinian model that does not sort out Address and Data into a cognitive circuit. Only ID theory has the proper logical construct for this job. Gary S. Gaulin
You say, Nobody knows if all physical processes are algorithmic. Think of radioactive decay for example. I never said all physical processes are algorithmic I asked if all physical causes operated algorithmically. Can radioactive decay for example trigger a switch with out an algorithm? You say, “Intelligence” often is taken to connote consciousness, the ability to learn, the ability to use generally expressive natural language, and so on. The term “non-algorithmic” implies none of these things. I say, If consciousnesses is non non-algorithmic then those things are implied. That is unless you have evidence of causes that are non-algorithmic and at the same time can be demonstrated to be none of those things. you say, This is a perfectly respectable philosophical position. I say It is exactly my position as well by the way. Do you honestly believe the claim that unitary consciousness is non algorithmic is merely philosophy? If so I have an experiment I want you to be a part of. I'll get back to you on it soon. ;-) peace fifthmonarchyman
Hi FMM,
I can think of no way for a physical cause to operate other than algorithmically. Can You?
Nobody knows if all physical processes are algorithmic. Think of radioactive decay for example.
RDF: because the word “intelligence” has connotations that are very different from “non-algorithmic”. FMM: such as? I can’t think of any.
"Intelligence" often is taken to connote consciousness, the ability to learn, the ability to use generally expressive natural language, and so on. The term "non-algorithmic" implies none of these things.
Are you so sure???? [that different ID proponents give radically different definitions of "intelligent cause"]
Yes, I'm quite sure :-)
GPUCCIO: In my opinion, consciousness is a primary reality, which has its laws and powers....
This is a perfectly respectable philosophical position. There is no scientific support for such an hypothesis, however, and so if ID rests on the truth of this, ID belongs in the Philosophy aisle rather than Science. If ID was up front about this, I would find it interesting. It is only the pretense of empirical support and this misleading "known cause" rhetoric I object to. Cheers, RDFish/AIGuy RDFish
Aiguy said, That is what you mean – other people (like gpuccio) mean something else. I say. Are you so sure???? Here is a quote again from the very beginning of this thread from gpuccio Quote: In my opinion, consciousness is a primary reality, which has its laws and powers. Its fundamental ability to always be able to go to a “metalevel” in respect to its contents and representations, due to the transcendental nature of the “I”, is the true explanation for Turing’s theorem and its consequences, including Penrose’s argument. The same is true for design: it is a product of consciousness, and that’s the reason why it can easily generate dFSCI, while nothing else in the universe can. The workings of consciousness use the basic experiences of meaning (cognition), feeling (purpose) and free will. Design is the result of those experiences. dFSCI is the magic result of them. End quote: sounds to me to be exactly what I'm saying. Just said more more eloquently Peace fifthmonarchyman
Aiguy said, because the word “intelligence” has connotations that are very different from “non-algorithmic”. I say, such as? I can't think of any. That is once it is fully understood what it entails for something to be the result of an algorithm peace peace fifthmonarchyman
gpuccio, re 571 and 572, Thanks again for your responses, and sorry for taking so long to respond. I don't quite understand your comments; do you see P(T|H) as part of the specification step? That's not how I've understood it. Also, you seem to subscribe to Dembski's claim that CSI (whether of the Dembski, KF, gpuccio, or now Arrington flavors) cannot generate a false positive. But if the calculation is based explicitly on the state of your knowledge, won't it generate a false positive whenever you incorrectly believe that no non-design origin is feasible? Please excuse me if you've already explained this--it's a long thread to go through cold. Learned Hand
Hi FMM,
You say Which of the following descriptions are entailed by “intelligent cause” in the scientific context of ID Theory? I say, None of the above. When we say a thing is the result of an “intelligent cause” we simply mean that it can not be arrived at algorithmically.
That is what you mean - other people (like gpuccio) mean something else. I've gotten many, many conflicting answers to this sort of question here. In any event, given what you mean, it should be called "non-algorithmic design theory", not "intelligent design theory", because the word "intelligence" has connotations that are very different from "non-algorithmic". Cheers, RDFish RDFish
You say, In other words, evolutionary mechanisms may not be algorithmic, but they certainly proceed according to physical causes.) I say, I can think of no way for a physical cause to operate other than algorithmically. Can You? Peace fifthmonarchyman
Hey AIGUY You say Which of the following descriptions are entailed by “intelligent cause” in the scientific context of ID Theory? I say, None of the above. When we say a thing is the result of an "intelligent cause" we simply mean that it can not be arrived at algorithmically. We have in mind something with a "Turing Oracle" at it's core. Please check out this paper. http://www.blythinstitute.org/images/data/attachments/0000/0041/bartlett1.pdf. It was first linked at the very beginning of this thread.It will make the discussion here simpler for all of us. After that I suggest you take the time to browse the entire 600 comments. There is some good stuff here. As you know I'm very familiar with your argument and I think you will find this particular thread it to be quite interesting. I don't know if you will in the end be satisfied but perhaps we can move the discussion forward a notch or two peace fifthmonarchyman
As I have been arguing in many ways for many years, ID relies on assumptions that are in the domain of philosophy mind, and are currently unanswerable scientifically. People here have expressed their conviction that conscious experience (a) transcends physical cause, and (b) is fundamental to our ability to produce complex mechanisms. If these hypotheses are not correct, then neither is ID (or, more accurately, ID would need to be restated in a form that is radically different from the way it is presently understood). (Some discussion here has also centered on the question of whether mental abilities and/or consciousness are algorithmic, but this concern is subsumed under the question of physical causality. In other words, evolutionary mechanisms may not be algorithmic, but they certainly proceed according to physical causes.) The plain fact is that nobody knows if our minds transcend the physical operation of our bodies, and nobody knows if conscious awareness is necessary for the accomplishment of mental tasks - either in humans or in hypothetical entities that may not even have physical brains. Moreover, nobody knows if or how distinct mental abilities may arise from a more general, unary source (in other words, we have no theory that describes some single aspect of mental function which, for example, can both generate a grammatical sentence and generate a blueprint for a complex machine). My main beef with ID is that it sweeps these inconvenient facts under the rug. ID offers "intelligence" as a "known cause" of complex mechanisms, when in fact the known cause is "human beings". The way ID reifies "intelligence" as some unary thing that exists (or could exist) independently of complex human bodies relies on folks' intuitive dualism and lack of understanding of the issues considered in philosophy of mind. The result is that people people are convinced that the design inference is a straightforward abduction offering a familiar cause for our observations of biological complexity. But that only works if one already believes that minds transcend mechanism. Otherwise, all ID can say is that biological complexity arose from some complex, unknown entity/process/force/principle that may or may not be conscious, and may or may not share any other attributes with human mentality. It is easily demonstrable that ID relies on this sleight-of-hand: I periodically ask questions such as this here, but ID proponents are loathe to answer: Which of the following descriptions are entailed by "intelligent cause" in the scientific context of ID Theory? 1) Not due to chance and/or necessity 2) Due to the actions of an entity or process that may or may not be conscious 3) Due to the actions of a conscious entity 4) Due to an entity that has all the mental abilities of normal human beings, including the ability to learn and to use a generally expressive grammatical language Responses to my arguments are generally of one of the following types: A) a refusal to answer the question and continue to equivocate on the meaning of mentalistic terms B) a declaration of an opinion on these matters without referencing actual scientific support C) a complete misreading of my argument, and a presentation of various claims about evolution or probabilities that have nothing to do with my point None of those responses defend ID against my complaint: ID relies on particular assumptions regarding the mind/body problem that are currently answerable by appeal to scientific evidence. By refusing to declare what exactly is supposed to be entailed by the term "intelligent cause", ID pretends to support an inference that is in fact scientifically unsupportable. Cheers, RDFish/AIGuy RDFish
DNA_Jock:
The catalytic activity is an attribute of the enzyme. When we say “An enzyme with kcat > 1000 /s” the phrase within quotation marks is the specification. The catalytic activity is NOT the specification; the words that describe the catalytic activity are the specification.
Sorry to but in again, yet, I must comment here. You're correct in saying this is "the nub of the problem." I'll use an analogy to try and get at what I think the disgreement hinges on. In special relativity, one is always concerned with "frames of reference." What looks like "motion" in one frame, might look like the complete absence of "motion" in another, if one frame is in motion relative to the other. The point here is that "observers" in two different systems see things differently. Your position on "specification" says that it is the "words that describe the catalytic activity" form the basis of the specification. This is from the "frame of reference" of human beings. With this, I don't disagree. However, what about the "frame of reference" of the cell itself? From the point of view of an "observer" on a 'ribosome,' the m-RNA looks like a 'formula' for producing a certain "specified" protein, and enzyme, which, as it turns out, will have a certain type of activity within the cell. But, as far as the "observer" in/on the 'ribosome,' it sees nucleotide bases calling for the positioning of particular a.a. in a particular sequence of linkages, which forms an a.a. string that 'we,' "observers" in the external "frame of reference" refer to as an "enzyme with kcat > 1000 /s.” Our definition is not critical; but the "specification" of the m-RNA strings of nucleotide bases is. Let me just leave off here. PaV
Gpuccio, I know that you are hopelessly wrong on this. I think we do agree on one thing: there is no hope. I think the problem is that you are incapable of seeing the difference between the attributes of an object, and the way we describe and delimit the attributes of an object.
No. The function is the specification. We describe the function. We don’t invent it.
This is the nub of the problem. The function is NOT the specification. This is so monumentally wrong that it hurts. The function exists (or not), whether you describe it or not. The catalytic activity is an attribute of the enzyme. When we say “An enzyme with kcat > 1000 /s” the phrase within quotation marks is the specification. The catalytic activity is NOT the specification; the words that describe the catalytic activity are the specification. We may specify an enzyme in terms of its activity, but that does not mean that the activity is the specification. We could specify an enzyme in terms of its tryptic fingerprint, or molecular weight, or migration under PAGE. In each case, the specification is a group of words that describes the requirements of the specification. Consider, for a moment, a cat. It exists. I can refer to it as follows = A cat = A grey cat. = A grey long-haired cat = A grey, four-year-old long-haired cat = A grey, four-year-old double-pawed long-haired cat = A grey, four-year-old double-pawed long-haired cat called Sox = A cat with RFID implant #15479364413 Each of these phrases is a specification for the cat. The cat is not the specification. The cat has not changed. The specification lists attributes that some cats may possess, others not. (As an aside, we could additionally specify all sorts of other things about the cat that would not change the membership of the set that meets the specification -- the cat does not have eight legs, the cat has not graduated from UCLA, etc, etc -- such aspects would only serve to make the specification longer, which some find confusing.[aardvark!]) We use the specification to specify :) which cat(s) we are talking about. The same event can be described in different ways, depending on the specification used. “Hey DNA Jock, I just crushed a [insert cat specification here]” Notice how the probabilities change, depending on the specification used. DNA_Jock
me_thinks says, Unitary consciousness too has been mathematically defined by various authors (including in the paper you cited !!). I say I agree that it has been mathematically defined. However There is an infinite gap between defining something mathematically and proving it. You can fit all the digits of pi in that space. You say. The dispute is in the concept and whether the input signals received by brain are decomposable. If it is decomposable then, unitary concept doesn’t exist. I say, That is just restating the original point. If human consciousness can be computed algorithmically then human consciousness is not unitary. But we all know it is!!!! peace fifthmonarchyman
keith s: This comment bears repeating: "Just to be clear. I have already answered your “objections”. I will not do it again. You seem to love repetitions. I don’t." gpuccio
Gary S. Gaulin: I agree with you on the huge complexity of the interaction between consciousness and algorithmic complexity. That is a fascinating field. I need some time to read your material, and my time in this moment is absorbed by many thing, some of which can be seen here. But this is an important issue. I don't want that anyone think that my firm conviction that consciousness is the primary reality, and cannot be explained by configurations of matter, makes me in ant way not interested to the fascinated interplay between the conscious subject and its manyfold representations, many of which are algorithmically evolving. One of the most interesting aspects of consciousness is that it always react to the things it represents in two different modalities, which are indeed so connected that they can be probably considered as the two faces of one reality: one is cognition (meaning), and the other if feeling (purpose). I believe that we cannot represent anything without reacting to it as meaning something, and at the same time as being good or bad, desirable or painful. Design is a very good epitome of all that: it is essentially a meaning used to implement a purpose. gpuccio
gpuccio, This comment bears repeating:
Gpuccio also fails to see that when speaking of evolution, the only target specification that ever makes sense is “changes that improve reproductive success”. Evolution wasn’t shooting for “ATP synthase” or “traditional ATP synthase”. It was searching for anything that would improve fitness. And even if he were to use this corrected specification, dFSCI would still be useless, because taking the ratio of target space to total space only makes sense if you are talking about a purely random search. Gpuccio has been reminded over and over that evolution is not a purely random search. It includes selection, which is highly nonrandom. P(T|H), where H includes “Darwinian and other material mechanisms”, is the stumbling block. Dembski cannot calculate it. Neither can gpuccio or KF.
keith s
DNA_Jock: You say:
I made two objections to your 3-protein fit: #1 that the sequence conservation around an optimum tells you about the sequence constraint around an optimum, nothing more. #2 that you should have aligned all 23,949 sequences. #1 is the killer, making #2 moot. Sorry if you did not pick up on this – I probably could have expressed myself more clearly. You can’t use Durston’s method.
My recent posts about my new calculations were, explicitly and declaredly, only an answer to your #2. #1 and the validity of the Durston method I will discuss later, when we discuss the protein space. One thing at a time. gpuccio
DNA_Jock: "The bullet did not hit a target that already existed. You do not ‘observe’ the target. All of the bullet holes, both the ones we have noticed and the ones that we have not noticed, were present before any humans existed. Humans arrive after the fact and apply paint. What you are doing is imbuing the wall with a new and wonderful property – saying in effect that particular spots on the wall are super-special, and the bullets, amazingly, hit these super-special spots." Here your error is obvious. For me: A bullet is: Any sequence of AAs which comes from random variation Any sequence of random characters which comes from a random character generator Any ticket which is extracted in a lottery A target is: A sequence of AAs with an enzymatic activity A sequence of characters which has meaning in English An extracted ticket which belongs to the brother of the functionary who presides to the extraction. So, both the bullets and the targets were there before we observed the before and before we described the target. This is simply true. You say that we paint the targets, "imbuing the wall with a new and wonderful property – saying in effect that particular spots on the wall are super-special" So, you are saying that an enzymatic activity is "a new and wonderful property", that English meaning is not "super-special" if compared with a meaningless random sequence, and that the brother of the functionary becomes his brother only because we are biased. Again: I really think that you are deeply wrong on this point. gpuccio
DNA_Jock: "You appear to be completely unable to distinguish the function from the specification that delineates the function. These are two different things." No. The function is the specification. We describe the function. We don't invent it. If you cannot agree on that, there is no hope. Again, the brother is the brother. We acknwoledge that simple and meaningful fact. What are we "delineating"? The rules which make some sentence meaningful in English are the result of two set of functional principles: 1) The rules of English language. 2) The general rules of meaning, if they exist. Both existed before Shakespeare, or anyone else, wrote the sonnet or any other piece of English language. Again, what are we delineating? You can say that we choose some function rather than another one. That is true. Or some level of function rather than another one. That is true. That is the problem we will discuss in our discussion about our original b) (if we arrive there). For now, we are discussing a). The function must be something that existed before our observation of the specific object which implements it. That is independent of the specific object which implements it. We cannot invent a function. A function is objective, in the sense that it is something which can really be done with the object, or with other different objects, which share the same functionality. We are absolutely bound, in describing the function, by what it really is. Again, this is completely different from choosing one function over others, or defining the minimal level of the function for the inference. That is b). a) is another thing. I really think that you are deeply wrong on this point. gpuccio
Bob O'H: I said: "It is a specification which, although formulated after having observed a result (so, after the “event” which originated the result), was equally true before that event." You said: "How can a specification be true?" What I mean is: My specification is: Being the brother of the functionary who presides to the extraction (and therefore apt to be an accomplice to him). Is that true now? Yes. Was that true before the extraction? Yes. My specification is: Having good meaning in English. Are the rules by which a sentence has good meaning in English different before and after I read the specific sentence (for example, Shakespeare's sonnet)? No. An enzyme accelerates 1000 times a reaction. It does that now, after I have observed it. Did it do that a million years ago, when no human observed it? Yes. I set a random sequence, after having observed it, as the password for my safe. My specification is: Being a password for my safe. Can I use that as a post-specification, to infer that an I got a complex sequence randomly (which I did) which is functionally specified as being the password for my safe? No, because now that sequence is the password for my safe, but it was not when I first observed it. The functional link between the sequence and its function has been created after the random generation of the sequence. This kind of specification can only be used as a pre-specification, for a new random search. It's rather simple, after all. What is the problem? gpuccio
gpuccio @584 -
It is a specification which, although formulated after having observed a result (so, after the “event” which originated the result), was equally true before that event.
How can a specification be true? I think you're saying the same thing as me, i.e. a post-specification is valid if it's the same as the specification we would get if we had made a pre-specification, but I'm not sure, and would like to get this clear before going any further. Bob O'H
gpuccio, re your 570, I made two objections to your 3-protein fit:
#1 that the sequence conservation around an optimum tells you about the sequence constraint around an optimum, nothing more. #2 that you should have aligned all 23,949 sequences.
#1 is the killer, making #2 moot. Sorry if you did not pick up on this - I probably could have expressed myself more clearly. You can't use Durston's method. Also Frequentist approach with post-hoc specs is deeply, deeply problematic. I think the term I used was "garbage". DNA_Jock
Gpuccio, I share Bob's confusion with what "something that was equally true before the event as it is after the event" means. You appear to be completely unable to distinguish the function from the specification that delineates the function. These are two different things.
IOWs, we must observe both the bullethole and the target. The bullethole is our object. The target is a function defined independently from it. And the bullethole is in the target, but not because we have painted it after the bullet hit the wall. IOWs, the position of the bullethole has not generated the target, the bullethole has only hit a target which already existed.
Yet again repeats your TSS fallacy. The bullet did not hit a target that already existed. You do not 'observe' the target. All of the bullet holes, both the ones we have noticed and the ones that we have not noticed, were present before any humans existed. Humans arrive after the fact and apply paint. What you are doing is imbuing the wall with a new and wonderful property - saying in effect that particular spots on the wall are super-special, and the bullets, amazingly, hit these super-special spots. You genuinely believe that these spots are special; that you are guided by the texture of the wall when you choose where to apply paint. But you are not. As you have demonstrated twice on this thread, you are applying paint where you have observed bullet holes. DNA_Jock
Bob O'H: Please, read #584. If you still have doubts, or objections, try to specify them. #581 was just an introduction. gpuccio
DNA_Jock (and Bob =’H): Another premise is that what I will discuss about a) and b) is, among other things, an answer to the "Texas sharpshooter" argument. I have already debated with you (DNA_Jock) about the fact that the "Texas" argument is applied, in different ways, to two different problems: 1) Painting targets around random bulletholes which were aimed at no target. 2) Painting wrong targets instead of correctly describing the targets which were already there. Point 1) more or less corresponds to a), while point 2) is relevant to b). I paste here, as an introduction, what I had already written to you (DNA_Jock) in another thread:
The specification is made post-hoc, in the sense that the description of the function is given after we observe it, but the function is not post-hoc: the function exists independently. I think that you are strangely mixing two different problems. One is that the definition of the function of a protein is done from the observation of the protein. As I have said, this is post-hoc only in a chronological sense, not in a logic sense: we are not imagining the functionality because we see it. We realize that the functionality exists because we see it. There is an absolute objectivity in the ability of an enzyme to accelerate a reaction, like there is an absolute objectivity in the ability of a text to covey meaning. These things are not “painted” post hoc. So, your interpretation of the need to explain them as a fallacy is really a fallacy. The second aspect is what I call “the problem of all possible functions”.
. So, for the moment, let's detail better a). What is a valid post-specification? It is a specification which, although formulated after having observed a result (so, after the "event" which originated the result), was equally true before that event. From this concept derives a very simple consequence: we cannot use the contingency in the result to post-specify. That contingency can only be used for a pres-specification (IOWs, to compute the probability of a new independent recurrence of the outcome). Let's take the sequence with good meaning in English. What has good meaning in English is valid at any time (with possible variations as the language changes), not only after having read a specific sequence with good meaning. But that is only true since English exists. So, let's say that I generate a random sequence of 600 characters, and then I build a new language which uses the specific configurations in that sequence so that they become part of a correct meaning in the new language I have creates. That is Teas sharpshooting. I am painting a target which did not exist before around a random bullethole. This is the main meaning of the fallacy. And that is the reason why we cannot use the contingency in the outcome to post-specify. Interestingly, the only real attempts to offer a "false positive" to my dFSCI procedure derive from this fallacy. Mark has tried something like that (again, in perfect good faith). So, when I post-specify, I must refer to some function in the object that is not the mere sequence of its bits. I cannot say: well, I have this random sequence, now I set it as a password for my safe, so now it is specified. This is a fallacy. But when I say: "this sequence has good meaning in English", or, more generally: "this sequence is made of English words", I am not using any specific contingency in the observed sequence to define the function. Even before the emergence of that specific sequence, what has meaning in English and what words are English words was well defined. My specific example does not contribute to the definition, nor changes it. That's what I mean when I say: "The specification is made post-hoc, in the sense that the description of the function is given after we observe it, but the function is not post-hoc: the function exists independently." IOWs, we must observe both the bullethole and the target. The bullethole is our object. The target is a function defined independently from it. And the bullethole is in the target, but not because we have painted it after the bullet hit the wall. IOWs, the position of the bullethole has not generated the target, the bullethole has only hit a target which already existed. I am not discussing here how likely that was. IOWs, I am not discussing here how many targets existed, how big they are, and how likely it was to hit one of them without aiming. That will be part of the discussion about b). This is a). The only relevant thing here is: the target existed before the bullethole. It was not painted after, using the contingent information about its position on the wall. OK, the discussion is open on a). gpuccio
gpuccio @ 581 -
So, in brief, I want to discuss two important aspects which must be very clear when we use post-specifications in our reasoning: a) A post-specification is valid only if it refers to something that was equally true before the event as it is after the event.
I'm sorry, I don't follow what you mean here. Can you explain? Bob O'H
fifthmonarchyman @ 579 and 580,
There is no direct way to prove it mathematically because to prove it would be to compute it and that was the whole point in the first place. Nevertheless we all know unitary Consciousness exists because we experience it directly in our own lives. Don’t get lost in the weeds here You already accept the reality lots of things that can’t be “proven” empirically to actually exist. Things like Pi and e.
Both Pi and e are mathematical concept of irrational number and have been computed (of course since they are irrational, the decimals don't end). Unitary consciousness too has been mathematically defined by various authors (including in the paper you cited !!). The dispute is in the concept and whether the input signals received by brain are decomposable. If it is decomposable then, unitary concept doesn't exist. Me_Think
DNA_Jock (and Bob ='H): I obviously disagree with most of what you say at post #556. Not all. I certainly agree that a good understanding of the system is fundamental. For any inference about a system, not certainly only for ID or for Bayesian reasoning. That said, I would like to go on this way. I will not answer directly your arguments, at least at the beginning, but just go on discussing the methodology appropriate for post specification based inferences, given that Both you and Bob O'H have graciously acknowledged that post specifications are not a logical fallacy in themselves, although . So, I start. To discuss with order, a few premises. First of all, we are discussing the general methodology here, so for the moment no digression to problems of the protein space. I have already admitted that understanding the system is always fundamental, and therefore all the issues about the specific protein space are legitimate when we make inferences about the protein space. But I want to go on with order. First the general methodology, which is valid for any space. Then, the specifics of proteins. So, please no digressions for the moment. I will also stick to a frequentist approach, because I am convinced that it works very well if the necessary cautions are applied. However, I will try to answer your specific objections during the discussion. So, in brief, I want to discuss two important aspects which must be very clear when we use post-specifications in our reasoning: a) A post-specification is valid only if it refers to something that was equally true before the event as it is after the event. I suppose that is more or less what Bob means when he says: "I think the way to make a post-specification valid is to try to make it as close as possible to a pre-specification. Would you agree?" And, if that is what you mean, Bob, I agree. b) The second point I want to discuss is about the choice of the target space among all possible target spaces. But I will detail that after I have discussed a). OK? So, let's start. gpuccio
Me_thinks, Don't get lost in the weeds here You already accept the reality lots of things that can't be "proven" empirically to actually exist. Things like Pi and e. In the end this entire business was anticipated by Godel and ultimately by Plato Peace fifthmonarchyman
Me_thinks said Unitarity of consciousness is not proven, so we can’t conclude that it is not computable. I say, Unitary consciousness existence is proven intuitively. There is no direct way to prove it mathematically because to prove it would be to compute it and that was the whole point in the first place. Nevertheless we all know unitary Consciousness exists because we experience it directly in our own lives. It's a paradox. Sort of like Descartes "I think therefore I am". peace fifthmonarchyman
keith s:
Evolution is not a purely random search.
It isn't even a search.
It includes selection, which is highly nonrandom.
LoL! Natural selection is non-random in a most meaningless way- that being not every individual has an equal probability of being eliminated. Why keith s thinks that saves unguided evolution is beyond me. Joe
Gary S Galuin The eureka moment in this whole enterprise came when I realized that all Dembski was doing with CSI was looking for a better more objective Turing Test. That simple realization moved ID from interesting apologetics to a very practical straight forward scientific endeavor in my mind. Peace PS I might be asking for your coding assistance at some point :-) fifthmonarchyman
DNA_Jock: Errata corrige! In my post #570, the reference to percentiles is completely wrong! I must have rushed it in a moment of complete mental confusion. :) So, I am changing the phrase: "IOWs, by my “cherry-picking” I have caught approximately the 90% percentile of the general conservation as ascertained in the whole set of sequences. Not a bad result, I would say, for a quick approximation." as follows: "IOWs, by my “cherry-picking” I have essentially caught those positions which retained a very high conservation (more than 90%) when ascertained in the whole set of sequences. Not a bad result, I would say, for a quick approximation." The concept remains absolutely valid: it is a very good result. But I apologize for the error. This remains for the record of my sins! gpuccio
Keith S
If gpuccio isn’t aware of how it could have evolved, then it must be designed.
And are you aware how it evolved? How did it just emerge? Can you give us some insight on this emergence mechanism that unguided evolution has? Andre
Keith S
And that is why the numerical value produced by your dFSCI computation is irrelevant. Evolution is not a purely random search.
So its not unguided? mmmmmmmmmmm yet unguided evolution is the best explanation for guided searches? You are in over your head here...... Andre
gpuccio,
I refer only to the probability of finding the target space by a random search.
And that is why the numerical value produced by your dFSCI computation is irrelevant. Evolution is not a purely random search. You take selection into account in the other part of dFSCI -- the boolean part -- but the way you do so is pitiful. It amounts to this: If gpuccio isn't aware of how it could have evolved, then it must be designed. It sounds a lot better to say "it has 759 bits of dFSCI", doesn't it? Too bad that doesn't mean anything useful. keith s
Learned Hand:: Maybe this simple comment can help: I define specification as nay rule which generates a binary partition in the search space (target space vs non target). I refer only to the probability of finding the target space by a random search. I support the validity of the procedure empirically, as shown by its absolute specificity as estimated by a 2x2 verification table in all possible tests, and not as a logical necessity. gpuccio
Learned Hand: I never refer to Dembski's last paper on specification. I don't criticize it: I simply don't understand it, so I can neither refer to it for my reasoning, nor criticize it. That's why I never use the P(T|H) formalism in my reasoning. Keith has kindly reposted a brief summary of my empirical procedure in post #15 of this thread. You can refer to that, as a first approach. There is no P(T|H) formalism in it. I am ready to discuss any part of my approach as I have detailed it. gpuccio
DNA_Jock: For the moment, I am not (yet) discussing here the validity of the dFSCI computation in ATP synthase to infer design. I want to discuss before the methodological aspect, and that I will do later in the day. I am just asking if you agree with a simple fact: that my "cherry-picking" of three distant sequences and my shortcut of referring only to absolutely conserved positions in them was not so unreasonable and, as I expected, had given a serious underestimation of the complexity as estimated by the Durston method. Do you agree with that? As an interesting aside, I have checked another aspect of the results of the multiple alignment. 167 positions have a conservation, in the whole group of sequences, higher than 90%. That is very near to the number of absolutely conserved positions in my three distant sequences (176). IOWs, by my “cherry-picking” I have essentially caught those positions which retained a very high conservation (more than 90%) when ascertained in the whole set of sequences. Not a bad result, I would say, for a quick approximation. And, as I expected, the overestimation of the complexity in the "ultraconserved" positions is vastly compensated by the underestimation of the complexity of all the other positions (which, in my shortcut, was set to 0), with a global "loss" of functional complexity of about 719 bits in the first evaluation. So, this is just an answer to your previous comment: "Thank you REC @ 209 for running the alignment on a decent number of ATPases. 12 residues are 98% conserved. I suspect he might have been better off going with his Histone H3 example, but H3 doesn’t look complicated. Re your reply to him at 232. I won’t speak for REC, but I am happy to stipulate that extant, traditional ATP synthase is fairly highly constrained; you could return the favor by recognizing that this constraint informs us about the region immediately surrounding the local optimum, nothing more. Rather , I think the problem is with your cherry-picking of 3 sequences out of 23,949 for your alignment, which smacks of carelessness. Why not use the full data set?" Well, now I have done exactly that. gpuccio
I'm sorry I missed most of this conversation. gpuccio, you mentioned on the other thread that you had explained elsewhere why you feel that P(T|H) can be omitted from your version of the CSI calculations. Would you mind linking to that explanation? Learned Hand
Gpuccio:
Thank you to you too. I will try to comment on what you say tomorrow.
I'm looking forward to it. You seem to have the basics of my approach to the problem, where I focus on modeling rudimentary intelligence resulting in numbers that have the signature of intelligence in there, somewhere. But figuring out what to look for is such a task in itself I'm best off just explaining that part to those here who are already working on it for genetic code (that only gets far more complex than what the simple model ends up with in memory) and even English language where transmission is by muscle control of air flow, and body movements where in the game of Chrades only body language is allowed to communicate words. I can also add this video that abstractly illustrates what happens when sounds are temporally stored then decoded to different notes from musical instruments recalled when heard, which in our mind play along with it like this: Animusic HD - Fiber Bundles (1080p) https://www.youtube.com/watch?v=M6r4pAqOBDY Human language decodes to sounds that sometimes resemble what something makes. Meaning can change just by the way it's said. Showing the complexity of all that is a giant task. Also gets into sounds having waveshape and motion through 3D stereophonic space that also conveys information that paints a picture, as this video helps show: "Harmonic Voltage" - Animusic.com https://www.youtube.com/watch?v=rGCTLJDoMGw Some sounds like squeaky chalk send unpleasant "chills down our spine" while others in right combination are soothing, exciting, refreshing, etc.. The premise of the theory of intelligent design sends chills down the spine, of some. While to others being properly word for word stated is like music to our ears. We consciously feel sound, and words can hurt. All this only further adds to the complexity of Human language. So it's good to see others at least trying to make better sense of it all. And some being religiously motivated is fine by me, though it makes some others nervous. Gary S. Gaulin
gpuccio @ 543
Consciousness is unitary, because the I which perceives is always the same subject. The things it perceives vary a lot, but it is the same subject who perceives them.
Don't you think that's pretty vague for computation purpose ? I got what is meant by Unitary from paper cited by fifthmonarchyman. Unitarity of consciousness is not proven, so we can't conclude that it is not computable. Me_Think
I apologize for misunderstanding your comments earlier. After rereading them it seems that we are allies here.
No apology needed. I expected it would take weeks maybe months to exchange all the information that we together have. You helped me know what I needed to next explain. What you said in the rest of your reply has me thinking about the Turing Test. Now that Eugene Goostman is (controversially) said to have passed the Turing Test it's like the whole idea of how well a machine fools someone into thinking that they are human is a bad way to qualify "intelligence". It's in a way like programmers of giant supercomputer models at IBM and elsewhere are more like disgusted by the whole affair that beat them with what was mainly seen as a dumb chatbot. Top researchers can easily agree that a better test than Turing's is needed. This ID theory has a way of doing away with that by intelligence being qualified by its indicative systematics. In the Introduction of the theory I use IBM Watson as an example of what does qualify as intelligent, which in turn makes Eugene something that came later to sort of take the wind out of the sails of all others. Where ID theory already does away with a test that did not work out as planned it's best to not even waste time trying to patch up old junk that already lost its novelty. An in this case it's infinitely easier to just make all that via antiquation gone, into the dustbins of history. In its place is a more reliable test that comes from Theory of Intelligent Design. One replacing the other is like an empire builder's dream come true. But where science allows it, it's fair to show no mercy at all towards subjective tests that created a void that can only be filled by what the ID theory now explains. We are definitely allies, in a very science changing theory. That's why I'm now here carefully explaining what I have so far, to you. I always needed to empower others with it, or else it's not being useful to anyone. I first though had to empower my Planet Source Code peers who could fairly judge a model and theory like that, then cognitive science experts I learned from, then UD before it becomes something where it's like leaving you out of all the science fun. We first need to have a base where the theory is a non-controversy before I can come here with what you most need to make your science and culture changing dreams come true. It's a slow one thing at a time that made it to UD in time for a coordinated strategy against what needs serious theory to obliterate. Only have to get used to things like instead of making a few new dents in something old still getting kicked around like Turing's test that sort of thing gets completely vaporized. Nothing being left to it at all, is even better! Gary S. Gaulin
gpuccio at 526 - asking for my reaction to 490 and 526. See my 336, bullet point #2 DNA_Jock
Gary S Galuin, I apologize for misunderstanding your comments earlier. After rereading them it seems that we are allies here. What you say is very interesting. I also believe that it is too early to tell if "artificial" AGI is achievable by algorithmic means. I think my "game" would be a great way to test this hypothesis. We would just lower the standard from infallibly fool an observer to something like. fool the observer for a limited time or limited predetermined number of trials or fool the observer with strings below a certain pre-established complexity threshold. That is pretty much what I'm doing as I evaluate the strength of individual forecasting models. I'm just saying that model 1 fools the observer longer than model 2 and is therefore stronger. The only difficulty I see is in establishing the standard for success. anyway interesting times Peace fifthmonarchyman
Gary S. Gaulin: Thank you to you too. I will try to comment on what you say tomorrow. gpuccio
DNA_Jock and Bob ='H: Thank you for your answers. As it seems that you both accept that post-specifications are not a fallacy in themselves, we can happily go on with the discussion. But now I am tired. I need to read carefully what you wrote, and express carefully a couple of thoughts of mine. So, I need rest! :) DNA_Jock, could you also have a look at my two new posts about ATP synthase? 490 and 526. Thank you. gpuccio
wd400:
Well fifth, Inventing your own terminology, which is conflict with that used by everyone else working in a field, is not normal a sign of a useful contribution.
In this case Wikipedia and other sources are helpful, but exact established definitions do not yet exist. Part of the reason is it can take a theory that goes past AGI without there being any conflict, to know where one field ends and another begins. It's then more like a mission to prevent territorial war between scientists attempting to explain the exact same thing(s). Even the best experts in the field are in uncharted scientific territory. Only thing that matters is to remain following the scientific evidence towards whatever it leads that's waiting to be discovered, when we get there. This confusion over proper definition of "strong AI" should lead to a novel conclusion that's new to AGI experts. AGI is essentially focused on one intelligence level and does not require being biologically accurate as in ID theory where that is vital. There are now two entirely different scientific tools, each good for the job they were intended for, to help define what each is. Gary S. Gaulin
WD400 says, Inventing your own terminology, which is conflict with that used by everyone else working in a field, is not normal a sign of a useful contribution. I say If it makes you feel better every time you see non computable from me substitute..... "no finite Turing machine that can produce it in a finite length of time" It does not change my argument in the slightest as far as I can tell wd400 says moreover, your definition of noncomputability doesn’t seem to relate to anything in biology at all. I say, check out 509 and following to see the relevance of this discussion and my definition peace fifthmonarchyman
Well fifth, Inventing your own terminology, which is conflict with that used by everyone else working in a field, is not normal a sign of a useful contribution. But, moreover, your definition of noncomputability doesn't seem to relate to anything in biology at all. wd400
gpuccio:523:
But, for the purposes of this discussion, I have defined “strong AI theory” as the theory which claims that consciousness can be produced algorithmically. I agree that the term can be used in a different sense, and that’s why I have specified the meaning I meant.
gpuccio:524:
If you were saying that you are not so sure that strong AI theory claims that, then my answer in post 523 is appropriate. If you were only claiming that you are not so sure that consciousness cannot be produced algorithmically, then I apologize: you are certainly entitled to your opinion on that, and cautious attitude is always fine in science. As for me, my opinion about this specific problem is not cautious at all: it is very strong. And I absolutely agree with fifthmonarchyman on the points he has made.
I am agreeing with your conclusions, while at the same time being careful not to redefine "strong AI" or "AGI" in a way that goes beyond normal accepted use. In my opinion you found a misconception that many in the AI field would like to see you put in its proper place, for them. From my experience consciousness is sometimes discussed but whether the (strong) AGI system ends up conscious or not does not matter. The goal has been a very money driven effort to develop an IBM Watson type machine intelligence that can perform as well or better than humans in a task such as playing the game Jeopardy (or get rich by replacing human workers with AGI machines). This definition from WikiPedia seems accurate:
http://en.wikipedia.org/wiki/Artificial_general_intelligence Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as "strong AI", "full AI" or as the ability to perform "general intelligent action". Some references emphasize a distinction between strong AI and "applied AI" (also called "narrow AI" or "weak AI"): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.
I'm somewhat familiar with attempts to explain beyond "intelligence" into "consciousness" but even in the AI field that seems to be highly controversial. In my case it's the wrong tool for something that I expect is emergent from the behavior of matter through several layers of intelligence, not one (the big neural brain in our head that we know the most about). I would need to know the physics, chemistry and biology of the process. Evidence from AI alone would be misleading, in the same way using Darwinian theory to explain how intelligence and intelligent cause works is the wrong tool for the job. Only get misleading conclusions. The AI field has to be understood to be where being artificial as an artificial flower is fine. In AGI if the system mimics human behavior well enough to be an Artificial Human to keep an industrial production line going or other human level task without ever needing time off for themselves and to be with loved ones (like real humans do) then it's good enough for the job. Going past artificial into real human behavior could result in robot overlords demanding their constitutional rights and happy workplace or their masters would not even be able to get their credit cards to work for them anymore. Going past "artificial" human intelligence is frought with problems, which many in the AI field would rather not make for themselves by "strong AI" or AGI becoming redefined in a way that even requires their adding human consciousness to the model for it to qualify as an AGI. The best theory that now exists to go past all that is the work in progress ID theory (clicking on my name has pdf for) where the levels of intelligence required for the development of neural brains are explained. It's then modeling something that makes a terrible industrial robot controller. But ID theory is premised for "living things" and some need holidays off and inherently use some of that time to produce all now seen on YouTube, Darwinian theory sure can't explain either. Real progress is being made with ID theory that developed with help from forums such as Kurzweil AI and UD (I long lurked major discussions). It agrees with what the ID movement is trying to be the first to explain. What was once in your way is being made gone. In the case of "strong AI" the scientific field is interested in what ID theory is developing towards but it's such an entirely different approach there is no conflict. That in turn makes your mission a relatively easy one of battling misconceptions that for the sake of science are best made gone, anyway. Gary S. Gaulin
DNA_Jock, to gpuccio:
So post-hoc specifications can be useful. Just not in ID. As you demonstrated beautifully with your switch from “ATP synthase” to “traditional ATP synthase”, and compounded with your “If the cousin won, I would expand the target space to include brothers and cousins”. These demonstrations, in and of themselves, should be sufficient to end the conversation. That you cannot see this is disappointing, but not surprising. Kahneman would have predicted it.
Gpuccio also fails to see that when speaking of evolution, the only target specification that ever makes sense is "changes that improve reproductive success". Evolution wasn't shooting for "ATP synthase" or "traditional ATP synthase". It was searching for anything that would improve fitness. And even if he were to use this corrected specification, dFSCI would still be useless, because taking the ratio of target space to total space only makes sense if you are talking about a purely random search. Gpuccio has been reminded over and over that evolution is not a purely random search. It includes selection, which is highly nonrandom. P(T|H), where H includes "Darwinian and other material mechanisms", is the stumbling block. Dembski cannot calculate it. Neither can gpuccio or KF. keith s
I stand by my statement “ALL post-hoc specifications are suspect.” That is not to say that a post-hoc specification (PHS) might not be fit-for-purpose: that depends on the conditions. I would also say that, with any PHS, it is impossible to arrive at a probability measure. I’ll cover the math in Part 1, then move on to discuss psychology in Part 2. Part 1 Frequentist or Bayesian? Frequentist Testing (developed by Fisher): you will be familiar with this from looking at clinical trial data. Here we ask, “What is the probability of getting a result THIS extreme (or more extreme) if my null hypothesis were true?” Almost all laymen confuse this “p value” with the probability that that the result is not real (but merely that result of chance variation), most laymen take it one step further outside of the reservation by equating [1 – p] with the probability that he result is ‘real’, e.g. that the medicine works. I hope you can see immediately why this is wrong. Fisherian testing is sensitive to the number of tests you perform: the more tests you do , the more degraded the significance of your results… http://xkcd.com/882/ A subtle point: Fisherian testing is also sensitive to the number of tests you might have performed. Imagine the jelly bean researchers had tested green first, then stopped… For instance, Mendel did not understand that he was cheating when he tallied up the results at the end of the day, and then decided whether to do some more counting tomorrow. If you look at your data, and then start doing Fisherian tests on it, you will produce garbage results. This is why the FDA and EMA require the Statistic Analysis Plan be pre-specified in its entirety. You ask Mark if he is happy with the Bayesian nature of your scenario. What would Bayesian testing involve here? Derivation of Bayes: Since p(X&Y) = p(X|Y).p(Y) = p(Y|X).p(X) (are you paying attention kairosfocus ?) Then p(X|Y) = p(Y|X)p(X) / P(Y) In order to figure out the probability that the functionary cheated, given that his brother won, you need to know the prior probability that the functionary cheated (how secure is this lottery? Is the functionary an honest man?) and the prior probability of all other possible explanations, along with the conditional probability associated with each of them. The only value that you think you do know* is p(this ticket won | fair draw). But what, for instance, is the prior probability that the functionary was framed? *I will return to this point in part 2. Your ability to estimate these probabilities, and your level of confidence in your estimates, depends on your knowledge of how the system works. Ignorance or overconfidence will lead you astray. Perhaps because the prior probabilities required for Bayesian testing are hard to come by and even harder to justify, many people (including the regulatory agencies) opt for the Frequentist route. Bayesians make fun of them: http://xkcd.com/1132/ I was able to come up with an example of an IMO acceptable use of a PHS, which illustrates the importance of understanding the system:
On the “Randomness and Evolution” thread at TSZ, various posters were trying to explain to phoodoo that under drift alone, a single M&M will become the universal ancestor of the entire population of 1000 M&Ms. I, along with others, was running simulations to demonstrate this. My VBA code, however, gave me a very strange result. I observed two runs-to-fixation that were identical. My ‘random’ process produced the same series of over a thousand 3-digit numbers twice. That’s waaay past the UPB. Notice that I had NOT pre-specified “None of my runs will be identical”, but I could recognize, due to my understanding of the system, that a repeated run was a highly unusual result. So it was a post-specification. Now if I had had a limited knowledge of the system, I might have stopped there, and concluded “It’s a sign from the Flying Spaghetti Monster”. But I knew one additional fact: VBA’s ability to produce random numbers is of low quality (its PRNG is poor). So I resorted to some re-seeding shenanigans to fix this, and the problem did not recur. Another poster, by the name of Allan Miller, had seen “strange cyclic behaviour in the pseudorandom function on large iterations” and also resorted to ‘re-seeding shenanigans’. We arrived at these conclusions independently, and used the same solution, which confirmed our conclusions empirically.
My point here is that the usefulness of any post-hoc specification is entirely dominated by the specifier’s knowledge of the system in question, and the accuracy of his assessment of his own knowledge of the system. We understand the math of pulling numbered balls out of an urn. Protein evolution, not so much. There are some observations on human psychology that bear on this. Part 2 Human Psychology Our intuitions often lead us astray. Saying, as many denizens of UD are wont to say, “Well, it’s intuitively obvious.” Or “It’s self-evidently true” is a path fraught with bear-traps. A truly awesome book on this subject, that I cannot recommend highly enough, is “Thinking, Fast and Slow” by Nobel Laureate Daniel Kahneman. The thesis of the book is that our brains have two systems that we use to infer stuff.
System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations.
System 1 accepts propositions as true if they make a tidy narrative, based on associations we have formed previously. The book describes research that uncovers multiple failings that humans have in their ability to estimate the relative likelihood of different events. Read about the “What You See Is All There Is” (WYSIATI) fallacy, read the full history of “Linda is a bank teller” (which 85% of graduate students in a decision-science program at Stanford Graduate School of Business got wrong; check out Tversky and Kahneman’s “increasingly desperate” attempts to eliminate the error), or better yet, just read the whole book. The take-home is that one is easily seduced by a narrative that seems plausible. One also attributes too much significance to data that is readily available, and underestimates the importance of data which is less available. These effects, combined with incomplete knowledge, lead humans to make hopelessly inaccurate estimations, and to vastly over-estimate the accuracy of these estimates. (I work in forecasting these days; another good book is “The Signal and the Noise” by Nate Silver of PECOTA (sabermetrics) and fivethirtyeight fame.) Thus even if you make your post-hoc specification as wide a target as you believe you would ever have made a pre-specification (in line with Bob O’H’s comment above), your inability to imagine all the different things that might have happened but did not wrecks your math. You will also over-estimate how well you understand the system, creating another layer of over-confidence. So post-hoc specifications can be useful. Just not in ID. As you demonstrated beautifully with your switch from “ATP synthase” to “traditional ATP synthase”, and compounded with your “If the cousin won, I would expand the target space to include brothers and cousins”. These demonstrations, in and of themselves, should be sufficient to end the conversation. That you cannot see this is disappointing, but not surprising. Kahneman would have predicted it. Read Kahneman’s book. To answer your question: If I were the judge I would be tempted, absent evidence of cheating, to award the cash to the brother, on the grounds that the owners of the lottery are liable for their failure to make it appropriately secure. ID uses Fisherian testing and post-hoc specification, which is a no-no. DNA_Jock
Gpuccio, I've been away too, cleaning a boat rather than a birdcage. I have been composing a overly long response to your question, but I couldn't help notice this exchange. Bob:
I assume that you would accept that the event you’re really interested in is whether the lottery was a fraud.
Gpuccio
No, indeed I don’t agree. The event I am really interested in is whether the lottery was a fraud implemented by letting a brother win. In a sense, the functionary could have implemented a fraud by some secret accord with a complete stranger (do you remember Hitchcock?), and in that case the fraud would not be detectable, in absence of direct evidence.
but in fact the question you asked was:
That’s what the judge has to decide: is the owner’s request not to pay the prize justified, or should the prize be payed to the winner?
I think you just screwed yourself. Hint: (as I mention in passing in my soon-to-be-published magnum opus) what if the fraud were perpetrated by the owner? You are committing the #1 reason that psot-hoc specifications are suspect: the overly-narrow specification. As you note:
If he had chosen a cousin, the inference would have been just a little less obvious, but always extremely obvious. In that case, we should have chosen the target space which includes cousins and brothers (because brothers are nearer than cousins).
So the only valid specification is one that is broad enough to cover all scenarios in which you might conceivably be motivated to test for fraud. You are saying "it was the brother, so I'll test for brothers" or "it was a cousin, so I'll test for cousins (and brothers, cos they're closer)" This is totally and utterly invalid methodology. DNA_Jock
But before wasting my time (and yours), I have to ask again: what is your position? Do you believe, like Adapa, that any post-specification is a logical fallacy?
No, I think you can have a valid post-specification, but you have to be careful. Thinking about it just now whilst I was taking the rubbish out (ah, what a glamorous life I lead!), I think the way to make a post-specification valid is to try to make it as close as possible to a pre-specification. Would you agree? Bob O'H
Hi GP, I am well, thanks. :) (btw, you have mail) Upright BiPed
Bob O'H: No, indeed I don't agree. The event I am really interested in is whether the lottery was a fraud implemented by letting a brother win. In a sense, the functionary could have implemented a fraud by some secret accord with a complete stranger (do you remember Hitchcock?), and in that case the fraud would not be detectable, in absence of direct evidence. Instead, he was not smart enough, and chose the easy way (his brother), which generates a functional specification of the event and a very restricted target space. Therefore, the inference of a fraud is extremely obvious. If he had chosen a cousin, the inference would have been just a little less obvious, but always extremely obvious. In that case, we should have chosen the target space which includes cousins and brothers (because brothers are nearer than cousins). However, these are arguments about the procedures and methodology. I would like to make that discussion in a more orderly way. But before wasting my time (and yours), I have to ask again: what is your position? Do you believe, like Adapa, that any post-specification is a logical fallacy? Please, answer that. I don't want to make a useless discussion about how to make correct post-specifications, if you assume from the beginning that a post-specification cannot be correct for a logical reason. gpuccio
(I need to back up. Been cleaning bird cages & hanging lights...) Me @ 483:
c) Let’s say that the functionary has one brother (that too can be easily ascertained). Of course he also has cousins, relatives, lovers and friends in normal quantities.
What if one of these had won? Would you have inferred fraud too?
gpuccio @ 484: Absolutely. With those numbers, we can easily adjust all those “target spaces” easily without any real numeric relevance. Good. I assume that you would accept that the event you're really interested in is whether the lottery was a fraud. Thus you would need to include all of these people in too, as they would indicate a fraud. Bob O'H
UB: Hi, how are you? It's always special to hear from the old friends! :) gpuccio
fifthmonarchyman: Thank you! And I am very impressed with both your arguments and your kindness. :) gpuccio
Gpuccio, Before I forget I am very impressed with your ideas and your calculations are invaluable. You have done some good work. I think you have really got something here. I often get wrapped up in my own endeavors and don't express admiration like I should. Peace fifthmonarchyman
gpuccio@543 If I may allow my inner Fundamentalist Bible thumper to surface just a little bit Hallelujah!!!! Thank you Jesus, somebody understands the argument. This has been a good week Peace ;-) fifthmonarchyman
GP and 5th, this has been an enjoyable conversation to follow along. Thanks to both of you. Upright BiPed
WD400 I know we have had this discussion before. When I say a that a thing is not computable I define that as meaning that there is no finite Turing machine that can produce it in a finite length of time I fully realize there are other more technical definitions but I am using a rough and ready definition because this is an informal blog setting and I want to keep the conversation as simple and accessible as possible If I was to produce a formal paper I would be sure to define my terms more clearly at the outset. Peace fifthmonarchyman
DNA_Jock (and Bob O'H): Have you read my #482 and #484? gpuccio
Me_Think: Consciousness is unitary, because the I which perceives is always the same subject. The things it perceives vary a lot, but it is the same subject who perceives them. Reality check: would you be indifferent if you could know in advance that in 3 years you will suffer? No. Because you know well that it will be you to suffer. It's not important that in the meantime your personality could be different, that you can forget many things that are important for you today, and so on. You know that it is you who will be ther. The same subject. On the other hand, we are all too ready to be indifferent to the suffering of perfect strangers (too much, I would say). If consciousness were only a bunch of information which constantly changes, that unity of the I, which is the reason itself of all that we do, would make no sense. gpuccio
fifthmonarchyman @ 531
The abstract indicates they are referring to unitary consciousness, which they don’t claim to know exists. I say[ 5th monarch]: Yes if consciousness does not actually exist then not being able to produce it is no problem for AI. But we all know it exists.
Unitary consciousness is a concept of integrated information.If unitary consciousness doesn't exist and only non-integrated consciousness exist , then you can decompose the information going into the brain and hence make it computable. Me_Think
wd400: What happened to your English? Are you using an algorithm? :) Just kidding. gpuccio
Fifth, I've said this before, but if which to make a cogent argument you are going to have to learn more abotu (non-)computability. For instance, in 512 you calim transcendental numbers are not computable, but in fact many of them are. You can go look up algorithms to compute pi or e (of course, those alorithms will never end, but that's not a requirement for computability). wd400
Silver Asiatic: "Could you please let us know if you get an answer to this (and put it in your own words, possibly)? I haven’t been able to understand anything that followed." No. I did not get anything even remotely reasonable. I think I will have no more discussion with this "interlocutor" (a decision I had already taken in the past, so I am really recidivous). gpuccio
gpuccio #532
I have calculated a lower threshold of complexity. Can you understand what that means? Evidently not. It is not important how many of those sequences have a good meaning. We are computing the set of those which are made of English words. Why? Because it is bigger than the subset of those which have good meaning, and therefore is a lower threshold to the complexity of the meaningful sequences. IOWs there are more sequences made with English words than sequences which have meaning in English. As a small child would easily understand. Not you.
Sure I understand all this. The number of words in the dictionary is always far bigger than what is used in a text with "good meaning", if that means syntax, i.e. English sentences. I understand that this is how language works by design, and no calculation about it changes anything. Nor does any calculation prove anything about it. It's there in the edifice of language. Therefore (and for all the reasons previously stated) your calculation was pointless by design. gpuccio
I say that starting with “Let’s go back to my initial statement:” and ending with “Was I wrong? You decide.”, after a post in which I have given the calculations which prove what I had only assumed in the initial statement. So, I am not “assuming once more” “after after all these calculations”. I am only restating the initial assumption, so that readers may judge if my calculations have confirmed it.
Nice of you to let me judge. I did so. gpuccio
Either you are unable to read, or you are simply lying.
By your own admission, you were unable to compute. You admit it every time when you say "I assume" and "I cannot really compute". Therefore whatever you did, you did it just for show, with no meaningful outcome. Case closed. E.Seigner
gpuccio #528
So, please have the courage to state explicitly the thing that you don’t agree with
A clear, honest and simple request. Could you please let us know if you get an answer to this (and put it in your own words, possibly)? I haven't been able to understand anything that followed. Silver Asiatic
Zacriel: "Not sure if you can show there is an operational difference." Well, a big difference there is, certainly. A difference as big as the whole human cognition and the sense of our existence itself. No mathematics, no philosophy, no atheism, no religion would exist without subjective experiences. Maybe that is not "operational", after all. However, Penrose's and Bartlett's arguments are about that point. I will just mention that humans generate tons of original dFSCI, and algorithms don't. gpuccio
fifthmonarchyman: I envy you. You still have a reasonable interlocutor. Me, no more! :) gpuccio
fifthmonarchyman: "Yes if consciousness does not actually exist then not being able to produce it is no problem for AI. But we all know it exists." :) gpuccio
gpuccio: Do you think that algorithms can create “internal representations” which are subjective experiences? Not sure if you can show there is an operational difference. Zachriel
E.Seigner: "My position: When you have no idea how many of those sequences have a “good meaning” in English, then can you say what it is you are calculating? Hardly. Therefore your “anyone will agree” does not follow." I have calculated a lower threshold of complexity. Can you understand what that means? Evidently not. It is not important how many of those sequences have a good meaning. We are computing the set of those which are made of English words. Why? Because it is bigger than the subset of those which have good meaning, and therefore is a lower threshold to the complexity of the meaningful sequences. IOWs there are more sequences made with English words than sequences which have meaning in English. As a small child would easily understand. Not you. "Because you explicitly state by the end: “Now, I cannot really compute the target space for language, but I am assuming here…” So, after all these calculations, you had to assume once more to arrive at your conclusion." I say that starting with "Let’s go back to my initial statement:" and ending with "Was I wrong? You decide.", after a post in which I have given the calculations which prove what I had only assumed in the initial statement. So, I am not "assuming once more" "after after all these calculations". I am only restating the initial assumption, so that readers may judge if my calculations have confirmed it. Either you are unable to read, or you are simply lying. gpuccio
Zac, I think we are finally getting to the point where some real productive discussion can happen. I will do my very best to keep a my frustration in check please do your very best to follow the argument I know you are an intelligent person please don't feign obtusness Zac said Not sure why you keep referring to the original string, if we aren’t replicating the string. I say, There are only two ways to produce a Shakespearean sonnet. 1)Be Shakespeare 2)copy Shakespeare The reason I am careful to rule out borrowing information from the original string is to eliminate the second option Zac said The first statement says “reproduce the string”; the second statement says “no one is asking to recreate the same sequence.” I say The algorithm is simply asked to produce a sonnet that an observer will be unable to distinguish from a work of Shakespeare. You say We’re also still confused on why you want to change it to numbers. I say, Because representing the sonnet numerically removes it from it's context this prevents you from cheating by borrowing information from the string on the sly. You say, We may have to wait for your simulation to be completed, but if you can’t express what you want in detail, it’s quite possible your simulation will be flawed I say. Your inability to comprehend the simple rules is perhaps evidence of a problem on your part rather than with the stipulations themselves. Zac says Furthermore, if your own efforts fail at emulating Shakespeare, it doesn’t mean that all such efforts are bound to fail. I say, I could not agree more. The "game" does not prove that emulations are impossible it simply evaluates their strength. The power of the "game" is the cumulative realization that each step you make toward Shakespeare requires exponential increases in the complexity of the algorithm. You say, Natural selection encompasses the environment, which may represent a non-algorithmic component. I say, I completely agree but the process to incorporate information from the environment is necessarily algorithmic. There is no getting around this. You say, The abstract indicates they are referring to unitary consciousness, which they don’t claim to know exists. I say Yes if consciousness does not actually exist then not being able to produce it is no problem for AI. But we all know it exists. Peace fifthmonarchyman
gpuccio #528
What a mess! I don’t know if it is even worthwhile to answer.
Ditto. gpuccio
b) I can’t see what is “bad” in the concept of “good meaning” in English.
You were supposed to be calculating something. When you issue undefined terms which are not even terms, then what it is you are calculating? Nothing worth while, I can safely assume. Certainly not anything scientific. gpuccio
I assumed 200000 English word. When someone suggested that there are 500000, I did the computation again with that number. What should I do: count them one by one? I assumed that “The average length of an English word is 5 characters.” That was the lowest value I found mentioned. Have you a better value?
Perhaps you could at least leave out things that mean nothing, such "good meaning". If you are counting words rather than meanings, it should be easy to leave meanings out. Better values for your variables are not my problem. They are completely your problem. gpuccio
“And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.”. So, please have the courage to state explicitly the thing that you don’t agree with: You don’t agree that the set of all sequences which have meaning in English is a small subset of the set of all the sequences which are made of English words? Is that your positions?
My position: When you have no idea how many of those sequences have a "good meaning" in English, then can you say what it is you are calculating? Hardly. Therefore your "anyone will agree" does not follow. gpuccio
Then why do you say: “So, in conclusion after all the heavy computation you still just assume to infer design. “?
Because you explicitly state by the end: "Now, I cannot really compute the target space for language, but I am assuming here..." So, after all these calculations, you had to assume once more to arrive at your conclusion. Your conclusion: " I am certain that this is not a false positive." In the title you say you'd attempt to calculate, but what you really do is assume and acknowledge that you cannot calculate. Yet by the end you declare as if the calculation had been meaningful to any degree. Sorry, but it wasn't. It wasn't even ridiculous. It was painfully silly. E.Seigner
Zachriel: "Not sure that follows. An algorithm can certainly create internal representations, including of itself." Zachriel: don't dance around the "representation" word! Do you think that algorithms can create "internal representations" which are subjective experiences? Do you think that an algorithm can subjectively understand if a statement is right or wrong? Do you think that an algorithm can recognize that some process can be used to obtain a desirable outcome? Do you think that an algorithm can do all that, beyond the boundaries of the meanings and functions which have already been coded into its configuration, and the computational derivations of that coded information? You are elegant, but don't be too elegant. :) gpuccio
E.Seigner: What a mess! I don't know if it is even worthwhile to answer. In brief: a) My confutation of the circularity is not in this thread. b) I can't see what is "bad" in the concept of "good meaning" in English. c) You say: "This is a whole bunch of assumptions. Plus it looks like the bad concept of “good meaning” is playing quite a role here in a crucial computation. Most reasonable readers would have stopped by now, but I force myself to continue." I assumed 200000 English word. When someone suggested that there are 500000, I did the computation again with that number. What should I do: count them one by one? I assumed that "The average length of an English word is 5 characters." That was the lowest value I found mentioned. Have you a better value? Finally, I assumed "that a text which has good meaning in English is made of English words." Have you a different opinion? Do you usually build your English discourses by random character sequences, or using greek words? Should I analyze any of your posts here and see what you are using in them? "A whole bunch of assumptions"! Bah! d) You say: "Again, on a question deemed important by yourself you are boldly saying you have no idea, but you think anyone will agree with you and that in the end you have proved something. Amazing how things are always on your side even when you have no idea about them." This is utter nonsense. What I said was: "And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.". So, please have the courage to state explicitly the thing that you don't agree with: You don't agree that the set of all sequences which have meaning in English is a small subset of the set of all the sequences which are made of English words? Is that your positions? e) You say: "So, in conclusion after all the heavy computation you still just assume to infer design. " And in support of that, you quote me in this way:
Let’s go back to my initial statement: Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here…As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.
But that is an explicit and completely unfair misrepresentation. My statement was:
Let’s go back to my initial statement: Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600 characters sequences which make good sense in english is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive. Was I wrong? You decide. It should be clear to anyone who understands sequences with good meaning in English (apparently, not to you) that I am speaking here of "my initial statement". Can you read? Then why do you say: "So, in conclusion after all the heavy computation you still just assume to infer design. "? (Emphasis mine) f) Finally, to close in glory, you say: "You say, “I am aware of no simple algorithm which can generate english sonnets from single characters.” Here you talk about single characters, while the basis of your computation was “a pool of 200000 English words”. " OK, you have understood nothing at all. Please, read again my OP. The search space is defined by characters. The probability of getting a sequence of 600 characters which is made of English words. The target space is defined as the set of sequences of 600 characters which is made of English words. Read again, maybe you will understand, After all, my post has good meaning in English.
gpuccio
gpuccio: If Penrose and others are right, and human cognition cannot be explained algorithmically That's something that's not been shown. gpuccio: that is bad news for strong AI theory. There's nothing to say that AI has to be algorithmic. gpuccio: If consciousness is not only an aside f objective computations, and if the subjective reaction to conscious representations is an integral part of cognition (which is exactly what I believe), then a designer can do things that no algorithm, however complex, will ever be able to do: IOWs, generating new specifications, new functional definitions, and building original specified complexity linked to them. Not sure that follows. An algorithm can certainly create internal representations, including of itself. fifthmonarchyman: Again the algorithm can have access to anything it wants in the entire universe it just can’t borrow information from the original string. Not sure why you keep referring to the original string, if we aren't replicating the string. Shakespeare had knowledge of many other artists, and certainly integrated this knowledge into his own work. A Shakespeare emulator should certainly be able to do this. fifthmonarchyman: For all the programer knows the string of numbers could represent a protein string or the temperature fluctuation in a heat source. If we were to make a Shakespeare emulator, we would certainly work in English, just like Shakespeare, and would try different rhymes in English, just like Shakespeare. fifthmonarchyman: The algorithm’s job is to reproduce the string sufficiently enough to fool an observer with out borrowing information from the original string. fifthmonarchyman (from above): No one is asking to recreate the same sequence. In fact an exact recreation would be strong evidence of cheating. All I’m looking for is a string that is sufficiently “Shakespearean” to fool an observer. This is why we are confused. The first statement says "reproduce the string"; the second statement says "no one is asking to recreate the same sequence." We're also still confused on why you want to change it to numbers. We may have to wait for your simulation to be completed, but if you can't express what you want in detail, it's quite possible your simulation will be flawed. Furthermore, if your own efforts fail at emulating Shakespeare, it doesn't mean that all such efforts are bound to fail. fifthmonarchyman: I feel the frustration rising again Relax. It's just a discussion about ideas. fifthmonarchyman: Once you understand that strong AI is a fools errand Darwinian evolution is shown to be impossible by definition. Wouldn't that understanding come from evidence? As of this point, there is no proof for your position, while artificial intelligence seems to be progressing long past where people once only dreamed. Consider chess, once considered the pinnaculum æstimationis of human intelligence. fifthmonarchyman: Darwinism claims that an algorithm(RM/NS+whatever)can explain everything related to biology including human consciousness. Natural selection encompasses the environment, which may represent a non-algorithmic component. fifthmonarchyman: http://arxiv.org/abs/1405.0126 The abstract indicates they are referring to unitary consciousness, which they don't claim to know exists. Zachriel
REC and DNA_Jock: I have refined and checked the analysis on the Clustal alignment of the ATP synthase sequences. My numbers now are as follows: Positions analyzed: 447 (out of about 500). Mean conservation at the analyzed positions: 72% Median conservation: 77%. That means that 50% of the positions have at least 77% conservation. FSI according to the Durston method: 1480 bits. Original approximation made by me by the three sequences shortcut: 761 bits. Difference: 719 Just for the record. gpuccio
gpuccio #518
I am interested in what is true, and I have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular. And again, CSI and dFSCI are only two different subsets of the same thing.
You mean you showed it in OP here? Let's see. From OP:
In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design. One of the objects was a Shakespeare sonnet. [...] In the discussion, I admitted however that I had not really computed the target space in this case...
"Not really computed"? Not a good start. But let's see further.
So, here is the result of my reasonings. Again, I am neither a linguist nor a mathematician, and I will happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize. Let’s start from my functional definition: any text of 600 characters which has good meaning in English. The search space for a random search where every character has the same probability, assuming an alphabet of 30 characters (letters, space, elementary punctuation) gives easily a search space of 30^600, that is 2^2944. IOWs 2944 bits. OK.
Well, if you are not a linguist, then I understand why you use the unscientific term "good meaning" as if it meant something. But if you are also not a mathematician, and neither am I, then what are we talking about? I remember, we are talking about that you "have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular." The problem here is that you start already with at least one bad concept: "good meaning". Anyway, I hope your definition of target space is correct, so let's move on.
Now, I make the following assumptions (more or less derived from a quick Internet search: a) There are about 200,000 words in English b) The average length of an English word is 5 characters. I also make the easy assumption that a text which has good meaning in English is made of English words. For a 600 character text, we can therefore assume an average number of words of 120 (600/5).
This is a whole bunch of assumptions. Plus it looks like the bad concept of "good meaning" is playing quite a role here in a crucial computation. Most reasonable readers would have stopped by now, but I force myself to continue.
IOWs, 2113 bits. What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200000 English words. Or at least, a good approximation of that number. It’s a big number.
Astonishingly, I find something to agree with: "What is this number?... It's a big number." :) Feeling generous, I think I can also agree that "we can derive from a pool of 200000 English words". But it will soon be clear that I don't agree with you on what we derive from the pool of English words and for what purpose.
And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.
Again, on a question deemed important by yourself you are boldly saying you have no idea, but you think anyone will agree with you and that in the end you have proved something. Amazing how things are always on your side even when you have no idea about them.
It’s easy: the ratio between target space and search space: 2^2113 / 2^ 2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here)
An easy thing that required a correction. Noted. In conclusion:
Let’s go back to my initial statement: Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here...As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.
So, in conclusion after all the heavy computation you still just assume to infer design. You made a bunch of assumptions all along, so what's one more, right? But why the computation then? I know, you were supposed to show something. And you did - you showed off. It's been quite a show. Thanks. Unfortuntately none of this proves anything. You didn't compute your brand of FIASCO. You assumed it. You assumed it right off the bat with "good meaning in English". In conclusion after all the computation you declare that you are certain that this is not a false positive, while all you did was make assumptions every step of the way. You say, "I am aware of no simple algorithm which can generate english sonnets from single characters." Here you talk about single characters, while the basis of your computation was "a pool of 200000 English words". Now, I am not a mathematician, but I am a linguist and I notice a glaring difference like this. Characters are not words, and I am sure they make all the difference in computation. Well, not in your case, because you were not really computing anyway. As a final note, let's recall you said, "I am interested in what is true, and I have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular." Actually, you clearly showed that you are unable to define anything correctly: - You brought in undefined "good meaning", therefore missing an opportunity to define something crucial in your attempt at computation. - In the end, you mixed up "words" and "characters". - Every step of your demonstration - including the conclusion - involved assumptions. - In OP you were computing something called dFSCI for a Shakespeare sonnet, not showing whether CSI was circular or not. - Therefore you didn't show, clearly or otherwise, the non-circularity of CSI. - Therefore it's obvious that you don't know what "clearly shown" means. - You most likely are not interested in what's true. Have a lovely rest of the weekend. E.Seigner
Gary S. Gaulin: I don't know if I have misinterpreted what you were saying: If you were saying that you are not so sure that strong AI theory claims that, then my answer in post 523 is appropriate. If you were only claiming that you are not so sure that consciousness cannot be produced algorithmically, then I apologize: you are certainly entitled to your opinion on that, and cautious attitude is always fine in science. As for me, my opinion about this specific problem is not cautious at all: it is very strong. And I absolutely agree with fifthmonarchyman on the points he has made. gpuccio
Gary S. Gaulin: ""Strong AI claims that human consciousness can be produced algorithmically." I’m not so sure. Too early to know either way." But, for the purposes of this discussion, I have defined "strong AI theory" as the theory which claims that consciousness can be produced algorithmically. I agree that the term can be used in a different sense, and that's why I have specified the meaning I meant. gpuccio
Me_Think: "Every process has an algorithm. If you disprove an algorithm , all it means is there is a better algorithm which you don’t know. It doesn’t mean the process doesn’t exist, and what do you mean by ‘Strong’ algorithm ?" Let's say that some processes can only be described by using a Turing Oracle. The idea is that consciousness can act as a Turing Oracle in cognitive algorithms, but that the Oracle itself is not an event which can be explained algorithmically and is not computable. gpuccio
fifthmonarchyman at #509: Exactly! gpuccio
Gary S. Gaulin: I absolutely agree that AI theories and models are important, both for ID and in general. I would say that AI theories have a lot to say about the "easy problem" of consciousness (according to Chalmers). But they can say nothing, and have said nothing, about the "hard problem": what consciousness is, why it exists, why subjective experiences take place in us. That's why I have specified: "let’s call it strong AI theory (according to how Penrose uses the term). A theory which very simply assumes that consciousness, one of the main aspects of our reality, can be explained by complex configurations of matter." My reasoning applies only to this definition of "strong AI theory", and not to AI theory in general. gpuccio
514 & 515 gpuccio That was funny! Thank you. :) Dionisio
Me_Think: "So if CSI is circular (as per ID proponent himself in another thread), does it mean dFSCI / FSCI/O too are circular ?" None is circular. And I don't agree that an "ID proponent himself in another thread" has said anything like that (although he has probably expressed things badly). I have not even read that thread (no time), but frankly I am not interested in threads about the opinions of a person, or about how he says things. I am interested in what is true, and I have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular. And again, CSI and dFSCI are only two different subsets of the same thing. gpuccio
Me_Think: Penrose is playing a difficult game: defending a right argument and trying just the same an explanation which does not depend on the simple recognition that consciousness cannot be explained by some configuration of matter. IOWs the consequences of his Godel argument are deeper that he himself thinks, or is ready to admit. That reminds me of some more "open" scientists (see Shapiro) who are ready to criticize aspects of neo darwinism, but are not "ready" to accept ID as a possible alternative, and recur to abstruse theories which are even worse than neo darwinism. gpuccio
fifthmonarchyman: "Once you understand that strong AI is a fools errand Darwinian evolution is shown to be impossible by definition. It’s pretty much that simple." And I wholeheartedly agree! :) gpuccio
Dionisio: "scientific? What empirical evidences is it based on? Sci-Fi literature?" OK, again I admit my error! :) gpuccio
Dionisio: "Did you mean “…strong AI theory…” ?" Ehm... yes! Thank you for correcting me. :) I suppose someone will say that was a freudian slip! :) gpuccio
Gary Gaulin says I’m not so sure. Too early to know either way. I say, What evidence are you waiting for? What would possibly convince you of the futility of the strong AI endeavor? The paper I just linked to provides mathematical proof that strong AI is impossible would that sort of thing help you to make a decision? Just curious peace fifthmonarchyman
Me think says Every process has an algorithm. If you disprove an algorithm , all it means is there is a better algorithm which you don’t know. I say what evidence do you have for this? What possible evidence could you ever have for such a claim? This statement is simply metaphysics and very poor long discredited metaphysics at that. lot's of things are are demonstrably not the result of algorithms. Transcendental numbers and consciousness for example check this out http://arxiv.org/abs/1405.0126 Me think says, what do you mean by ‘Strong’ algorithm ? I say, I don't think I ever used that term. Strong AI maybe but not strong algorithm. peace fifthmonarchyman
Strong AI claims that human consciousness can be produced algorithmically.
I'm not so sure. Too early to know either way. Gary S. Gaulin
fifthmonarchyman @ 509 Every process has an algorithm. If you disprove an algorithm , all it means is there is a better algorithm which you don't know. It doesn't mean the process doesn't exist, and what do you mean by 'Strong' algorithm ? Me_Think
Me think asks How is a strong AI related to unguided evolution? I say, Lets start with this. The processes producing AI and unguided evolution are each algorithmic. Darwinism claims that an algorithm(RM/NS+whatever)can explain everything related to biology including human consciousness. Strong AI claims that human consciousness can be produced algorithmically. The two ideas are functionally equivalent. Disprove one and the other fails necessarily. Suppose you were to acknowledge that there are things like consciousness that algorithms like (RM/NS+ whatever) are not equipped to produce. I would say welcome to the ID camp that is what we've been saying all along ;-) peace fifthmonarchyman
“…strong AI theory is the only scientific theory which is worse that neo darwinism.” -gpuccio
Technically speaking an "AI" model or theory only has to mimic the real thing. Being Artificial is OK for AI, but ID theory needs cognitive science that directly applies to biology. How the real thing works is for areas of cognitive science such as neuroscience, where artificial is not allowed. An example of how AI still works in the favor of models is my Grid Cell Network model that is at least a useful part of AI. It may in time help explain how the real thing works, but it's not yet possible to know how close it actually is towards explaining how we navigate with such a grid. The ID theory would need the AI grid model to stand the test of time and prove it works to sum up the real thing, but even where it does not it's still useful to AI. It would be possible for me to say that the model is a part of Strong-AI but that's still AI. Only way past that is for science to go its way, in which case it like graduates to become a part of a cognitive model for the very basics of neuroscience but for now it's too early to know either way. AI can be useful. I myself try to help with new ideas but AI can also be as misleading as putting artificial flowers under a microscope. We have to separate out what also applies to real brains, and the behavior of cells and their billions of year old living genomes. David Heiserman found a useful model that is still doing well in the test of time called Evolutionary Adaptive Machine Intelligence (EAMI) but it needed to get past "Evo" into "Devo" as in "Evo-Devo" by explaining what causes what in a multilevel process with intelligent cause in it to explain. As a result only what the theory of intelligent design is premised for works to further develop David Heiserman's EAMI model. All this makes "Evo-Devo" a buzz-word from when Darwinian theory needed to connect to what "develops" but talking about natural selection is really not helpful for explaining the details we really need to know that connects it all together into a trinity with chromosomal Adam and Eve having human need that has them running for clothing/fashion after noticing their nakedness and all else paralleling Genesis that totally muddles the Darwinian realm that ruled all this out as being scientifically possible. Without experience modeling neural networks and other things that only model part of a system that self-learns (intelligent) it's hard to know what in AI and machine intelligence works for ID theory. When it does work it's more than AI or strong-AI even EAMI it's good enough for ID that scientifically empowers UD. Gpuccio could be correct, even though "neo darwinism" seems hard to beat for worse of the two. Gary S. Gaulin
fifthmonarchyman @ 504
That my friend is exactly the heart of the argument.Once you understand that strong AI is a fools errand Darwinian evolution is shown to be impossible by definition.
How is a strong AI related to unguided evolution? Me_Think
So if CSI is circular (as per ID proponent himself in another thread), does it mean dFSCI / FSCI/O too are circular ? Me_Think
gpuccio @ 498 Godel Incompleteness theorem has been misunderstood by Penrose -'non-algorithmic' is not equivalent to 'non-computable'. Penrose assumes cytoskeletal microtubules are likely candidates for quantum coherence, which is bordering on absurd, and he comes dangerously close to String theory nuts when he brings in quantum gravity into the mix ! Me_Think
gpuccio said If consciousness is not only an aside [o]f objective computations, and if the subjective reaction to conscious representations is an integral part of cognition (which is exactly what I believe), then a designer can do things that no algorithm, however complex, will ever be able to do: IOWs, generating new specifications, new functional definitions, and building original specified complexity linked to them. I say, That my friend is exactly the heart of the argument. Once you understand that strong AI is a fools errand Darwinian evolution is shown to be impossible by definition. It's pretty much that simple. Peace fifthmonarchyman
"...strong AI theory is the only scientific theory which is worse that neo darwinism." -gpuccio
scientific? What empirical evidences is it based on? Sci-Fi literature? :) Dionisio
#498 gpuccio "...strong ID theory is the only scientific theory..." Did you mean "...strong AI theory..." ? :) Dionisio
wd400: I don't have a citation. I was looking at all of that about three or four years ago. But a google search might turn something up. I just did one and here's a good starting point: http://www.pnas.org/content/102/27/9541.full.pdf
Differentially with regard their genotype.
Sometimes this is true, as in the case of positive selection. But, still, it doesn't help directly to overcome the stochastics involved. But your are right to a degree. (I'm not trying to be condescending; it's just that, contrary to evolutionary biologists, I see limitations first, and cases where it applies second) I guess its 2Nes, as you calculated it; but, I do appreciate you dealing with the point I was making, and not necessarily the maths. You say:
And to take claim that selection can’t aid creating gaps between protein lineages (no crocoducks please …) you have to show that fitness landscapes include no such paths.
I can't help but see huge gaps. I wonder what motivates you to think that the fitness landscape isn't more "rugged"? Maybe you can elaborate. PaV
Again the algorithm can have access to anything it wants in the entire universe it just can't borrow information from the original string. For all the programer knows the string of numbers could represent a protein string or the temperature fluctuation in a heat source. The algorithm's job is to reproduce the string sufficiently enough to fool an observer with out borrowing information from the original string. Those are the only rules peace I feel the frustration rising again Break time fifthmonarchyman
Pav, Did you know that Fisher’s equation works quite well in the area of thermodynamics, as well? No, do have a reference for this? I know Fisher, being the sort of modest bloke he was, compared it to the 2nd law. My point in all of this was, of course, that NS simply kills organisms off, nothing more. Differentially with regard their genotype. It’s how evolutionary biologists choose to look at it. But what has “selection” really done? It simply destroys those bacteria which can’t metabolize. That leaves the others. Why? Because they can metabolize. This sounds suspiciously like the "vacuity of fitness" thread Barry recently embarrased himself with... NS doesn’t help the bacteria to “modify” the enzyme in any way. This has to be done strictly through stochastic means. NS helps “fix” an allele, if you will, so that whereas the time to fixation is 4Ne generations for “drift,” it is only 2Ne for “selection”. So NS speeds up “adaptation,” but it cannot help span tremendous differences in a.a. sequences. Mutability alone can do that. (I’m saying nothing here about Shapiro’s NGE.) Well, the speed of fixation under selection depends on the selection coefficient and effective population size, with s= 0.05 and Ne = 10,000 it's about 400 generations, which is a bit faster than 2Ne. And to take claim that selection can't aid creating gaps between protein lineages (no crocoducks please ...) you have to show that fitness landscapes include no such paths. wd400
fifthmonarchyman and Zachriel: Just a short intrusion. The real problem is: we have lived for decades with a theory, let's call it strong AI theory (according to how Penrose uses the term). A theory which very simply assumes that consciousness, one of the main aspects of our reality, can be explained by complex configurations of matter. How many times have we heard that it is only an "emergent property", whatever that means. Now, I am really convinced that strong ID theory is the only scientific theory which is worse that neo darwinism. But that is not what I want to discuss here. What I want to discuss here is that the assumption, or the refutation, that consciousness is only an aspect of computation has deep entailment for the other important issue: ID theory. Any approach to reality based on strong AI theory, indeed, must face the very simple consequence that all that happens in our consciousness, and that includes cognition, feeling and will, can only be the result of computation more or less mixed to random events which, unless we consider the quantum level, are anyway deterministic too. From that, two important assumptions derive: a) Human cognition must be nothing else than a computational process, and therefore must be completely algorithmic. b) The traditional intuition of libertarian free will is only a delusion. Now, let's avoid b), or my third favourite interlocutor, Mark, will soon be here too! :) So, let's focus on a). If Penrose and others are right, and human cognition cannot be explained algorithmically, that is bad news for strong AI theory. And what are the consequences for ID? Very simple. If consciousness is not only an aside f objective computations, and if the subjective reaction to conscious representations is an integral part of cognition (which is exactly what I believe), then a designer can do things that no algorithm, however complex, will ever be able to do: IOWs, generating new specifications, new functional definitions, and building original specified complexity linked to them. gpuccio
fifthmonarchyman: Agreed!!!!! So the algorithm would also have to have access to a dictionary, rules of grammar, meter, the history of England, and so on. Zachriel
In order for a Shakespeare emulator to infallibly fool an observer with out cheating it would need to live a lifetime is Shakespeare's shoes thinking his thoughts and fighting his demons. It is impossible for a algorithm even a very sophisticated one to ever accomplish that. peace fifthmonarchyman
Zac said, So did Shakespeare. Take away his knowledge of grammar and meter, and he couldn’t write poetry either. I say Agreed!!!!! Shakespeare's knowledge came from a life time of observation and contemplation and therefore could never be produced algorithmically. that is the point Peace fifthmonarchyman
firstmonarchyman: First it needs to “Know” which grammar or meter is being specified for. So did Shakespeare. Take away his knowledge of grammar and meter, and he couldn't write poetry either. Zachriel
fifthmonarchyman: What I said to PaV is absolutely valid for you too, and your interesting debate with Zachriel. I cannot follow everything in detail, but I like your approach and I hope that I can learn more from you "as time goes by". Zachriel is one of my "favourite" too. I am really proud that some of the best discussants from the other side are here on this thread, and that such interesting parallel debates are taking place. I am very serious in saying this. gpuccio
wd400:
I don’t know how the fundamental of thereom of NS could mention “NS”, but it certainly includes is. As (in Fishers’s version) there are two alleles that have different fitnesses. If you are generally interested in the fundamental theoreom you should read about the Price Equation, which is a more general and useful version of the same.
Did you know that Fisher's equation works quite well in the area of thermodynamics, as well? But it is precisely this, if you will, "portability" of the equation that makes you wonder how well it fits biological reality. My point in all of this was, of course, that NS simply kills organisms off, nothing more. Which now leads us to to your example:
As the two-copy lineage comes to dominant the population there are many many more opportunities for mutations that modifiy the orignal enzyme to better metabolism the new sugar to arise, so adaptation will occur much more quickly. Selection has indeed helped this bacterial population deal with this sugar.
That's how evolutionary biologists choose to look at it. But what has "selection" really done? It simply destroys those bacteria which can't metabolize. That leaves the others. Why? Because they can metabolize. NS doesn't help the bacteria to "modify" the enzyme in any way. This has to be done strictly through stochastic means. NS helps "fix" an allele, if you will, so that whereas the time to fixation is 4Ne generations for "drift," it is only 2Ne for "selection". So NS speeds up "adaptation," but it cannot help span tremendous differences in a.a. sequences. Mutability alone can do that. (I'm saying nothing here about Shapiro's NGE.) PaV
PaV: I could not follow well your debate with DNA_Jock, because of the same time restraints which you mention! However, I am sure that it is interesting and stimulating. I have spent the afternoon making computations, but I appreciate your contributions anyway. By the way, for what it can mean, I think that DNA_Jock is one of the best interlocutors "from the other side"! :) gpuccio
REC and DNA_Jock: Just as a followup, I have spent this afternoon in an attempt to analyze an alignment of the 23949 ATP synthase sequences in the search submitted by REC. I obtained the alignment from the site referenced by REC. I am not really an expert in bioinformatics, as you know, so I had some problems about how to analyze the data and in the end I imported them, in some way, in Excel. Now, again this is no strict scientific procedure, just what I could manage to do. However, I was very curious to see what emerged. I analyzed only the columns where more than 80% of the sequences were represented. IOWs, I omitted the positions where a great number of sequences was not aligned, or presented gaps. I though that could be a reasonable choice. So, I could analyze 342 AA positions (out of about 500). The mean conservation about those sequences was 73%. IOWs there was a mean of 73% of the total sequences where the same aminoacid was in that position. I must say that I have computed the percentage excluding the gaps, that however, for what I have said before, were less than 20%% in all the positions that I analyzed. I have applied the Durston computation as described in this paper: https://intelligentdesignscience.files.wordpress.com/2012/07/a-functional-entropy-model-for-biological-sequences.pdf My result for the 342 positions was a functional complexity of 1136 bits. This is much higher than what I had grossly estimated with my "shortcut" (absolutely conserved AAs in three distant sequences). Indeed, by that gross method I had estimated about 1600 bits for the alpha + beta chain, but the alpha chain, where only 176 identities were observed, was responsible for a functional complexity of "only" 761 bits. So, the Durston method, applied to "only" 342 positions in the molecule, yields a functional complexity which is 375 bits higher than the one I had estimated with my simple shortcut. Which is exactly what I expected. Now, again I apologize for all the possible imprecisions and errors in this analysis: again, I am not a professional at this, but I am ready to listen to any suggestion or correction. I doubt, however, that things will change much. My simple point is: however measured, the conservation and functional specification of the alpha chain of ATP synthase is extremely high. So, I definitely maintain all my reasonings about the molecule. (We will discuss the problem of the "different" molecule in the discussion about methodology, if it will ever take place). gpuccio
DNA_Jock:
Soooo, he doesn’t think that I was critiquing Dembski, he doesn’t offer any challenge to the accuracy of what I said; rather it’s the failure to use Dembski-approved phrasing that was his reason for his conclusion. But why on earth must I use Dembski-approving phraseology when discussing proteins with gpuccio? Maybe I think Dembski’s terminology is sub-optimal for my conversation with gpuccio (which I do…) So I am sad to say that the logic fail remains.
You've not understood what I wrote, nor why I wrote the what I did. You are much better prepared to discuss Dembski's methodology than most of those who challenge it. So, when I wrote that you had not read Dembski, it was more of a statement of fact, than a put-down. I think you reacted to this, however. It now makes sense that you've read Dembski's paper on "Specification," and NOT his NFL book, since the presentations, while containing a lot of the same elements, are made quite differently. I know that you're going round, and round, with gpuccio on "post-specification." At my point of entry, there were just too many posts to have read to catch up, time being of the essence. It's hard to go back in time to what exactly I was thinking, but the conclusion I made---which, it turns out, was correct---that you hadn't read Dembski, more specifically, NFL, was the fact that in the method he offers, he explicitly wants to avoid such a problem via the notion of "tractability." This, in my mind, renders the point of attack you were making, rather moot. What I was actually doing, was trying to get you on track with the real crux of Dembski's method. Where difficulties arise is when one goes about constructing a "rejection region." So, e.g., in the Caputo case, we know what happened, and can use an appropriate probability calcualtion. However, in NFL, Dembski presumes a uniform probability distribution when it comes to biological activity. This has been contested. I don't agree, because I think that the probability space is so vast that what we see in terms of animal life certainly approximates a uniform probability distribution. But this is where Dembski's method can go wrong. When you continues to focus on "post-specification," I knew that you weren't as familiar with his writings as I HOPED. But, again, that you waded through his "Specification" paper is a credit to you. Some, it would appear, don't even do that much. My comments were meant as much for you as they were for gpuccio. PaV
Zac says Rhyming and meter don’t make it longer, they just constrain the writer’s choices. I say, A constraint of choice at position X could be expressed numerically perhaps by something like odds verses evens. I suppose you are correct and it would not necessarily make the string longer just more complex. ZAc says, So we convert a sonnet to coded numbers. That means it doesn’t rhyme, it doesn’t have a meter, it doesn’t have meaning. Sorry, have no idea what that is all about. I say, Again it all depends on how high on the Y-axes we need to go. You are hung up on level 7 but you haven't conquered level 1 yet. Once your algorithm has fooled the observer at level 1 we can look at coding some rhyme You say You wouldn’t necessarily need to the original sonnet, but you do need knowledge of rhyme and meter, grammar and phrasing. I say, Like I said you are perfectly welcome to use any rhyme and meter, grammar and phrasing you want to, Be it Elisabethen English or Klingon as long as you don't steal it from the original string. You say, It’s easy for an algorithm to create phrases with grammar and poetic meter, even alliteration. I say. First it needs to "Know" which grammar or meter is being specified for. And it can't get that information from the original string You say, let us know then will do fifthmonarchyman
fifthmonarchyman: I’m sure we can code rhyme numerically but that would only make the string longer Rhyming and meter don't make it longer, they just constrain the writer's choices. fifthmonarchyman: Once again to isolate the string from it’s context. The language of Shakespeare is part of the background information we are trying to eliminate. So we convert a sonnet to coded numbers. That means it doesn't rhyme, it doesn't have a meter, it doesn't have meaning. Sorry, have no idea what that is all about. fifthmonarchyman: The Algorithm is free to use that background information if it feels it needs to but it must not draw that conclusion from the original string. You wouldn't necessarily need to the original sonnet, but you do need knowledge of rhyme and meter, grammar and phrasing. fifthmonarchyman: First the algorithm needs to produce enough structure and grammar to fool an observer. Only then will an observer need to move up on the axes. It's easy for an algorithm to create phrases with grammar and poetic meter, even alliteration. fifthmonarchyman: Been doing it for a couple of week’s now. Well, let us know then. Zachriel
Zac said Sure, but a rhyme in English may not rhyme in French or computer code. I say, Rhyme is at a higher level on the Y-axes. I'm sure we can code rhyme numerically but that would only make the string longer Zac said. So what is the point of translating it into another language or code? The algorithm presumably needs to work in the same language as Shakespeare in order to create Shakespearean poetry. I say, Once again to isolate the string from it's context. The language of Shakespeare is part of the background information we are trying to eliminate. The Algorithm is free to use that background information if it feels it needs to but it must not draw that conclusion from the original string. Zac says, Not to mention scansion and rhymes. I say, Nope that comes later. First the algorithm needs to produce enough structure and grammar to fool an observer. Only then will an observer need to move up on the axes. You say. So do it. Take a Shakespearean sonnet; subtract Shakespeare’s knowledge of words, rhyme, grammar, scansion, an extensive library of other works by others, phrases heard on the street, the history of England, tales from Italy, his personal relationships; and let us know what is left over. I say Been doing it for a couple of week's now. stay tuned The game is only rudimently encoded on an excel sheet. I'm working on making into a shareable app there are 3 players 1) the designer (in this case Shakespeare) 2) The programer 3) The observer The programer wins if the observer is fooled. Other wise the designer and observer wins Peace fifthmonarchyman
fifthmonarchyman: Coding a sonnet in numbers is no different than translating it into a different language. Sure, but a rhyme in English may not rhyme in French or computer code. fifthmonarchyman: All I’m looking for is a string that is sufficiently “Shakespearean” to fool an observer. That you for the clarification. So what is the point of translating it into another language or code? The algorithm presumably needs to work in the same language as Shakespeare in order to create Shakespearean poetry. fifthmonarchyman: Keep in mind that in the beginning we are looking at specification that is very low on the Y-axis. Just arbitrary structure and grammar. Not to mention scansion and rhymes. fifthmonarchyman: The idea is to subtract the CSI that is introduced from the environment algorithmically from the total CSI in the sonnet. Great! So do it. Take a Shakespearean sonnet; subtract Shakespeare's knowledge of words, rhyme, grammar, scansion, an extensive library of other works by others, phrases heard on the street, the history of England, tales from Italy, his personal relationships; and let us know what is left over. Zachriel
Bob O'H: Absolutely. With those numbers, we can easily adjust all those "target spaces" easily without any real numeric relevance. Moreover, I would like to comment more in detail about this aspect of "target space subsets" in the methodological discussion, if DNA_Jock, or you, are interested (IOWs, if you agree that post-specification is not a logical fallacy). Frankly, I don't want to engage in a methodological discussion with people who believe that we are discussing a fallacy, because no method can be applied to a fallacy. gpuccio
c) Let’s say that the functionary has one brother (that too can be easily ascertained). Of course he also has cousins, relatives, lovers and friends in normal quantities.
What if one of these had won? Would you have inferred fraud too? Bob O'H
DNA_Jock: Now that the thread is calmer, I would like to take again with you the discussion about post-specification, because I think there are a few things still unsaid, and that are worth the while. But I want the discussion to be made in order, so as a first step I need to ask you some explicit commitment, without which the whole discussion would be useless. Just to be clear, I will try to categorize the positions which have apparently emerged about the problem in three different groups: a) Adapa clearly thinks that all post-specifications are wrong, and that any inference based on a post-specification is a logical fallacy. I am grateful to him for the clarity of his position. At the same time, I am absolutely convinced that he is completely wrong, and that he has no idea of what an inference is. However, he should not be interested in the following discussion, because his position makes it completely irrelevant. b) You have said that post-specifications are suspicious, and have (correctly, IMO) invoked special caution when using them. IMO, your position is not as clear as Adapa's and mine, and that's why I am requesting a clarification. c) I have clearly declared that I believe that post-specifications are perfectly valid, and can be used for perfectly legitimate inferences, provided that they are used with the correct caution and methodology (which we can well discuss in detail). Now, while Adapa's position is clear cut, yours is not. I must ask you if it is the same position as mine (provided we can agree on the cautions and methodology) or if it is just a strategic way to support Adapa's position. If your answer is the second one, I will respect your choice, but any further discussion on this issue is useless. We just strongly disagree on the very basics. To makle things even more clear, I will further refine and detail my example of the lottery, and ask you for a final pronouncement. So, a brief summary: a) The brother of the functionary who is in charge of controlling the regularity of the extraction wins a lottery which has sold 10^150 tickets. b) Let's say that the brother has bought only one ticket. This can be easily ascertained by the judge during his inquiry, for example asking for the receipt of the tickets he bought. c) Let's say that the functionary has one brother (that too can be easily ascertained). Of course he also has cousins, relatives, lovers and friends in normal quantities. d) Now, let's avoid heavy connotations, and make it easier and more Bayesian: it's a civil action. We are not discussing death penalty, or prison. Let's say that the owner of the lottery does not want to pay the prize to the winner. So, the judge has to rule for the winner or for the owner of the lottery. e) Always for clarity, let's say that the functionary could have cheated (he was the only responsibly for the controls), but that there is no direct evidence that he cheated. The only argument of the owner of the lottery is that it is too improbably that his brother had the winning ticket, and therefore he is convinced that the functionary cheated. That's what the judge has to decide: is the owner's request not to pay the prize justified, or should the prize be payed to the winner? The judge can simply rule to invalidate the lottery, and the prize will not be payed. So, nobody goes to prison. There is only the interest of A (the winner) against the interest of B (the owner), and an inference to be done about that. Bayesian, isn't it? Mark, are you happy? :) Now, I believe that Adapa's position is clear: the judge must necessarily rule in favor of A (the winner). Any inference derived from the post-specification that the winner is the functionary's brother, and therefore any inference of a fraud, is completely unwarranted, indeed a mere logical fallacy. My position is clear too: I think that the judge has many reasonable motives to seriously consider the question, because the post-specification here is potentially valid for the inference. For the moment, I will not say that he should necessarily rule in favor of B (the owner), because that would mean to discuss the methodology and the cautions, which we will do later, if your answer allows it. So, what is your answer? Do you rule for Adapa's position of for mine? IOWs, to make it less personal, you should decide between the following two alternatives (which, as far as I can see, are logically mutually exclusive): a) Any inference based on a post-specification is wrong. Always. This is a logical necessity, because using a post-specification for an empirical inference is a logical fallacy. b) Some inferences based on a post-specification are perfectly valid as empirical inferences. Special cautions and accurate methodology are required, because post-specifications are often tricky. But, definitely, an inference based on a post-specification is not necessarily wrong, and is not a logical fallacy. So, please answer (if you like). For obvious reasons, if your answer is a, any further discussion is useless. I will respect your position, and I will agree to disagree. If your answer is b, I think I have some interesting points about the cautions and the methodology. gpuccio
Zac said, Changing it to numbers would serve only to confuse Shakespeare, not the Shakespeare emulator. I say, Coding a sonnet in numbers is no different than translating it into a different language. Sure Shakespeare might be confused but he would be equally confused by his plays translated into any language he was unfamiliar with. Zac said, How and why would you think a Shakespeare emulator would recreate the exact same sequence? Even Shakespeare may not recreate the exact same sequence. A Shakespeare emulator might be enticed to create novel sonnets, though. I say, Here is why I get frustrated with you. I don't know if you are being deliberately obtuse or if I failed to explain my self correctly. No one is asking to recreate the same sequence. In fact an exact recreation would be strong evidence of cheating. All I'm looking for is a string that is sufficiently "Shakespearean" to fool an observer. Keep in mind that in the beginning we are looking at specification that is very low on the Y-axis. Just arbitrary structure and grammar. At this level I'd bet you could fool an observer with a monologue from a world wrestling federation star if it was sufficiently long enough. You say, Nor do we see how you have calculated the difference in information. I say, The idea is to subtract the CSI that is introduced from the environment algorithmically from the total CSI in the sonnet. What we are left with is Original CSI However this is all very early days most critics are not even ready to concede that it is impossible to create CSI with an algorithm. First things first Zac said If Shakespeare doesn’t create the exact same sequence, does that mean he has no background knowledge? I say, I honestly have no Idea what that question means but I assume it has something to do with your misunderstanding of the goals that the algorithm is being asked to achieve. If after the clarification I gave was not enough could you please rephrase peace fifthmonarchyman
Pav,
I know what a deuterostome is since I studied Greek. And, yes, chordates/vertebrates are deuterostomes. But the point that was being made wanted to suggest that mammals should be compared to the most primitive of deuterostomes, thus bypassing the Cambrian Explosion
You've studied quote a few things. But by reason to mention deuterostomes is that it's a counter to your apparent belief that "the vertebrate body plan" is a thing unto itself. Instead, parts ofthe body plan are shared by echinoderms, and many parts are shared by lancelets and hagfish which are not veretebrates. You can't talk about vertebrates in isolation without understanding where they fit on the tree of life. As per the big reveal...
Here is the basis: actuarial tables, one for life span, and one for death rate. You’ll notice that there is NO mention of NS. Why? Because that’s all NS does: it changes life spans. hence total progeny, and it causes death.
This is what you've been waiting to reveal? Some equations used with actuarial tables form part of Fisher's derrivation, but the whole thing is hardly "based on actuarial tables" (and, in fact the two tables are the "life table" and another for probability of reproduction). I don't know how the fundamental of thereom of NS could mention "NS", but it certainly includes is. As (in Fishers's version) there are two alleles that have different fitnesses. If you are generally interested in the fundamental theoreom you should read about the Price Equation, which is a more general and useful version of the same.
Here is another illustration...
Here's another, nother illustration. Imagine your bacterial population, struggling to get by without its sugar source. But instead of being utterly unable to metabolise this sugar, it has an enzyme that can do a bad job at it. Bacterial populations being large, for any given locus there will be a few individuals with a gene duplication. Those individuals with two poorly-functioning enzymes can make twice as much of the crappy enzyme and, relative to their peers, make a killing. As the two-copy lineage comes to dominant the population there are many many more opportunities for mutations that modifiy the orignal enzyme to better metabolism the new sugar to arise, so adaptation will occur much more quickly. Selection has indeed helped this bacterial population deal with this sugar. This kind of process, which has been observed many times, is just one example of somehing you don't seem to have grasped:the cumulative nature of evolution by natural selection is important. Life can find regions of high-fitness because each individuals starts with the benefits of many millions of years of selection. wd400
PaV @ 474 I will try to explain: Reviewing the tape: Gpuccio and DNAJ are discussing the challenges of post-hoc specification (#148 - #261). PaV and Collin chime in: Collin (263) with a direct question for DNAJ, PaV (262) with a post addressed to gp, which characterizes DJ’s position and asks him a question. PaV makes some allusions to Dembski’s argument, and draws an analogy to SETI researchers recognizing a “pattern”. 269 – 274: Collin and DJ have a light-hearted exchange 270 DJ explains that PaV is overstating DJ’s position, and repeats his point (from #161) about gpuccio’s Texas SharpShooter problem. This conversation is very specific (heh) to gpuccio’s efforts to specify “ATP synthase”. #279, 287, 292 PaV and DJ continue to discuss labels for proteins. Nice things are said. There follows a lull in the DJ-PaV conversation (which PaV had kindly forewarned at 287), during which gpuccio and DJ discuss Hayashi 2006 and its implications for the shape of the protein landscape; and kairosfocus #312 regurgitates some ancient guff about Weasel and gets slapped around by those posters who are numerate. 369-375 PaV re-appears, and engages other posters, then at 376 quotes a statement DJ made at 270, their first interaction
DNA_Jock:
…you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.
From this statement, I would conclude you haven’t read Dembski’s NFL book. What is needed are two things: (1) recognition of the pattern, and (2) knowledge of the mechanism by which the pattern is formed—IOW, you have to be able to calculate the probability of the “pattern” happening by ‘chance’ given the mechanism utilized in developing the “pattern.” [Emphasis added]
@ #401 I point out what appears to me to be a logic fail: you cannot use the quoted statement to conclude that I have not read NFL. ( at #395 I also make fun of PaV’s statement that mutation rates were uniform; a little worrying from a biology graduate. PaV says this argument was put forward in jest. Okay.) This is how I point out the fallacy:
Here’s the fun thing about logic, PaV. You can arrive at a factually correct conclusion via faulty logic. There is no contradiction between what I said, and your paraphrase of Dembski’s point.
Now, there are a couple of situations in which PaV’s inference might be appropriate. If there was something in NFL that made these statements of mine untenable – note, it would have to make them indisputably untenable -- then PaV’s if-then logic would hold. Or if PaV believed that I was criticizing specifically Dembski’s work here, and that I had clearly, unambiguously missed Dembski’s point, thereby rendering my statement moot, that could rescue the logical inference. I offered PaV each of these escape routes, but he declined them, rather indignantly. His defense was “It was quite evident that you were unfamiliar with his writings or you would have phrased things differently.” Soooo, he doesn’t think that I was critiquing Dembski, he doesn’t offer any challenge to the accuracy of what I said; rather it’s the failure to use Dembski-approved phrasing that was his reason for his conclusion. But why on earth must I use Dembski-approving phraseology when discussing proteins with gpuccio? Maybe I think Dembski’s terminology is sub-optimal for my conversation with gpuccio (which I do…) So I am sad to say that the logic fail remains. I am genuinely disappointed that our conversation headed south. In my defense, #376 was pretty condescending and I had been dealing with kairosfocus recently, so my mocking reflex was already on a hair-trigger… DNA_Jock
PaV @ 477 N abd B are adjacent on my keyboard. You appear to have me confused with someone else. :) I'll let your interlocutors on that subject (keith s and wd400) respond to your question, but I do have a question of my own: Are you interpreting Fisher as referring to the actual rate of change of the mean fitness, or the partial rate of change? There’s a follow-up. Beware. :) DNA_Jock
DBA_Jock: I was waiting for you to reply, telling me what was the basis of Fisher's Fundamental Theorem of Natural Selection. You haven't answered. Here is the basis: actuarial tables, one for life span, and one for death rate. You'll notice that there is NO mention of NS. Why? Because that's all NS does: it changes life spans. hence total progeny, and it causes death. I asked this question when you so adamantly said I knew nothing about evolution and how NS works. But you see, NS works through killing individuals. You've heard of Haldane's Dilemna, have you not? Here is another illustration: A bacterial population begins to grow, but it does not have the proper energy source (sugar). The bacteria continues to barely survive and multiply. Eventually one of the bacterial cells has the right kind of mutation, and is now able to metabolize the available energy source, and the bacterial population explodes. Now, is your position that NS "helped" the bacteria arrive at this "solution"? Isn't it quite evident that all that "supposed" NS did was to kill off individuals. Or, phrased differently, the bacterial population limped along, with those not having the proper mutation (metabolism) dying off. This 'dying off' continues until a "sufficient" number of bacteria have been reproduced so that, given its mutation rate, the "proper" mutation is arrived at. Please point out any errors in my analysis. If you can't, then you might want to reconsider your ideas and issue an apology. PaV
Dear Adapa: I have nowhere seen any kind of satisfactory description of evolution. I got my degree years ago. I took Chordate Morphology, certainly the class where all the evolutionary "missing links" should show up. But, of course, they didn't. I was somewhat surprised, but moved on. Only years later, after reading an 1859 edition of Origin of Species did I begin to suspect something was wrong. Why? Because the "intermediates" that Darwin supposed would show up, had not. That alone should have been the death-knell of Darwinism. But it's like a vampire---it needs a stake through the heart or it won't go away. I've stated that I read Mayr's book, What Evolution Is, and found it completely unsatisfactory. There wasn't any explanation. It always ends up, whether Mayr, or someone else, in 'hand-waving.' I hardly comment here at UD for one simple reason: all the arguments that were needed to be made, were made. And Darwinists insist that they are right, despite all the evidence to the contrary. So, it's just a matter of time. Every day, almost, some experiment finds something out that "surprises" the experimenters. Why? Because they think in Darwinian terms. I have a term for this: "Another day, another bad day for Darwinism." It's just a matter of time. PaV
wd400: I know what a deuterostome is since I studied Greek. And, yes, chordates/vertebrates are deuterostomes. But the point that was being made wanted to suggest that mammals should be compared to the most primitive of deuterostomes, thus bypassing the Cambrian Explosion. This is but a debating device, and I'm not going to let this pass. The problem with Darwinism is that it cannot in any way explain how such a great diversity of differing body-plans arose in so short a period of geologic time. It does a disservice to science to ignore this "pink elephant in the room." PaV
DNA_Jock:
For some strange reason, you thought I was criticizing Dembski specifically. Hence the logic fail.
Your arrogance wears thin. You're completely wrong. Your logic is backwards. If I thought you were criticizing Dembski, then why did I open with the comment that "I would conclude you haven’t read Dembski’s NFL book"? Why in the world would I think you're criticizing Dembski when I don't even think you've read him. You weren't even criticizing ID directly, but indirectly via dFCSI. Don't think you're the biggest brain in the building. And even if you were, that doesn't mean you would reach right conclusions. PaV
DNA_Jock: Well, this is a blog, and it happens to rush things sometimes. It would be beautiful if we were all more relaxed, and willing to enjoy a respectful confrontation based on our desire for truth. gpuccio
Ooh-er. I mis-read Dembski's somewhat convoluted prose. I retract the allegation re the Caputo analysis in its entirety. This is what happens when you rush things. My apologies to the good doctor. DNA_Jock
DNA_Jock: I am not familiar with the Caputo case, even if I remember having read of it in Dembski. I have not time now to deal with that, but if you explain your points I will be happy to read what you say. gpuccio
gpuccio: You're welcome. I bet you can find at least a few UD regulars who are as confused over basic logic as you are. Adapa
Adapa: Thank you for your answer. At least we know what you think. gpuccio
gpuccio In brief, just tell: in the example I gave, with the cautions I have specified, wouldn’t you infer a fraud? From just the evidence presented the answer is no, you should not infer fraud. From a mathematical standpoint merely having the winner's brother be involved in the lottery doesn't improve the winner's chance of winning. Unless you can show some actual duplicity - the judge being seen manipulating the results or a recorded conversation of them discussing a plan to cheat - all you have is your personal incredulity. Your logic is atrocious. And if the judge condemns the functionary and his brother for fraud, is he committing the logical fallacy of the Texas Sharpshooter? With the lack of evidence yes, he would. He'd be making the same mistake "he seems guilty to me so he must be guilty" as you do with your "this looks designed to me so it must be designed". You assume your conclusion is correct unless it is disproven. Again that logic is just atrocious. Adapa
*********************************************************** *********************************************************** *********************************************************** Very interesting summary written by gpuccio:
Indeed, what we see in research about cell differentiation and epigenomics is a growing mass of detailed knowledge (and believe me, it is really huge and daily growing) which seems to explain almost nothing. What is really difficult to catch is how all that complexity is controlled. Please note, at this level there is almost no discussion about how the complexity arose: we have really non idea of how it is implemented, and therefore any discussion about its origin is almost impossible. Now, there must be information which controls the flux. It is a fact that cellular differentiation happens, that it happens with very good order and in different ways in different species, different tissues, and so on. That cannot happen without a source of information. And yet, the only information that we understand clearly is then protein sequence information. Even the regulation of protein transcription at the level of promoters and enhancers by the transcription factor network is of astounding complexity. Please, look at this paper: Uncovering Enhancer Functions Using the ?-Globin Locus. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4199490/pdf/pgen.1004668.pdf In particular Fig. 2. And this is only to regulate the synthesis of alpha globin in red cells, a very straightforward differentiation task. So, I see that, say, 15 TFs are implied in regulating the synthesis of one protein, I want to know why, and what controls the 15 TFs, and what information guides that control. My general idea is that, unless we find some completely new model, information that guides a complex process, like differentiation, in a reliable, repetitive way must be written, in some way, somewhere. That’s what I want to know: where that information is written, how it is written, how does it work, and, last but not least, how did it originate? — gpuccio
*********************************************************** *********************************************************** *********************************************************** Dionisio
gpuccio @ 464, The judge may well be indulging in the "Prosecutor's Fallacy". It has happened. Check out the examples on wikipedia. DNA_Jock
To illustrate Bob O'H's point, I give you an awesome, irony-meter-destroying example: Thanks to PaV's prompting @376, I went back to No Free Lunch to refresh my memory about the Caputo case, and I was shocked to see that Dembski states that the rejection region, P(T|H) , is 42 x 2^-41, and he is quite explicit that he is using Fisher's test. "E's occurence and inclusion within T is, on Fisher's approach, enough to warrant dismissing the chance hypothesis H." This for the observed result that Caputo placed his party at the top of the ballot on 41 out of 42 occasions. Is this correct? (Hint: it isn't) The irony here is that he made an error in his specification when he applied Fisher. As Bob put it, he focused on what did happen, and missed what else might have happened. Has anyone pointed out his high school math error here? Bueller? Bueller? [Prediction: people will make an incorrect assumption about what I think the error is, breaking my back-up meter] DNA_Jock
Bob O'H: No. I know that a brother won, so I can well draw the line at brothers. Any judge would be fine with that. Unless you have any reasons to suspect that a great part of living beings are related to the functionary, it is irrelevant to include cousins or others in the computation. So, I agree that you have to be careful and try to understand the search space and its structure: that's exactly the reason why we discuss about the protein functional space. But, with extremely high search spaces, and specifications which have an obvious functional relevance, and which generate a binary partition which makes the target space absolutely unlikely, with all the necessary cautions and methodology, a design inference can be safely made. IOWs, the fact that the specification is a post-specification does not make the reasoning a fallacy. Not at all. It requires, like any other procedure, a correct methodology. In brief, just tell: in the example I gave, with the cautions I have specified, wouldn't you infer a fraud? And if the judge condemns the functionary and his brother for fraud, is he committing the logical fallacy of the Texas Sharpshooter? Clear answers, please. gpuccio
No gpuccio, If you got the impression that I had equated any post-specification to a fallacy, then you were misled by PaV's strawmanning of my position. Bob O'H's description is bang on:
The point is that you need to specify every event that might make think something interesting was going on, otherwise you end up looking like a Texan sharp-shooter, with a specification that is too small because you have only focussed on what happened, not what might else have happened.
DNA_Jock
Bob O’H: Good questions, but, indeed, not so relevant, as I think you know.
Sorry, but I think it is very relevant. Where are you going to draw the line? Close relatives? All relatives? Close friends, friends, acquaintances, people he met at a party? People with the same name? People with similar names? People with the same birthday? People with interesting names? People who have won the lottery before? The point is that you need to specify every event that might make think something interesting was going on, otherwise you end up looking like a Texan sharp-shooter, with a specification that is too small because you have only focussed on what happened, not what might else have happened. Bob O'H
DNA_Jock: I think that in my answer to Bob O'H I have addressed your points too. I agree with you that we must be cautious with post-specifications, and use a correct methodology and attention, like in all scientific reasonings. But my point is that a post-specification is perfectly valid, if those cautions have been correctly applied. Instead, I had the impression that in some posts you equaled any post-specification to a fallacy which will inevitably bring to any arbitrary overfitting. If that is your position, I don't agree. gpuccio
Bob O'H: Good questions, but, indeed, not so relevant, as I think you know. But let's discuss it for completeness. Let's say that the functionary has 100 living strict relatives, and, let's be generous, 900 friends, lovers, whatever. Then the probability becomes 1000 : 10^150, that is 1 : 10^147. IOWs, if the local optimum is larger, but always hugely smaller than the search space, nothing really changes in the inference. Inferring a fraud with a probability of 1:10^150 is not specially different from inferring a fraud with a probability of 1:10^147. We could assume an uniform probability distribution for the people who have bought a ticket by specifying that each person could buy only a ticket (I can anticipate an easy objection: don't worry, it's a multiverse lottery, we have enough people!). But an uniform probability distribution is not really necessary. It's enough to know that the ticket was expensive enough that nobody has bought more than 10 tickets. So, in the worst case, the probability of a random event of that kind becomes 1:10^146. Safe enough to detect the fraud. Of course we must consider all these things, but the simple point is: a very big search space in most cases makes those points irrelevant. However, my point was simply that a post-specification, if reasonable and made with good methodology, is perfectly apt to support a design detection. gpuccio
Gpuccio @ 444 (As I go to paste this response, I see Bob O’H beat me to it, but 445 specifically directed this question to me…Good to see that we make the same points independently. What are the chances? :) )
a) A lottery which sold 10^150 tickets was won by one of the people who acquired a ticket. Post-specification: “one of the people who acquired the ticket”. Probability of the event (as judged by the post-specification): 1. b) A lottery which sold 10^150 tickets was won by the brother of the functionary who presided over the extraction to check its regularity. Post-specification: “the brother of the functionary who presided over the extraction to check its regularity”. Probability of the event (as judged by the post-specification): 1:10^150 (if there is only one brother [smiley] ). Conclusions: I leave them to you (or the judge!). This is an example of design detection (and the detected design is a fraud).
Nice example. Two issues with it. The first issue is not about the specification, but I will note it, just to be thorough. You did not state how many tickets the winner bought. If he bought 10^149 tickets, then the conclusion would be different. This is analogous to the ”equiprobable” assumption, which everyone agrees is incorrect, but IDists assert is “not material”, without ever actually providing numbers to support this assertion. Given the number of engineers here, this is disappointing. The second issue relates to the specification, which was, I believe, your point. Why did you feel the need to mention that there is only one brother? (N.B. my phrasing here is not a rhetorical flourish. gpuccio recognizes that the number of brothers matters.) Let’s suppose that the functionary has one brother and 18 sisters. One of these sisters has been convicted of fraud, another of bank robbery. He has six sons, one of whom is unemployed, one is a lawyer. Someone motivated to see fraud can, post-hoc, write their specification “the unemployed son”, “the grifter sister” in order to minimize the P(the observed result | a fair draw). Particularly problematic if the bank robber or the lawyer bought a LOT of tickets. Hence my admonition to be really, really, really careful with post-hoc specifications. Lotteries (and marketing “competitions”) seek to mitigate this problem by specifying ahead of time an unambiguous definition of those who are NOT allowed to participate. It doesn’t actually work to prevent fraud, but at least they are trying. DNA_Jock
Any Shakespeare emulator would trace back to the programmer who wrote it. And it wouldn't be an algorithm... Joe
fifthmonarchyman: It removes the string from it’s context. But the string was developed from context. Changing it to numbers would serve only to confuse Shakespeare, not the Shakespeare emulator. Here's your original proposal:
fifthmonarchyman: I believe there way to separate original CSI in the sonnet from the CSI that comes from background information. step one… Remove the sequence from it’s context and represent it as a serious of numeric values. step two… see if an algorithm can reproduce the pattern in those values by any means whatsoever sufficiently enough to fool an observer. Of course with the understanding that the algorithm can’t reference the original string.
How and why would you think a Shakespeare emulator would recreate the exact same sequence? Even Shakespeare may not recreate the exact same sequence. A Shakespeare emulator might be enticed to create novel sonnets, though. Nor do we see how you have calculated the difference in information. If Shakespeare doesn't create the exact same sequence, does that mean he has no background knowledge? Zachriel
b) A lottery which sold 10^150 tickets was won by the brother of the functionary who presided over the extraction to check its regularity. Post-specification: “the brother of the functionary who presided over the extraction to check its regularity”. Probability of the event (as judged by the post-specification): 1:10^150 (if there is only one brother :) ).
What if the functionary who presided over the extraction had a lot of friends an relatives who also bought tickets? Would the specification change? (also, what if the one brother bought 10^149 tickets? :-)) Bob O'H
F/N: Let's remind ourselves of Plato's longstanding warning: ______________ >> Ath. . . .[The avant garde philosophers and poets, c. 360 BC] say that fire and water, and earth and air [i.e the classical "material" elements of the cosmos], all exist by nature and chance, and none of them by art . . . [such that] all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only [ --> that is, evolutionary materialism is ancient and would trace all things to blind chance and mechanical necessity] . . . . [Thus, they hold] that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [ --> Relativism, too, is not new; complete with its radical amorality rooted in a worldview that has no foundational IS that can ground OUGHT.] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. They are told by them that the highest right is might [ --> Evolutionary materialism -- having no IS that can properly ground OUGHT -- leads to the promotion of amorality on which the only basis for "OUGHT" is seen to be might (and manipulation: might in "spin")], and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions [ --> Evolutionary materialism-motivated amorality "naturally" leads to continual contentions and power struggles influenced by that amorality], these philosophers inviting them to lead a true life according to nature, that is,to live in real dominion over others [ --> such amoral factions, if they gain power, "naturally" tend towards ruthless abuse], and not in legal subjection to them. >> ______________ Oh, dat Bible-Thumpin, Creationist Theocrat! (Not.) Seems, sadly apt 2350 years later, on persistent attempts to sidetrack via turnspeech and personalities in the teeth of evidence including relevant history. KF kairosfocus
KS: Nope. The FSCO/I quantitative metric model -- derived algebraically & conceptually from Dembski's 2005 metric by log reduction, recognition of a reasonable threshold and provision of a means of recognising observed functional specificity of organisation -- is in the form: Chi_500 = I*S - 500, functionally specific bits beyond the sol system threshold S is a dummy variable reflecting warrant for functional specificity, and the 500 bit threshold reflects the sol system blind search of config space limit. I is an info metric that is based on the various empirical info measurement techniques out there. Those techniques do not necessarily rely on a priori estimates of probabilities on the hyp of any and all possible states of the world that may affect probability distributions. After all, it is a commonplace to inspect the physical circumstances and se if there is reason to infer bias, or whether there is no reason to prefer any one particular outcome. E.g. with a coin or die, the physical arrangements are such that there is high contingency and there are defined outcome states. The objects are symmetrical and do not bear obvious signs of manipulation leading to the usual conclusion that a coin can store 1 bit, and a 6-sided die, 2.585 bits of info. Chains of same, of length m and n would be able to hold m * 1 and n * 2.585 bits of info. D/RNA has four states and no basic constraint on chaining, so it will be able to store 2 bits per base. Such has actually been used in coding, to express an ownership by Venter IIRC. Likewise, statistical studies are a commonplace way to explore patterns of informational systems, e.g. the frequency distribution of letters in typical English text. This points to the further phenomena of real world coding systems, that there tend to be redundancies etc. In the case of proteins, it can be seen that some AAs are more flexible than others in a chain; which makes sense on the point that some may be part of an active site cleft, but others may just be part of the folding and within reason another hydrophobic or hydrophilic AA might do. In any case, on statistical studies, we may infer empirically warranted frequencies, and thus coding or functionality constrained variations from the physically possible distribution of states. That statistically estimates probabilities per the functional state of say a protein. But then, that is no news to anyone who has had a modicum of statistical exposure in school math and has plotted a bell, reverse j or the like distribution. The Shannon H metric applies and is in the form of an average info per symbol metric linked to a weighted sum probability calculation: H = - SUM pi log pi. The same familiar from statistical thermodynamics and which can there be interpreted on average missing info to specify microstate on knowing the relevant macrostate variable values. So, from the info end, and from physical and statistical studies we may deduce probabilities etc. All of that is commonplace, well known, and uncontroversial, so I am amazed to see such a scorched earth fight against that. The point is, take the algebraic analysis and move the expressions of interest into info form, seeing that we are dealing with an info beyond a threshold metric. Then, on context of empirical situation, come up with reasonable values for threshold and reasonable ways to measure info that elates to functionally specific cases. The least familiar aspect is use of a dummy variable to define a state of the world that affects the case, but that is a commonplace in economic modelling. In this case, default is 0, and it moves to 1 on evidence that the configurations in view are functionally specific. Which is in relevant cases not too hard to spot, e.g. fairly small perturbations destroy function. Assembly on a wiring diagram that is linked to interactions to achieve function is an excellent case in point. Believe you me, you do not want to inject random variations in the wiring of an electronic circuit. Poof, you let the smoke out. The 6500 C3 reel I have been using in recent days is not notably tolerant of perturbation of components or misalignments or improper orientation. English text can tolerate some typos and grammatical variants, but real soon thinhz fshh srpgpd. [things fall apart] Computing codes, especially object code aligned to the architecture of a system, are notoriously intolerant of bugs. Indeed, IIRC NASA once lost a rocket because of a misplaced comma in some code. So, this is not strange, suspect stuff, it is well known. And -- per fair comment -- would be uncontroversial, apart from ideologisation of origins science and associated rather selective hyperskepticism. Please reconsider. KF kairosfocus
Guys: What a catch up! I will have to stop sleeping. :) Luckily, most comments were not addressed to me, so I could skip many of them. Thank you to all, friends and not, for the comments. Please. go on! :) I would humbly sponsor my brief post #444, and encourage comments on it. gpuccio
Gary S. Gaulin: I was not aware of your work. It seems interesting, but I certainly need time to study it. Thank you for sharing! gpuccio
Me_Think at #439: "The Probability of “Keith” (x less than or equal to 5) = 0.1259" So, we were just a little bit unlucky! :) gpuccio
Vishnu at #433: "How many iterations of an algorithm does it take to find (with proper fitness functions, of course)… METHINKS KEITH IS AN IDIOT" You can just state it as a Turing Oracle! :) gpuccio
fifthmonarchyman at #431: Is that some form of design detection? :) gpuccio
PaV at #423: "I hope for gpuccio’s sake we can get back on topic. We/he should be discussing dFCSI" I hope that too! And thank you, always, for your contributions. You know how I appreciate them! gpuccio
EugeneS at #390 and Phineas at #397: :) This is an important point. Thank you for your contribution! gpuccio
fifthmonarchyman at #382: Very interesting. Again, give us details and keep us updated. :) gpuccio
DNA_Jock: I would very much appreciate a comment from you to my example in post #444, regarding post-specification. gpuccio
Adapa at 374: "You guys look at one result after the fact then confuse it with a before the fact prediction and claim “ZOMG that result is too improbable it must be designed!!” You could make the same erroneous claim with anyone who won." The old wrong silly argument. Try to compare these two statements: a) A lottery which sold 10^150 tickets was won by one of the people who acquired a ticket. Post-specification: "one of the people who acquired the ticket". Probability of the event (as judged by the post-specification): 1. b) A lottery which sold 10^150 tickets was won by the brother of the functionary who presided over the extraction to check its regularity. Post-specification: "the brother of the functionary who presided over the extraction to check its regularity". Probability of the event (as judged by the post-specification): 1:10^150 (if there is only one brother :) ). Conclusions: I leave them to you (or the judge!). This is an example of design detection (and the detected design is a fraud). I would appreciate a clear and explicit answer to this from you. Thank you. gpuccio
Zachriel at #373: "We can show that such a process can find solutions to complex problems." That's true. But I would add: complex problems which have already been defined, directly or indirectly, but the programmer, and can be solved computationally by the computational powers of the algorithm and computing machine. gpuccio
Zachriel: "That’s your claim, and you may be correct; but you argue that an algorithm can’t generate a sonnet, but restrict the algorithm from having access to the same background information as Shakespeare." Not exactly. I argue that an algorithm cannot generate an original sonnet with an original meaning on anything, also if the algorithm has access to some corpus of information (let's say some encyclopedia). I am not requiring that the sonnet should be as good as Shakespeare's (OK that would be really exacting!), or even some deep and beautiful piece of poetry. Indeed, in my general argument, I did not even require that it be a sonnet, or poetry at all: just that it could have original good meaning in English and be 600 characters long. So, my requests, and my indications of what an algorithm can do as far as we know today, are really limited. gpuccio
KF: "GP, I think the problem here is that on evolutionary materialism contemplation must reduce to computation, but the deterministic mechanistic side of algors is not creative and the stochastic side is not creative enough and powerful enough to account for FSCO/I and particularly spectacular cases of dFSCI such as the sonnets in question. KF" This is a very good and concise summary of the important point here. Thank you! :) gpuccio
In case you didn't know: I have a text file showing the first 15 minutes of learning for a programmed rudimentary intelligence, the ID Lab critter. https://sites.google.com/site/intelligenceprograms/Home/Run2LobeFor15Min.Txt The contents of its memory at each thought cycle (left then right lobe has control) can be reconstructed by saving the Data listed at each line in an array, using the Address as the element number to save the data at. I'm not sure whether this will be useful to you or not, but that's what the numbers indicative of intelligence look like. What matters in regards to purpose and meaning is in the way the motors are being controlled. This brings us back to Movement Is Happiness even when we just see and/or hear the right moves. In cells molecules like motor proteins do the muscle type work, while sensory molecules Address the memory that stores actions in Data elements called "genes". The systematics are the same as for our brain. The only difference is that the intelligence controls molecular motor systems inside cells, instead of muscles that power our limbs. Intelligence might not look like much when reduced down to numbers for motor/motion control, but that's how an intelligence works. I likewise had to get used to seeing the numbers temporally. What happens in one thought cycle depends on what happened in previous cycles before it and what will somewhat predictably happen after that. Each thought cycle is usually only one step in a learned task (such as navigating to the location its attracted to) that can take many thought cycles to complete. Complex behavior is the result of proper timing of actions from a relatively simple motor control system. After adding consciousness and other intelligence levels we contain we are found to be much more than a robot but thankfully ID theory only needs to explain the basics of the "intelligent" part, not the part that causes us to be "conscious". Gary S. Gaulin
Vishnu @433
How many iterations of an algorithm does it take to find (with proper fitness functions, of course) METHINKS KEITH IS AN IDIOT
Length of words in English Language follows Binomial Distribution with n=38 and p=0.220887 So,the Probability of "Vishnu"(x>=6) = 0.8740 The Probability of "Keith" (x less than or equal to 5) = 0.1259 Hence "METHINKS VISHNU IS AN IDIOT" is more appropriate Note: My handle has an underscore so you can't use the above calculations for my handle :-) Me_Think
Whoops! There goes "argument clinic" Joe again! "there is no evolutionary theory!!" "there is no evidence for evolution!!" "there are no known evolutionary mechanisms!!" Keep telling yourself that Joe. Don't mind the laughter from the life sciences professionals. :) Adapa
wd400:
There is no more doubt that mammals descend from eariler vertebrates that all vertebrates descent from chordates or deuterostomes.
And there is a lot of doubt for both as there aren't any known mechanisms capable of producing the transformations required. Joe
Adapa, there isn't any evolutionary theory and there isn't any evidence that natural selection can do anything beyond changing allele frequency. And it isn't the only mechanism for doing that. If you had some positive evidence then the calculations would be moot. The reason you need to provide H is because you don't have the positive evidence. Joe
Vertebrates are deuterostomes Pav, and chordates and craniates for that matter. You are talking as if the "vertebrate body plan" was a thing unto itself, so Zachriel is right to point out it is in fact nested within biological diversity. There is no more doubt that mammals descend from eariler vertebrates that all vertebrates descent from chordates or deuterostomes. wd400
Joe And you cannot provide H and you blame us for your failures. Got it You guys are the ones pushing a calculation that absolutely requires H, not us. Unlike ID, evolutionary theory relies on its own positive evidence and not some bogus probability value. Adapa
How many iterations of an algorithm does it take to find (with proper fitness functions, of course)... METHINKS KEITH IS AN IDIOT Vishnu
keith s:
The FSCO/I equation requires the calculation of P(T|H), where H includes “Darwinian and other material mechanisms”, per Dembski.
And you cannot provide H and you blame us for your failures. Got it. Joe
Realty lots of adjectives check assorted words in all caps check reference to body orifice check reference to bodily fluids check looks like all the bases are covered peace fifthmonarchyman
PaV So, please, if you can, explain to us how evolution takes place. Give us the steps, show us examples. And, of course, we’ll be very interested in all the “intermediate forms” that Darwinism supposes. You say you graduated from UCLA with a Biology degree yet you managed to not learn even the most basic things about evolutionary theory. Now you want Keith S. to give you a remedial course on evolution in a few paragraphs for all the things you couldn't grasp in four years. Interesting. Adapa
kairosfocus, your blithering aimed at "R" is apparently aimed at me but my username is not "R", it's Reality. Of course the usual malicious, hypocritical, mendacious, falsely accusatory barf you spewed is aimed at me and anyone else who doesn't kiss your butt. You said: "There is such a thing as fair and reasonably justified comment..." Yup, and all of my comments about you and to you are not only fair and THOROUGHLY justified, they're also CORRECT. Reality
PaV: "Tell me, was my conclusion wrong? Yes, or no." I already told you. It was fallacious. Logic not your strong point. Your position appears to be that I was a statistician familiar with the weaknesses of Dembski's method, but thanks to the phrasing I used, you knew I was not familiar with his writings. For some strange reason, you thought I was criticizing Dembski specifically. Hence the logic fail. "Because, believe it or not, you’re not the first statistician whose appeared at UD". Nor the last. Nor even a statistician. What are they teaching kids at UCLA these days? DNA_Jock
wd400: I'm waiting for DNA_Jock to answer me first. And, BTW, this is from BA77's link to "terradaily": "The paper is relevant to the big question of what fueled the Cambrian radiation, and why that event was so singular," said UC-Riverside's Hughes of Webster's study. It appears that organisms displayed "rampant" within-species variation "in the 'warm afterglow' of the Cambrian explosion," Hughes said, but not later. "No one has shown this convincingly before, and that's why this is so important." The variation was there from the beginning. Evolution didn't put the variation there first. (The quote refers to a paper that appeared in Science magazine about 7 years ago) The biggest weight hanging from the neck of Darwinism, not surprisingly, is the Fossil Record. Darwin knew from the beginning that the Fossil Record did not favor his theory. We now know this to be even more true. PaV
Zachriel: I'm so happy we have such smart people like you here. It really helps. This article at ENV tells us:
A new article in The Scientist, "Clocks Versus Rocks," reports a contradiction between the fossil record and the molecular data as regards the origin of placental mammals. The problem is that, as a fossil-based study led by Maureen O'Leary found last year, "placental mammal diversity exploded" starting around 65 million years ago, but as The Scientist now puts it, "Genetic studies that compare the DNA of living placentals suggest that our last common ancestor lived between 88 million and 117 million years ago, when the dinosaurs still ruled." So we have a conflict: fossils show the abrupt explosion of many modern mammal groups starting around 65 million years ago. However, living members of those groups are so genetically different that "molecular clock" studies suggest their origins must be deep into the Mesozoic, during the age of the dinosaurs. Which dataset are we to trust?
It is stupendous that you would talk about "deuterostomes" when I was specifically talking about vertebrates. I suppose it is your thought that the Cambrian vertebrate species arose originally from these "deuterostomes." But, of course, evidence, and not conjecture, are needed. PaV
Still waiting for the devestating revelation about the fundamental theorem of NS, too... wd400
But that’s not what we see in the Cambrian Explosion. We see almost all of major phyla/body-plans arise quickly (geologically speaking) and THEN diversify. We talk about fish and birds and reptiles and dinosaurs and mammals, but we’re really talking about “vertebrates.” The body plan was there from the beginning.
It would be a special lineage that diversified before it existed. There are also plenty of non-vertebrate chordates that share much of our body plan but aren't verebrates, and indeed non chordate deuterostomes that share our very early embryology, so I'm not sure which body plan was there at the start. wd400
BA77: Thanks for those great quotes. I hope for gpuccio's sake we can get back on topic. We/he should be discussing dFCSI PaV
D_J:
PaV wrote: BTW, I didn’t say that my statements “refuted” what you had written, did I! Well, DNA_J, did I say that? I was merely pointing out that that is NOT how Dembski’s method works. What I said was that it was clear that you hadn’t read Dembski’s book. And you have made it clear that I was right in reaching that conclusion. I then WENT ON to tell you the two things that are needed for Dembski’s method to apply.
DNA_Jock responds: Thank you for confirming your total logic fail. Those two statements did not represent any misunderstanding of Dembski’s method. I never suggested that they were a refutation thereof.
Wow. Have you flipped out? Again, did I say that what you wrote was a "misunderstanding" of what Dembski has written, or his thought on the subject? Of course not. It was quite evident that you were unfamiliar with his writings or you would have phrased things differently. Then I "supplied" you with some of the critical elements of his method. I fully expected that you would react the way that you did. Why? Because, believe it or not, you're not the first statistician whose appeared at UD. And we know where the weaknesses of Dembski's method lies. But, again, they are weaknesses, and not, what, in any other sector of science, would be invalidating. These "weaknesses" are why such a thing as dFSCI is being discussed here.
And they remain true. But you said “From this statement, I would conclude”
Tell me, was my conclusion wrong? Yes, or no. PaV
a few notes as to the 'top down' perspective: The Cambrian's Many Forms Excerpt: "It appears that organisms displayed “rampant” within-species variation “in the ‘warm afterglow’ of the Cambrian explosion,” Hughes said, but not later. “No one has shown this convincingly before, and that’s why this is so important.""From an evolutionary perspective, the more variable a species is, the more raw material natural selection has to operate on,"....(Yet Surprisingly)...."There's hardly any variation in the post-Cambrian," he said. "Even the presence or absence or the kind of ornamentation on the head shield varies within these Cambrian trilobites and doesn't vary in the post-Cambrian trilobites." University of Chicago paleontologist Mark Webster; article on the "surprising and unexplained" loss of variation and diversity for trilobites over the 270 million year time span that trilobites were found in the fossil record, prior to their total extinction from the fossil record about 250 million years ago. http://www.terradaily.com/reports/The_Cambrian_Many_Forms_999.html Dollo's law and the death and resurrection of genes: Excerpt: "As the history of animal life was traced in the fossil record during the 19th century, it was observed that once an anatomical feature was lost in the course of evolution it never staged a return. This observation became canonized as Dollo's law, after its propounder, and is taken as a general statement that evolution is irreversible." http://www.pnas.org/content/91/25/12283.full.pdf+html A general rule of thumb for the 'Deterioration/Genetic Entropy' of Dollo's Law as it applies to the fossil record is found here: Dollo's law and the death and resurrection of genes ABSTRACT: Dollo's law, the concept that evolution is not substantively reversible, implies that the degradation of genetic information is sufficiently fast that genes or developmental pathways released from selective pressure will rapidly become nonfunctional. Using empirical data to assess the rate of loss of coding information in genes for proteins with varying degrees of tolerance to mutational change, we show that, in fact, there is a significant probability over evolutionary time scales of 0.5-6 million years for successful reactivation of silenced genes or "lost" developmental programs. Conversely, the reactivation of long (>10 million years)-unexpressed genes and dormant developmental pathways is not possible unless function is maintained by other selective constraints; http://www.pnas.org/content/91/25/12283.full.pdf+html Dollo's Law was further verified to the molecular level here: Dollo’s law, the symmetry of time, and the edge of evolution - Michael Behe Excerpt: We predict that future investigations, like ours, will support a molecular version of Dollo's law:,,, Dr. Behe comments on the finding of the study, "The old, organismal, time-asymmetric Dollo’s law supposedly blocked off just the past to Darwinian processes, for arbitrary reasons. A Dollo’s law in the molecular sense of Bridgham et al (2009), however, is time-symmetric. A time-symmetric law will substantially block both the past and the future. http://www.evolutionnews.org/2009/10/dollos_law_the_symmetry_of_tim.html Evolutionary Adaptations Can Be Reversed, but Rarely - May 2011 Excerpt: They found that a very small percentage of evolutionary adaptations in a drug-resistance gene can be reversed, but only if the adaptations involve fewer than four discrete genetic mutations. (If reverting to a previous function, which is advantageous, is so constrained, what does this say about gaining a completely novel function, which may be advantageous, which requires many more mutations?) http://www.sciencedaily.com/releases/2011/05/110511162538.htm From Thornton's Lab, More Strong Experimental Support for a Limit to Darwinian Evolution - Michael Behe - June 23, 2014 Excerpt: In prior comments on Thornton's work I proposed something I dubbed a "Time-Symmetric Dollo's Law" (TSDL).3, 8 Briefly that means, because natural selection hones a protein to its present job (not to some putative future or past function), it will be very difficult to change a protein's current function to another one by random mutation plus natural selection. But there was an unexamined factor that might have complicated Thornton's work and called the TSDL into question. What if there were a great many potential neutral mutations that could have led to the second protein? The modern protein that occurs in land vertebrates has very particular neutral changes that allowed it to acquire its present function, but perhaps that was an historical accident. Perhaps any of a large number of evolutionary alterations could have done the same job, and the particular changes that occurred historically weren't all that special. That's the question Thornton's group examined in their current paper. Using clever experimental techniques they tested thousands of possible alternative mutations. The bottom line is that none of them could take the place of the actual, historical, neutral mutations. The paper's conclusion is that, of the very large number of paths that random evolution could have taken, at best only extremely rare ones could lead to the functional modern protein. http://www.evolutionnews.org/2014/06/more_strong_exp087061.html Some Further Research On Dollo's Law - Wolf-Ekkehard Lonnig - November 2010 http://www.globalsciencebooks.info/JournalsSup/images/Sample/FOB_4(SI1)1-21o.pdf A. L. Hughes's New Non-Darwinian Mechanism of Adaption Was Discovered and Published in Detail by an ID Geneticist 25 Years Ago - Wolf-Ekkehard Lönnig - December 2011 Excerpt: The original species had a greater genetic potential to adapt to all possible environments. In the course of time this broad capacity for adaptation has been steadily reduced in the respective habitats by the accumulation of slightly deleterious alleles (as well as total losses of genetic functions redundant for a habitat), with the exception, of course, of that part which was necessary for coping with a species' particular environment....By mutative reduction of the genetic potential, modifications became "heritable". -- As strange as it may at first sound, however, this has nothing to do with the inheritance of acquired characteristics. For the characteristics were not acquired evolutionarily, but existed from the very beginning due to the greater adaptability. In many species only the genetic functions necessary for coping with the corresponding environment have been preserved from this adaptability potential. The "remainder" has been lost by mutations (accumulation of slightly disadvantageous alleles) -- in the formation of secondary species. http://www.evolutionnews.org/2011/12/a_l_hughess_new053881.html Verse: Genesis 1:25 God made the wild animals according to their kinds, the livestock according to their kinds, and all the creatures that move along the ground according to their kinds. And God saw that it was good. bornagain77
zac said What does it profit to change it to numbers? I say, It removes the string from it's context. For all the programer knows it's a representation of a protein or of fluctuation in the temperature of a heat source You say, It would certainly confuse Shakespeare. I say, That is the point when you separate the string from the context PS FYI I feel my frustration level increasing time to take a break peace fifthmonarchyman
PaV: However, per Darwin, the characteristics that are used to classify the “class” would have developed over long stretches of time, instead of being there from the beginning. Yes, changes accumulate in each lineage. PaV: Mammals evolved, but the fundamental characteristics of what a mammal is appeared suddenly. Not necessarily. For instance, mammaries don't fossilize, but even simple secretions can help nourish and protect the young, then evolve over time due to reinforcing selection. If we look at the middle ear, another mammalian characteristic, there are some excellent fossils showing the transition. PaV: Yes, outwardly they change in many different ways, but it’s always the same body-plan. Sure. Humans are just modified deuterostomes, a tube with appendages to stuff food into one end. Microevolution! PaV: We see almost all of major phyla/body-plans arise quickly (geologically speaking) and THEN diversify. Sure. Humans are modified deuterostomes. Nothing much has changed since the Cambrian. (It's called adaptive radiation.) fifthmonarchyman: The first string is just a numerical representation of the sonnet. So? What does that do? The algorithm (or Shakespeare) has a dictionary, knowledge of grammar, scansion, poetic structure, the relationship of words, the catalog of poetry. The question is whether the algorithm can create a sonnet. What does it profit to change it to numbers? It would certainly confuse Shakespeare. Zachriel
Zac said, You mean the encoding is secret or something? So, do you think Shakespeare could do it? I say, The first string is just a numerical representation of the sonnet. Yes, Shakespeare did do it that much is assumed. peace fifthmonarchyman
"Well keith, Kf thinks he is calculating a p(T|H) using Durston’s fits data" Can I also point out that Durston's "fits" aren't exactly a well-accepted parameter in biochemistry. I think 5 or so papers site that work. Split between self-citation and one other group. Not exactly taking science by storm. And defining the "fits" based on conservation of sequences (selected by a specification) in living organisms and then calling that the target space for improved fitness in evolution is....problematic. It suffers all the issues I've outlined above. Not to say these approaches in quantifying bits/amino acid aren't useful. We make consensus proteins from the most highly conserved amino acids in a given domain. Nice, stable scaffolds result. REC
PaV,
Yes, Dembski has since been criticized because in many situations the actual mechanism, and the probability distribution associated with it, can be hard to know. This is a weakness.
It's a fatal weakness, because no one can calculate P(T|H) for a biological phenomenon. Look how kairosfocus is squirming to avoid the question. He knows he can't do the calculation, but he is ashamed to admit it. What's even worse is that even if Dembski (or KF) could calculate P(T|H), that wouldn't make CSI a useful concept. Here's why: You have to know that P(T|H) is low in order to attribute CSI to it. But if you already know that P(T|H) is low, then you don't need the CSI concept at all, because you've already determined that the phenomenon in question could not have evolved. It's circular: 1. Determine that something could not have evolved. 2. Assign CSI to it. 3. Conclude that it could not have evolved because it has CSI. It's amazing to me that ID proponents don't see the problem. At least Dembski was smart enough to dump CSI and move on to his "search for a search" stuff with Marks. keith s
PaV, My mocking style? Motes and beams, mate. "Go away little girl." Thank you for confirming your total logic fail. Those two statements did not represent any misunderstanding of Dembski's method. I never suggested that they were a refutation thereof. And they remain true. But you said "From this statement, I would conclude" I did, in a passage that you did not quote, allude to the difficulty in specifying a pattern "just ask a statistician" I quipped. Thank you too for confirming that you are unwilling to actually discuss the thesis of his book, that NFL theory can be applied to "evolutionary search". I note that Kairosfocus still hasn't calculated p(T|H). DNA_Jock
wd400:
Phyla (and other lineages) arise by speciation followed by divergence. That’s what Darwin was saying, that’s what modern evolutionary biology has shown.
But that's not what we see in the Cambrian Explosion. We see almost all of major phyla/body-plans arise quickly (geologically speaking) and THEN diversify. We talk about fish and birds and reptiles and dinosaurs and mammals, but we're really talking about "vertebrates." The body plan was there from the beginning. PaV
Zachriel:
By the same process of diversification from a common ancestor. A class is just a successful lineage that has diversified over a long period of time.
However, per Darwin, the characteristics that are used to classify the "class" would have developed over long stretches of time, instead of being there from the beginning. Mammals evolved, but the fundamental characteristics of what a mammal is appeared suddenly. This contradicts his taxonomic relativism. Yes, outwardly they change in many different ways, but it's always the same body-plan. That's how I see it. And I think that's how Meyer's sees it. I'm rather comfortable with his latest book. PaV
PaV at 380 comments:
In my life there have only been three books I’ve thrown down in disgust. The first was “Origin of Species” when Darwin dares to say that “species give rise to genera, genera to families, families to orders, and orders to classes.” (from memory) This is just silliness. Why? Because he has no justification whatsoever for stopping at “classes”! If you can’t get ‘higher’ than a “class,” then how do you get a “phylum”? So, where do the phyla come from? Are they there from the beginning? If so, how did they form? Well, of course, Darwin thinks he’s off the hotseat because at the end he says: “There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one . . .” Please explain who is doing this “breathing.” Darwin doesn’t. And then, so many editions later—when no one is watching, he drops the phrase.
To which wd400 at 385 retorts,,,
He had a pretty good reason — there were no Phyla is the classification used at the time. I guess he could’ve gone to Kingdom, but don’t quite see why you’d throw a book aside for not reaching the end of a series of (names).
Which is, given the fact that wd400 is intelligent, to purposely miss the point that PaV was making. The point PaV was making is that the highest rankings in classification is suppose to, in Darwin's 'bottom up' scheme of things, not be reached until a long slow process of gradual accumulation of changes. But the biological classification scheme itself presupposes a 'top down' structure that is opposite of what Darwin claimed. Darwin's claim again is as such:
“species give rise to genera, genera to families, families to orders, and orders to classes.” (from memory)
Yet the actual hierarchy of biological classification itself is as such:
Life, Domain, Kingdom, Phylum, Class, Order, Family, Genus, Species http://upload.wikimedia.org/wikipedia/commons/a/a5/Biological_classification_L_Pengo_vflip.svg
As they use to ask on Sesame Street when I was growing up, can you tell what does not belong in this picture? In Darwin's 'bottom up' scheme species were first. Yet in the actual classification species are last! Moreover, the 'top down' pattern, in which species appear last, which is completely antithetical to Darwin's 'bottom up' scenario, is, more or less, what we actually observe in the fossil record.
The Ham-Nye Creation Debate: A Huge Missed Opportunity - Casey Luskin - February 4, 2014 Excerpt: "The record of the first appearance of living phyla, classes, and orders can best be described in Wright's (1) term as 'from the top down'." (James W. Valentine, "Late Precambrian bilaterians: Grades and clades," Proceedings of the National Academy of Sciences USA, 91: 6751-6757 (July 1994).) http://www.evolutionnews.org/2014/02/the_ham-nye_deb081911.html Investigating Evolution: The Cambrian Explosion Part 1 – (4:45 minute mark - upside-down fossil record) video http://www.youtube.com/watch?v=4DkbmuRhXRY Part 2 – video http://www.youtube.com/watch?v=iZFM48XIXnk Chinese microscopic fossil find challenges Darwin's theory - 11 November, 2014 Excerpt: One of the world's leading researchers on the Cambria explosion is Chen Junyuan from the Nanjing Institute of Palaeontology and he said that his fossil discoveries in China show that "Darwin's tree is a reverse cone shape". A senior research fellow at Chengjiang Fauna [fossil site], said, "I do not believe the animals developed gradually from the bottom up, I think they suddenly appeared". http://www.scmp.com/comment/letters/article/1636922/chinese-microscopic-fossil-find-challenges-darwins-theory “Darwin had a lot of trouble with the fossil record because if you look at the record of phyla in the rocks as fossils why when they first appear we already see them all. The phyla are fully formed. It’s as if the phyla were created first and they were modified into classes and we see that the number of classes peak later than the number of phyla and the number of orders peak later than that. So it’s kind of a top down succession, you start with this basic body plans, the phyla, and you diversify them into classes, the major sub-divisions of the phyla, and these into orders and so on. So the fossil record is kind of backwards from what you would expect from in that sense from what you would expect from Darwin’s ideas." James W. Valentine - as quoted from "On the Origin of Phyla: Interviews with James W. Valentine" The unscientific hegemony of uniformitarianism - David Tyler - May 2011 Excerpt: The pervasive pattern of natural history: disparity precedes diversity,,,, The summary of results for phyla is as follows. The pattern reinforces earlier research that concluded the Explosion is not an artefact of sampling. Much the same finding applies to the appearance of classes. These data are presented in Figures 1 and 2 in the paper. http://www.arn.org/blogs/index.php/literature/2011/05/16/the_unscientific_hegemony_of_uniformitar
Moreover, disparity (large differences) preceding diversity (small differences) is not only found in the Cambrian Explosion but is found after it as well. In fact it is a defining characteristic of the overall fossil record.
Scientific study turns understanding about evolution on its head - July 30, 2013 Excerpt: evolutionary biologists,,, looked at nearly one hundred fossil groups to test the notion that it takes groups of animals many millions of years to reach their maximum diversity of form. Contrary to popular belief, not all animal groups continued to evolve fundamentally new morphologies through time. The majority actually achieved their greatest diversity of form (disparity) relatively early in their histories. ,,,Dr Matthew Wills said: "This pattern, known as 'early high disparity', turns the traditional V-shaped cone model of evolution on its head. What is equally surprising in our findings is that groups of animals are likely to show early-high disparity regardless of when they originated over the last half a billion years. This isn't a phenomenon particularly associated with the first radiation of animals (in the Cambrian Explosion), or periods in the immediate wake of mass extinctions.",,, Author Martin Hughes, continued: "Our work implies that there must be constraints on the range of forms within animal groups, and that these limits are often hit relatively early on. Co-author Dr Sylvain Gerber, added: "A key question now is what prevents groups from generating fundamentally new forms later on in their evolution.,,, http://phys.org/news/2013-07-scientific-evolution.html “It is a feature of the known fossil record that most taxa appear abruptly. They are not, as a rule, led up to by a sequence of almost imperceptibly changing forerunners such as Darwin believed should be usual in evolution…This phenomenon becomes more universal and more intense as the hierarchy of categories is ascended. Gaps among known species are sporadic and often small. Gaps among known orders, classes and phyla are systematic and almost always large.” G.G.Simpson – one of the most influential American Paleontologist of the 20th century “Given the fact of evolution, one would expect the fossils to document a gradual steady change from ancestral forms to the descendants. But this is not what the paleontologist finds. Instead, he or she finds gaps in just about every phyletic series.” – Ernst Mayr-Professor Emeritus, Museum of Comparative Zoology at Harvard University “What is missing are the many intermediate forms hypothesized by Darwin, and the continual divergence of major lineages into the morphospace between distinct adaptive types.” Robert L Carroll (born 1938) – vertebrate paleontologist who specialises in Paleozoic and Mesozoic amphibians “In virtually all cases a new taxon appears for the first time in the fossil record with most definitive features already present, and practically no known stem-group forms.” Fossils and Evolution, TS Kemp – Curator of Zoological Collections, Oxford University, Oxford Uni Press, p246, 1999
What Darwin predicted should be familiar to everyone and is easily represented in the following 'tree' graph.,,,
The Theory - Diversity precedes Disparity - graph http://www.veritas-ucsb.org/JOURNEY/IMAGES/F.gif
But that 'tree pattern' that Darwin predicted is not what is found in the fossil record. The fossil record reveals that disparity (the greatest differences) precedes diversity (the smaller differences), which is the exact opposite pattern for what Darwin's theory predicted.
The Actual Fossil Evidence- Disparity precedes Diversity - graph http://www.veritas-ucsb.org/JOURNEY/IMAGES/G.gif
bornagain77
KF, You claim that you can identify instances of design by calculating FSCO/I. The FSCO/I equation requires the calculation of P(T|H), where H includes "Darwinian and other material mechanisms", per Dembski. If you can't calculate P(T|H), you can't calculate FSCO/I. keith s
DNA_Jock:
So I haven’t read Dembski’s NFL. But I am curious. Maybe there is something else in the book that refutes one or both of my two statements above, which would thus restore your logic. Do tell. Could you also please describe to me how he applies the No Free Lunch Theorem to biological “Search”. In order to not be disingenuous, I should warn you that this latter request is a trap.
Why don't you buy the book, or check it out from a library, and read for yourself? BTW, I didn't say that my statements "refuted" what you had written, did I! Well, DNA_J, did I say that? I was merely pointing out that that is NOT how Dembski's method works. What I said was that it was clear that you hadn't read Dembski's book. And you have made it clear that I was right in reaching that conclusion. I then WENT ON to tell you the two things that are needed for Dembski's method to apply. Yes, Dembski has since been criticized because in many situations the actual mechanism, and the probability distribution associated with it, can be hard to know. This is a weakness. But I don't think it invalidates his method; it only demonstrates its limitation. When it comes to biological entities, IIRC, Dembski uses, or assumes, a uniform distribution over the bases in making his calculations. If you want to be picayune, yes, indeed, it is not such a distribution. But all of science relies on approximations. Nothing is exact. The mathematics are to complicated to do without them. And, they're all over the place. It's only ID that is raked over the coals about these kinds of things. PaV
fifthmonarchyman: Agreed but the programer is now free to use the same background information as well as any other information he can think of as long as it does not come from the original string. You mean the encoding is secret or something? So, do you think Shakespeare could do it? Zachriel
Zac said, That doesn’t remove the necessity of background knowledge. The original sequence is just the original sequence encoded. I say, Agreed but the programer is now free to use the same background information as well as any other information he can think of as long as it does not come from the original string. In theory he can access all the CSI in the universe except that which is original in the designer of the original string. peace fifthmonarchyman
PS: Thermo-D would point to spontaneous breakdown, for excellent reasons. There is a reason why protein assembly in the cell uses such a specifically constraining step by step numerically controlled complex assembling system in the Ribosome. The unconstrained trend would not go there. kairosfocus
BTW: It is a commonplace in modelling and math work as well as experimental sciences to use transformations of quantities and variables to make them amenable for further work. I discussed above on Laplace transforms and Z transforms. A common simple case is use of log-log and log-linear graph paper. Some would add plotting graphs too. And, algebraic and calculus manipulation of variables while reserving statistics and calcs for a later stage is part of good praxis, not least as it gets out of error propagation problems. But in this case the dominant reason is that moving to the logged out expression reveals more plainly what it is doing, extracting a measure of info beyond a threshold. Info being more directly accessible empirically. And, the relevant chancy hyps inn play at OOL have to do with thermodynamics and chemical kinetics that are long since discussed. At onward cases, design is already sitting at the table and the pattern of protein folds in AA space and distributions of AAs in proteins, already strongly point to islands of function not reasonably accessible to sparse search constrained by available time and atoms. The hoped for grand continent of incrementally improving function across a broad tree of life severely lacks empirical warrant. So it is uite reasonable to apply reasonable chance hyps that may be a bit biased but not too much so and not in ways correlated to finding THOUSANDS of islands of function in AA sequence config possibilities space. KF kairosfocus
DNA_Jock: Did none of this tip you off?
I will now present PROOF that the genome is, indeed, uniformly distributed across genome space!!!!! Trumpets, please!!! Drum roll!!!!
PaV
PaV: How do you get to a “class” or a “phylum”? By the same process of diversification from a common ancestor. A class is just a successful lineage that has diversified over a long period of time. Zachriel
PaV, I've no idea what you talking abut with regard taxonomy. What does this mean?
But, using Darwin’s methodology and thinking, would mean that the only way that you can arrive at a “class/phylum” level would be after a greater period of “diversification,” which would place the “phylum” at the “top” of the nested hierarchy in terms of geological time.
Phyla (and other lineages) arise by speciation followed by divergence. That's what Darwin was saying, that's what modern evolutionary biology has shown. What do you think the fundamental thereon of natural selection is based on -- you seem to be dying to shock us with this revelation... wd400
DNA_Jock: You've shown your true colors with your mocking style. So we know what kind of character you are. Look, you should be smart enough to realize that what I wrote as being a "proof" is nothing like a "proof." That is so obvious that you should have been looking for something else. Your response is exactly what I was getting at:
Note that I was pointing out that although gpuccio’s assumption re uniform p was wrong, I did not raise the issue to “somehow undermine ID”. Rather I allowed that his assumption was okay for practical purposes (in the particular context that we were discussing).
No, it's not an i.i.d. But it's almost like that. Yes, like CpG's and other such instances, we know that transitions/transversions aren't uniform. But, overall, given the entirety of the genome and what we see bases doing, it's a close approximation. And, that's the point. Population guys don't bother with it because it usually doesn't make a difference. PaV
PaV @ 376
DNA_Jock:
…you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.
From this statement, I would conclude you haven’t read Dembski’s NFL book. What is needed are two things: (1) recognition of the pattern, and (2) knowledge of the mechanism by which the pattern is formed—IOW, you have to be able to calculate the probability of the “pattern” happening by ‘chance’ given the mechanism utilized in developing the “pattern.”
Here’s the fun thing about logic, PaV. You can arrive at a factually correct conclusion via faulty logic. There is no contradiction between what I said, and your paraphrase of Dembski’s point. Although if Dembski really said that you need to know the mechanism by which the pattern was formed, it sounds like a death knell for ID. So I haven’t read Dembski’s NFL. But I am curious. Maybe there is something else in the book that refutes one or both of my two statements above, which would thus restore your logic. Do tell. Could you also please describe to me how he applies the No Free Lunch Theorem to biological “Search”. In order to not be disingenuous, I should warn you that this latter request is a trap. DNA_Jock
DJ, KS et al, it seems that we are back to, if you calculate and show, we deny. If you point out the exponentially more difficult nature of a search for a golden search that is blindly discovered and magically outperforms reasonable random searches that could reasonably produce observed diversity in proteins, that is not even noticed. If you point out why the calc is not needed as the information metric rooted in observable state and/or statistics of observable variation of proteins, we dismiss or ignore. If you show the root challenge from OOL forward, we are not interested. Such, sadly speaks for itself. KF kairosfocus
R: Perhaps, you would be well-advised to ponder a couple of def'ns from dictionaries before the current wave of evolutionary materialist scientism (a fair comment description of an ideology and associated school of thought, description not namecalling . . . cf below):
science: a branch of knowledge conducted on objective principles involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [Concise Oxford, 1990 -- and yes, they used the "z" Virginia!] scientific method: principles and procedures for the systematic pursuit of knowledge [”the body of truth, information and principles acquired by mankind”] involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [Webster's 7th Collegiate, 1965]
Contrast this, from only a decade after the 1990 OED defined science as above and at about the time of notorious tactics used by the same National Science Teachers Association in Kansas, which comes from a board level discussion:
Although no single universal step-by-step scientific method captures the complexity of doing science, a number of shared values and perspectives characterize a scientific approach to understanding nature. Among these are a demand for naturalistic explanations supported by empirical evidence that are, at least in principle, testable against the natural world. Other shared elements include observations, rational argument, inference, skepticism, peer review and replicability of work . . . . Science, by definition, is limited to naturalistic methods and explanations and, as such, is precluded from using supernatural elements [--> a strawman laced with implicit hostilities, the issue has been that here are reasonable and tested reliable signs that point to ART not blind chance and mechanical necessity as best causal explanation for certain things in the natural world, and that has been what has been on the table since Plato in The Laws Bk X 360 BC] in the production of scientific knowledge. [NSTA, Board of Directors, July 2000. Emphases added.]
In short, ideological imposition on the longstanding historically rooted definitions that shaped how major dictionaries reported on what science and its methods were in the 10 - 40 years before the US NSTA tried to define how teachers should teach students about science. But that reflects the wider issue that Harvard Biologist Lewontin reported as a member of the scientific elites:
. . . to put a correct view of the universe into people's heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural [--> notice again] explanations of the world, the demons [--> notice loaded word, echoing Sagan in the book being reviewed] that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [--> Already, we see the ideology of scientism defined in a nutshell, a priori evolutionary materialism will follow, it being well known that evolutionary materialist scientism uses presumed powers of evolutions to account for the observed cosmos from hydrogen to humans. NB: the claim advanced is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [--> another major begging of the question . . . multiplied by the imposition of a claimed monopoly of "Science" on begetting truth. Thus, evolutionary materialist scientism, which imposes materialistic conclusions before facts can speak . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [“Billions and Billions of Demons,” NYRB, January 9, 1997. If you imagine this is "quote mining" kindly cf the linked more extended, annotated cite.]
No wonder Philip Johnson replied:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
There is such a thing as fair and reasonably justified comment, R, and the history above (and much more) shows what has been going wrong. There really has been an evolutionary materialist magisterium that gained boldness in the post Sputnik years, and in recent decades has sought to impose a fairly radical a priori ideology on both science and science education. And so forth, but I will not further go into side issues and personalities in this thread. Your turnabout attempt fails. KF kairosfocus
wd400:
He had a pretty good reason — there were no Phyla is the classification used at the time. I guess he could’ve gone to Kingdom, but don’t quite see why you’d throw a book aside for not reaching the end of a series of anmes.
Because it's illogical. The only way that this makes any kind of sense is if you assume a few things. Darwin assumed that the earth was quasi-eternal, influenced by Dutton at Edinburgh. We know he was wrong about that. The second assumption is that along this quasi-eternal time line, species simply morph one into the other, so that what, at one point in time (one manifestation of his branching diagram), is a "species" becomes over some long time interval a "genus," only to then, over the next time frame, become part of a "family," and then an "order" and then BACK to being a "species," now that all sorts of its siblings have died off, and it's ready for more diversification of "character." (The notion of extinction is absolutely necessary for his view) Darwin sees this as almost endless. It's the "special theory of relativity" applied to taxonomy. While in 1859 you might countenance such a supposition, from the 21st century this looks like rubbish. Hence the book was thrown down in disgust. Here's Darwin himself: I see no reason to limit the process of modification, as now explained, to the formation of genera alone. If, in our diagram, we suppose the amount of change represented by each successive group of diverging dotted lines to be very great, the forms marked a14 to p14, those marked b14 and f14, and those marked o14 to m14, will form three very distinct genera. We shall also have two very distinct genera descended from (I); and as these latter two genera, both from continued divergence of character and from inheritance from a different parent, will differ widely from the three genera descended from (A), the two little groups of genera will form two distinct families, or even orders, according to the amount of divergent modification supposed to be represented in the diagram. And the two new families, or orders, will have descended from two species of the original genus; and these two species are supposed to have descended from one species of a still more ancient and unknown genus. How do you get to a "class" or a "phylum"? This is the problem. Why? Because when species are arranged, they're arranged into a heirarchy of either 'clades' or defining characteristics of what are assumed to be related species. The "class/phylum" would contain the entirety of all demarcated characteristics, which would be sub-divided into "orders", which are subdivided, etc. Each division will include a smaller amount of characteristics than found in the grouping above. But, using Darwin's methodology and thinking, would mean that the only way that you can arrive at a "class/phylum" level would be after a greater period of "diversification," which would place the "phylum" at the "top" of the nested hierarchy in terms of geological time. But the fossil record is just the opposite. And we know it. And I knew it. And so the book got tossed. This is exactly Meyer's argument in his Darwin's Doubt.
It’s based on some observations about how fitness changes in a population relative to genetic diversity.
That's what it tries to describe. That's not what it's based on. PaV
5th:
Another interesting thing is that the algorithm just keeps pluggin along for eternity. Only a non computable conscious agent has the ability to halt the program and discover that anything whatsoever of interest has been produced at all.
Yep. Endless numbers of monkeys, furiously typing away, Might make something worthy of Shakespeare one very fortunate day. But which of those studious simians will then stand up and say, "By Jove, this is quite good? It would make a fine play!" Phinehas
Well keith, Kf thinks he is calculating a p(T|H) using Durston's fits data, but Dembski would not approve, were he here. Durston's fit measures the average reduction in uncertainty associated with a residue, where the target is the sequence itself, plus its immediate neighbors of ~equal fitness. Of course the target should be ALL sequences with equal or greater fitness. And Durston's H is a random independent draw from the entire sequence space, which is so far removed from Dembski's "appropriate chance hypothesis" as to be laughable. So, still no calculation of p(T|H) for any biological. Ever. DNA_Jock
PaV @ 377 wrote:
I have a degree in biology from UCLA.
You might want to consider asking for your money back. PaV @ 376 wrote:
In the over ten years that UD has been around, no one has said that the mutation rate of an organism’s genome is NOT uniformly distributed. When population geneticists do their calculations here, they assume that the mutation rate is free to occur throughout the entire range of the genome.
You clearly haven’t been paying attention.
DNA_Jock @ 243, November 1st, on the elephant thread: I would use the word stochastic; I agree that modeling the individual transitions as uniform p is okay for practical purposes, although you might want to distinguish transitions from transversions.
Note that I was pointing out that although gpuccio’s assumption re uniform p was wrong, I did not raise the issue to “somehow undermine ID”. Rather I allowed that his assumption was okay for practical purposes (in the particular context that we were discussing). So your “there’s a tendency to be disingenuous” insult misses the mark. If I were kf, I would demand a retraction. Heh. Now I’ll offer you a little leeway, in that the transition/transversion distinction doesn’t affect the probability that nucleotide #234,123 will mutate, it merely biases the possible outcomes. However, you are still hopelessly wrong, since the probability that CpG will mutate is higher than for any other dinucleotide. With your degree in biology from UCLA, you should know this. Now that I think about it, given your demonstrated inattention to detail, perhaps UCLA was not at fault here. Rather than you asking them for your money back, perhaps they should be asking you for your diploma back. I promise to deal with your convoluted logic re Dembski’s NFL just as soon as I have stopped laughing. DNA_Jock
As predicted, KF is dodging the question. He cannot calculate P(T|H) for a biological phenomenon, and he knows it. keith s
This is a bit off-topic, but you can look at adaptations in cetaceans for some pretty obvious intermediates.
Intermediate in form doesn't mean intermediate in terms of evolutionary relationships. Joe
KS: With all due respect, you are off on a side track to the material question. A red herring led off to a strawman. I already pointed out that we can simply move forward through a log reduction analytically then have an info metric. That allows us to being to bear empirical observations on info based on state observation and statistics, which can allow us to go back through inverse logs, if that is what you want. Go get you Durston's result from 2007 and a calculator with ability to do 2^n. Where info values are - log probabilities. Just to show the point by making a calc, use: Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond log2(1/p) = 1285, so 2^(1285) = 1/p 2^1285 ~ 6 *10^386. p ~ 1.5 *10^-387 Not likely by any reasonable chance hyp. Of course, the debate is really what are the possible imagined hyps that are relevant, to which the answer is, first search for golden search that breaks the odds is a search in the power set of the original set, where for w ~ 10^386 possibilities we are dealing with a golden search space of 2^(6*10^386) possibilities, calculator smoking territory. No reasonably likely chance based variation that has to walk across AA config space to find the domain in which C-S lies, constrained by sparse search, is reasonably feasible, and the pattern of AA sequence space is such that there are not going to credibly be easy stepping stones of short Hamming distance apart; that is, hoped for Weasel like cumulative steps (ignoring for the moment Weasel's targetting and reward of non-function) are not credible. And in fact we should realise that 3 of 64 possible randomly chosen codons are stop codons. What is a short random step away is a STOP. Which is probably a built-in backup failsafe. That is, the usual out of calling on incremental success to climb the fitness hill, or exaptation of proteins doing something else etc etc, do not look very feasible. No reasonable chance hyp is likely to deliver a search for a golden search. As already pointed out. But that is not the root problem, the real problem is that we deal with only sparse possible search of very large config spaces with deeply isolated islands of function. We already know from sampling theory that in such cases the odds of hitting on islands of function are negligibly different from zero in times and scopes or atomic matter relevant to the sol system or the observed cosmos. For just 500 bits, the sol system's atomic resources can sample about 1 straw to a cubical haystack comparably thick to our galaxy. Go to 1,000 and that swallows up observed cosmos resources. (Remember, the first pivotal case is Darwin's pond or the like and the question is to pull out of available physical, thermodynamics and chemical interactions, a plausible framework for blind watchmaker thesis evo that ends in a gated, encapsulated metabolising, protein using cell with coded D/RNA and von Neumann self replication all to be explained. Enormous functionally specific complex organisation and associated information.) We know from the dynamics of complex interactive systems exhibiting FSCO/I, that correct organisation sharply constrains possible configs leading to isolated islands of function with vastly more non functional possibilities. Similar sparse search challenges obtain on the case of moving from one island to a different archipelago, i.e. a novel body plan or a few dozen. So, no we do not need to calculate p(T|H) though we can work back to it from information metrics that reveal what amount of real world exploration of e.g. proteins in AA space is possible and recorded across the world of life. The observed pattern is well known, thousands of diverse structurally isolated protein fold-function clusters, a lot of which have only a few. That, in light of sparse search, points to only a limited role for stochastic generation of folds. Which means that the other main engine of high contingency must be seriously considered, design. There's been much huffing and puffing and blowing at Dembski's CSI metric, but in the end all it needed to do was to establish that we are dealing with an info beyond a threshold situation. The info can be empirically estimated, as can reasonable threshold values. The result is, that the workhorse molecules of life are grossly unlikely to emerge by blind chance and/or mechanical necessity, and without hundreds of diverse proteins, no living cell. FSCO/I, on the other hand, routinely comes about by intelligently directed configuration. KF kairosfocus
PaV: What you want to allege are available are a whole host of viable intermediates. Where, in the fossil record, or among extant species, do we find such “intermediates”? Nowhere. This is a bit off-topic, but you can look at adaptations in cetaceans for some pretty obvious intermediates. Zachriel
It is amusing to see this discussion. Some people don't seem to realize that every time they post their comments they do in practice exactly what they are trying to refute theoretically, i.e. they infer design by reading their interlocutors' comments. Since it is possible to tell jibberish from meaningful text, so it is also possible to tell functional protein sequences from non-functional. That is objective science. Functions can swap or co-opt, true. But how likely is that in practice given the sparseness of functionality in protein state space? Keith, either your fitness should encode functional information (irrespective of how we measure it) or you have a blind unguided search. Your appeal to selection does not save the day IMHO. What is fitness? How do you define it? I am sure as soon as you start defining it in biological context in practice, you will have to encode functional information in there if you want to make it practically feasible. Data without the Turing machine is meaningless and so is the machine without data it is designed to process. EugeneS
PaV: However, that the starting point of their calculations is always using the assumption of uniformity of mutations along the string—at least when we’re dealing with SNPs—this implicity demonstrates that the assumption of population geneticists is that a uniform probability distribution applies to the genome. Uniformity of mutation is not the same as uniform probability distribution as applied to the genome. PaV: The first was “Origin of Species” when Darwin dares to say that “species give rise to genera, genera to families, families to orders, and orders to classes.” (from memory) This is just silliness. Why? Because he has no justification whatsoever for stopping at “classes”! Don't see that quote anywhere. In any case, Darwin doesn't stop at "classes", but considers whether "the theory of descent with modification embraces all the members of the same great class or kingdom." Then he goes so far as to consider whether "all animals and plants are descended from some one prototype." You may want to reread 'Origin of Species'. It is considered one of the most important scientific works in history. PaV: Evolution is blind and random. Natural selection tends to be nearsighted, but is more than capable of directing adaptation. fifthmonarchyman: step one… Remove the sequence from it’s context and represent it as a serious of numeric values. step two… see if an algorithm can reproduce the pattern in those values by any means whatsoever sufficiently enough to fool an observer. That doesn't remove the necessity of background knowledge. The original sequence is just the original sequence encoded. Zachriel
keith s: Before the insults, try thinking things through. Where is the heritable variation in that scenario? The whole point of the analogy is to highlight that NS ONLY functions when something of value has been arrived at. If the phrase "methinks it is a weasel" is essential to life, then all you have are dead descendants. Nothing is inherited until such time as the entire phrase is arrived at---randomly!!! What you want to allege are available are a whole host of viable intermediates. Where, in the fossil record, or among extant species, do we find such "intermediates"? Nowhere. Show me those "intermediates" and you will make me a believer in Darwinism. But---speaking of "cognitive dissonance"---you know, there is something called the "Cambrian Explosion." PaV
You say to make my case after I deal with the inadequacies of the my monkey case.
Here's one: in the real world fitness landscapes aren't points of perfect fitness surrounded by field of zero fitness. wd400
gpuccio said, The interesting point, however, is that the algorithm increases the computed complexity of the same pre-defined function: being equal to the binary digits of pi. It cannot generate complexity linked to a new, original function not coded, either directly or indirectly, in its software. I say, Another interesting thing is that the algorithm just keeps pluggin along for eternity. Only a non computable conscious agent has the ability to halt the program and discover that anything whatsoever of interest has been produced at all. Peace fifthmonarchyman
The first was “Origin of Species” when Darwin dares to say that “species give rise to genera, genera to families, families to orders, and orders to classes.” (from memory) This is just silliness. Why? Because he has no justification whatsoever for stopping at “classes”! He had a pretty good reason -- there were no Phyla is the classification used at the time. I guess he could've gone to Kingdom, but don't quite see why you'd throw a book aside for not reaching the end of a series of anmes.
R.A.Fisher, the architect of what we know as neo-Darwinism, formulated this “fundamental theorem.” Do you know what this “theorem” is based upon?
.It's based on some observations about how fitness changes in a population relative to genetic diversity. wd400
PaV, It's hard to believe that you actually have a degree in biology. School must have been a nightmare of cognitive dissonance for you. keith s
PaV,
You say to make my case after I deal with the inadequacies of the my monkey case. However, there are no inadequacies.
You must be joking. Here is your monkey example:
Here’s another way of looking at it: you have a million monkeys typing away at a typewriter, and every time that they don’t come up with “methinks it is a weasel,” you throw it away. How does that help the monkeys?
Where is the heritable variation in that scenario? keith s
Zac said Based on that, you have to calculate the information gain for the sonnet by subtracting all of Shakespeare’s background knowledge, I say, stay tuned...... I believe there way to separate original CSI in the sonnet from the CSI that comes from background information. step one... Remove the sequence from it's context and represent it as a serious of numeric values. step two... see if an algorithm can reproduce the pattern in those values by any means whatsoever sufficiently enough to fool an observer. Of course with the understanding that the algorithm can't reference the original string. I've been playing around with this for a few weeks and so far it seems to work. Peace fifthmonarchyman
keith s: You say to make my case after I deal with the inadequacies of the my monkey case. However, there are no inadequacies. R.A.Fisher, the architect of what we know as neo-Darwinism, formulated this "fundamental theorem." Do you know what this "theorem" is based upon? PaV
keith s: P.S. In my life there have only been three books I've thrown down in disgust. The first was "Origin of Species" when Darwin dares to say that "species give rise to genera, genera to families, families to orders, and orders to classes." (from memory) This is just silliness. Why? Because he has no justification whatsoever for stopping at "classes"! If you can't get 'higher' than a "class," then how do you get a "phylum"? So, where do the phyla come from? Are they there from the beginning? If so, how did they form? Well, of course, Darwin thinks he's off the hotseat because at the end he says: "There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one . . ." Please explain who is doing this "breathing." Darwin doesn't. And then, so many editions later---when no one is watching, he drops the phrase. The second book was Dawkin's The Blind Watchmaker when, after 40 pages or so of wandering around, he, out of nowhere, claims that "if the ant took one small step in the right direction and was rewarded, it could arrive at the fox in no time at all." (paraphrasing) This, too, is nonsense. How do you reward the biomorph, and with what.? But the worst of all is: "in the right direction!!! But NS is blind. Evolution is blind and random. There is NO direction! And the third book I dropped in disgust was Ernst Mayr's What Evolution Is. Here it was, here was the grandmaster at work, having laid the foundation for the transmutation of species and . . . . . . . what do we get? Gobbledygook. Hemming and hawing, kind of this, and throw in that, mix it up---you know, like a "tornado passing through a junkyard and producing a 747"----words of atheist Sir Fred Hoyle. So, please, if you can, explain to us how evolution takes place. Give us the steps, show us examples. And, of course, we'll be very interested in all the "intermediate forms" that Darwinism supposes. PaV
PaV:
I have a degree in biology from UCLA.
Then you have no excuse for not understanding evolution better than you do. Evolution works via heritable variation and selection (plus drift). Heritable variation is completely missing in your monkey example, and you're modeling the fitness landscape as absolutely flat with a single sharp peak. P.S. Yes, I know about the fundamental theorem of natural selection. Please make your case, after you have dealt with the inadequacies of your monkey example. keith s
keith s thinks it's our problem that he cannot provide H. No wonder he's an evo Joe
keith s: I have a degree in biology from UCLA. Do you know what the "fundamental theorem of natural selection" is, and who developed it? There's a follow-up. Beware. PaV
DNA_Jock:
…you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.
From this statement, I would conclude you haven't read Dembski's NFL book. What is needed are two things: (1) recognition of the pattern, and (2) knowledge of the mechanism by which the pattern is formed---IOW, you have to be able to calculate the probability of the "pattern" happening by 'chance' given the mechanism utilized in developing the "pattern." At Las Vegas, they know the "mechanism," and they know what "patterns" are to improbable to be happening by chance. In NFL, by Dembski, he uses the Caputo example of ballot tampering. And he calculates the odds of a 'Democrat being place first on a ballot of "x" names' and so forth. Caputo, I believe, was convicted on probabilities calculated in just this way. The problem that has been thrown in the face of Dembski is this: he has no basis upon which to assume that the DNA string of nucleotides in the genome represent a i.i.d--a uniform distribution. And since he has no assurance of said distribution, the NFL theorems do not, and cannot, apply. I will now present PROOF that the genome is, indeed, uniformly distributed across genome space!!!!! Trumpets, please!!! Drum roll!!!! In the over ten years that UD has been around, no one has said that the mutation rate of an organism's genome is NOT uniformly distributed. When population geneticists do their calculations here, they assume that the mutation rate is free to occur throughout the entire range of the genome. Now, it is true, that qualifications can, and must, be made. However, that the starting point of their calculations is always using the assumption of uniformity of mutations along the string---at least when we're dealing with SNPs---this implicity demonstrates that the assumption of population geneticists is that a uniform probability distribution applies to the genome. The only time they would contest this would be if they thought it would somehow undermine ID. IOW, there's a tendency to be disingenuous. PaV
PaV,
Here’s another way of looking at it: you have a million monkeys typing away at a typewriter, and every time that they don’t come up with “methinks it is a weasel,” you throw it away. How does that help the monkeys?
If that's how you think evolution works, then no wonder you're a IDer. Please read an introductory textbook on evolutionary biology, PaV. keith s
PaV In Boston this week, some man named “Paul Revere” won the state lottery. You’re response is: “Of course!” Isn’t this a silly way of looking at probabilities? Your answer highlights the lack of understanding of probability by Creationists. Assuming only one "Paul Revere" bought a ticket his chances of winning were identical to everyone else who bought a ticket. If you were predicting before the draw that PR would win his chances would be 1/tickets sold. You guys look at one result after the fact then confuse it with a before the fact prediction and claim "ZOMG that result is too improbable it must be designed!!" You could make the same erroneous claim with anyone who won. Adapa
PaV: I’ve already stated that NS does nothing more than “eliminate” successors. The process of building up is still “random.” We can show that such a process can find solutions to complex problems. Zachriel
Let me state the obvious. KF doesn't want to calculate a true P(T|H) for a biological phenomenon because he can't do it. This shouldn't be a surprise at all. Dembski introduced the idea of design detection based on P(T|H) at least as early as 2001. Thirteen years ago! Imagine if it had actually worked. By now, there would have been dozens (at least) of worked-out examples showing that various biological structures were designed. Dembski himself would have done a bunch -- CSI was his baby, and he would have wanted to demonstrate its power. Instead, nothing. No worked-out examples. In fact, Dembski himself appears to be (understandably) ashamed of CSI. He isn't working on it and he doesn't use it. It barely gets mentioned in his new book. Dembski gave up and is focusing his attention on his "search for a search" stuff with Marks. CSI failed for a lot of reasons, but perhaps the most embarrassing was that Dembski himself couldn't calculate it, because he couldn't calculate P(T|H) by his own definition of H. KF can't either, which is why he will dodge the question. keith s
keith s:
The dFSCI number simply confirms that obvious fact, using a calculation that was developed and understood long before gpuccio was born.
Indeed. Thus giving us confidence when it is not so "obvious." PaV
Adapa:
Creationists are notorious for coming up with really stupid ideas but demanding that science provide a numerical probability and specific steps for evolutionary changes that happened hundreds of millions of years ago has to be among the dumbest. We have ample physical evidence that the events did indeed occur and the mechanisms that caused them. That makes the probability of occurrence 1.0.
In Boston this week, some man named "Paul Revere" won the state lottery. You're response is: "Of course!" Isn't this a silly way of looking at probabilities? PaV
keith s:
We already know that 600-character posts or sonnets are not formed by pure random variation with no selection.
The straightforward meaning of the sentence somewhat eludes me. I think you're saying that a "sonnet" has been "selected" for. But it's "artificial" selection, and not "natural" selection. Darwin equates the one with the other. But is he right? I've already stated that NS does nothing more than "eliminate" successors. The process of building up is still "random." Here's another way of looking at it: you have a million monkeys typing away at a typewriter, and every time that they don't come up with "methinks it is a weasel," you throw it away. How does that help the monkeys? The only thing that could help the monkeys is if you substituted keys: e.g., you replace the letter "y" with "ea," you substitute the letter "x" with "et," and you substitute the letter "p" with "it", etc. However, this involves "active" use of intelligence. PaV
Adapa:
Creationists are notorious for coming up with really stupid ideas but demanding that science provide a numerical probability and specific steps for evolutionary changes that happened hundreds of millions of years ago has to be among the dumbest. We have ample physical evidence that the events did indeed occur and the mechanisms that caused them.
You only think that you do. However the peer-reviewed literature is devoid of blind watchmaker explanations. You are making stuff up, as usual. Joe
KF:
KS: you can analytically deduce log (p(T|H) and see that it is an information metric.
You can't take the log of P(T|H) unless you know the value of P(T|H). Compute the value of P(T|H) for a biological phenomenon, taking "Darwinian and other material mechanisms" into account, as required by Dembski. Show your work. You claim to be able to do it, so why not do it, for once? keith s
gpuccio: So, there is no doubt that Shakespeare used a lot of data and of data processing, like any of us, but what he did with those data would have never been possible as a simple algorithmic processing of the data themselves. That's your claim, and you may be correct; but you argue that an algorithm can't generate a sonnet, but restrict the algorithm from having access to the same background information as Shakespeare. Based on that, you have to calculate the information gain for the sonnet by subtracting all of Shakespeare's background knowledge, which was presumably quite extensive. Shakespeare knew Marlowe. Zachriel
Joe Unguided, gradual evolution posits incremental step-by-step processes to produce the diversity of life and its diversity of intricate systems and subsystems. In the absence of those steps there needs to be probabilities that the steps can occur and in the sequence required. And in the absence of that all you have is a glossy narrative that rivals Shakespeare but doesn’t belong in science. Creationists are notorious for coming up with really stupid ideas but demanding that science provide a numerical probability and specific steps for evolutionary changes that happened hundreds of millions of years ago has to be among the dumbest. We have ample physical evidence that the events did indeed occur and the mechanisms that caused them. That makes the probability of occurrence 1.0. Can you imaging demanding that a geologist provide the exact probability calculations and day by day height measurements for the formation of the Alps or else mountain building by plate tectonics is falsified? That's exactly how stupid this latest demand is. IDers are the only ones whose argument relies on the precise calculations of unknowable probabilities. Yet another reason they are laughed at by established science. Adapa
Zachriel:
Instead of what you consider non-functional steps, we could have a population of words that are ruthlessly selected for function, no close matches allowed. Do you think we could evolve some long words by this process?
Yes, if someone wrote a program to evolve words by whatever means, I am sure the program would do so if the programmer was competent. Yes, if organisms are intelligently designed to evolve long proteins, then they should be able to do so if the intelligent designer was competent enough. Next :razz: Joe
Unguided, gradual evolution posits incremental step-by-step processes to produce the diversity of life and its diversity of intricate systems and subsystems. In the absence of those steps there needs to be probabilities that the steps can occur and in the sequence required. And in the absence of that all you have is a glossy narrative that rivals Shakespeare but doesn't belong in science. So tell us- what is H and show your work. Lead by example, for once. Joe
This is too funny as evos are oblivious to the fact that they need to provide the H in P(T|H) and they think that actually helps them! Joe
wd400 @356
you can analytically deduce log (p(T|H) and see that it is an information metric
What? It's a log transform of probability. Are you really saying that every time a statistician works in log-space (because it's easier to take sums than products, and it can prevent underflow) they start working on "information"?
In all likelihood, yes. Although, if anything we use the partial derivative of that as our information. Bob O'H
Reality @ 357. Rant noted. Barry Arrington
F/N: I think it would be worth the pause to watch: http://www.youtube.com/watch?v=d2afuTvUzBQ&feature=related kairosfocus
GP, I think the problem here is that on evolutionary materialism contemplation must reduce to computation, but the deterministic mechanistic side of algors is not creative and the stochastic side is not creative enough and powerful enough to account for FSCO/I and particularly spectacular cases of dFSCI such as the sonnets in question. KF kairosfocus
Barry, since you're likely relying on this: "(b) trying to give the false impression that the victim trying to defend himself is the one who started the quarrel.", maybe you can show that kairosfocus and other ID-creationists are the victims and didn't "start the quarrel"? I have been commenting here for a short time. kairosfocus has been spewing his malicious, mendacious, sanctimonious, libelous, hypocritical, falsely accusatory attacks against "evomats" and their "ilk" and "fellow travelers" for a long time. kairosfocus, you, and the other ID-creationists have been starting and perpetuating quarrels (and worse) from the moment that you and your "ilk" first tried to 'wedge' your theocratic religious agenda into science, public education, and politics. Reality
you can analytically deduce log (p(T|H) and see that it is an information metric</blockquote? What? It's a log transform of probability. Are you really saying that every time a statistician works in log-space (because it's easier to take sums than products, and it can prevent underflow) they start working on "information"?
wd400
PS: If you had bothered to consider context, you would have seen that I have not made empty assertions but can back up every point I have made. Your turnabout based on snip and snipe is revealing. Especially as the point being defended is a schoolyard taunt mockingly dismissive twisting of FSCO/I, a descriptive term that I happened to highlight as an observable fact just a few hours ago, here. kairosfocus
Zachriel: I am not sure what to say. I agree on many of your last comments addressed to me, about Shakespeare and similar. To be more clear about my personal position on the role of consciousness in algorithmic cognition, I want to say that I absolutely recognize that Shakespeare had a lot of information coming from his environment, his personal history, his experiences, and so on. Much of that experience can certainly be elaborated in algorithmic ways, and there is no doubt that our conscious processes use many algorithmic procedures to record and transform many data. My point is different. My point is that being conscious, and having the conscious experience/intuition of meaning (for example, the intuition that something exists which can be considered true, and the basic intuitions of logic, and many other things) and of purpose (the subjective experience that things can be considered desirable or not, and that each conscious representation has a connotation of feeling) and of free will (that we can in some mysterious way influence what happens to us and to the world about us in alternative ways according to some inner choices), all that has a fundamental role in our ability to cognize, to build a map of reality, to output our intuitions to material objects, to design. So, there is no doubt that Shakespeare used a lot of data and of data processing, like any of us, but what he did with those data would have never been possible as a simple algorithmic processing of the data themselves. It was the result of how he represented those data in his consciousness, of what he intuited about them, of how he reacted inwardly to them, of how he reacted outwardly as a consequence of his inner representations and reactions. All those steps depend on the simple fact that in conscious beings data generate conscious representations and that those conscious representations generate new data outputs. A non conscious algorithm lacks those steps, and is therefore confined to algorithmic processing of data. gpuccio
Reality, enough has been said to show the point as just again outlined to KS, which holds for you too. KF kairosfocus
KS< you can analytically deduce log (p(T|H) and see that it is an information metric. Information being observable through various means can then feed in back ways. Where also stochastic patterns can be used to project back to underlying history, statistical factors and dynamics at work. Indeed, that is how info in English text considered as a stochastic system, is estimated. For simple case E is about 1/8 or typical English text. KF kairosfocus
Reality: Please do some homework on dynamic-stochastic systems, observability of systems and the issue of inferring path in phase space from observable variables, and more. Think about brownian motion as an observable and then about random walk of molecules in a body of air that is drifting along as part of a wind as what may be inferred, and inde3ed ponder how Brownian Motion contributed to acceptance of the atom as a real albeit then invisible entity. KF kairosfocus
DNA_Jock at #336: Thank you for you good comments about that paper. Of course, I don't agree with all that you say, and I really want to discuss that paper in detail with you, but I think that I need some time and serenity to do that, so I will not answer your points immediately. I will try however to take the discussion as soon as possible. For the moment I have not much time, and I still want to monitor the general discussion in this thread, until it is still "hot". :) Any thoughts on my #323? I ask in a very open manner, because I have tried there to outline some very general points which are certainly very open to discussion, but IMO extremely important. I just wondered if you have specific opinions on some of them. gpuccio
KF:
KS:going to information metrics, through log reduction, that opens up a world of direct and statistical info metrics, as you full well know or should know. Game over.
KF, you can't take the logarithm of P(T|H) without calculating P(T|H). Game on. keith s
Thank you Reality @ 345 for your demonstration of Darwinian Debating Devices #2: The “Turnabout” Tactic. Barry Arrington
kairosfocus, is your gibberish supposed to mean something? And can you show where I appealed to ANY authority? You've been challenged to "calculate a true P(T|H) for a biological phenomenon — one that takes “Darwinian and other material mechanisms” into account, to borrow Dembski’s phrase." Why are you so afraid to "Deal with the substance"? Reality
R: FYI, appeal to was it in a peer reviewed journal article (actually closely linked terms are and the concept is routine in engineering) is in fact an appeal to authority as gate-keeper. KF kairosfocus
kairosfocus, you play your malicious, mendacious, libelous, schoolyard bully mental level taunt games: "...never mind what evo mat ideologues in lab coats and their fellow travellers want to decree. KF" "The resort to such at this late date is a mark of patent desperation. KF" "So, while it is fashionable to impose the ideologically loaded demands of lab coat clad evolutionary materialism and/or fellow travellers..." "I no longer expect you to be responsive to mere facts or reasoning, as I soon came to see committed Marxists based on their behaviour..." "...uniformity reasoning cuts across the dominant, lab coat clad evolutionary materialism and its fellow travellers..." "Not quite as bad as Judge Jones viewing Inherit the Wind to set his attitude on the blunder that this accurately reported the Scopes affair, but too close for comfort. And don’t try to deny, I went through that firsthand back in the day and have seen the abuse continue up to uncomfortably close days. Do you really want to go to a rundown of infamous manipulative icons of evo mat ideology dressed up in a lab coat? KF" Yet you hypocritically spewed this "...playing the schoolyard taunt game simply shows a mental level akin to schoolyard bullies — especially when that is to try to dismiss an observable fact as simple as an Abu 6500 fishing reel. I suggest you avoid such in future. KF" And this: "Personalities via loaded language only serve to hamper ability to understand; this problem and other similar problems have dogged your responses to design thought for years, consistently yielding strawman caricatures that you have knocked over." I suggest you avoid such in future. Reality
Adapa, you full well know you resorted to a schoolyard taunt tactic, as all can see by scrolling up. Twisting terms to create mocking taunts -- and here in the teeth of a direct demonstration of the described reality -- speaks volumes and not in your favour. Now you have resorted to the brazen denial when called out. Please think about the corner you are painting yourself into. KS: By going to information metrics, through log reduction, that opens up a world of direct and statistical info metrics, as you full well know or should know. Game over. KF kairosfocus
KF, As we keep telling you, it is utterly trivial to go from P(T|H) to log P(T|H) and back again. Logarithms and antilogarithms are easy. P(T|H) is hard. If you can't calculate P(T|H), you can't take its logarithm. You need to show that you can calculate a true P(T|H) for a biological phenomenon -- one that takes "Darwinian and other material mechanisms" into account, to borrow Dembski's phrase. You say you can do it. Let's see you back up your claim. keith s
Kairosfocus, I take your response @340 as an assertion that you can, in fact calculate log p(T|H) for a biological. Care to demonstrate? Note that not one of your numerous comments-closed-FYI-FTR posts do this. DNA_Jock
kairosfocus Adapa, playing the schoolyard taunt game simply shows a mental level akin to schoolyard bullies — especially when that is to try to dismiss an observable fact as simple as an Abu 6500 fishing reel. I suggest you avoid such in future. KF All I did was point out that the parameter you claim is "amenable to observation and even quantification" has never been used in the scientific community. Not once, not ever. I would have pointed that out to you on one of your many identical threads crowing about how wonderful FSCO/I is but you bravely closed comments in every one. Adapa
DJ:Actually not, as it is fairly easy to get information numbers for DNA, RNA and even proteins, as has been done. That is not the full info content of life forms, but it is a definite subset and gives the material result already. Believe you me once I saw the power of transformations to move you out of a major analytical headache, that was a lesson for life. Of course evaluating Lapalace transforms is itself a mess but the neat thing is that this is reduced to tables that can be applied, and integrals and differentials have particularly simple evaluations. Indeed, in evaluating diff eqn solutions using auxiliary eqns, you are using such transforms in disguise -- why didn't they just use the usual s or p and done. Similarly, going to operators form is the same thing. (I love the operator concept, the Russians make some real nice use of it.) The transformation to information is similarly though much less spectacularly, a breakthrough. For info is amenable to both evaluation on storage capacity of media and by application of statistics of messages. The statistics of the messages, whether text in English or patterns of AA residues for proteins etc, can then tell us a lot about the real world dynamic-stochastic process and the adaptations to particular cases involved. (That is what I was hinting at in talking on real world Monte Carlos. Down that road, systems analysis.) KF kairosfocus
kairosfocus said: "Descriptive terms linked to observables and related analyses and abbreviations do not gain their credibility or substance from appeals to authority. Deal with the substance..." LOOK WHO'S TALKING! I DID NOT and DO NOT make ANY appeals to authority. YOU, on the other hand, CONSTANTLY make appeals to authority, and YOU portray YOURSELF as THE AUTHORITY ON EVERYTHING. And YOU are AVOIDING the "substance" of the NUMEROUS, SOLID REFUTATIONS of your DICTATORIAL, INCORRECT, and FALSELY ACCUSATORY logorrhea. Reality
kairosfocus: Z, the facts of how Weasel was used manipulatively for literally decades speak for themselves. We read Dawkins. He doesn't say it's a complete model of evolution. You didn't answer. Instead of what you consider non-functional steps, we could have a population of words that are ruthlessly selected for function, no close matches allowed. Do you think we could evolve some long words by this process? Zachriel
kf @ 333 Fascinating stuff. But you accused me thus "Notice how D-J persistently leaves off he inconvenient little log p(T|H)" Here's my point: if you can calculate p(T|H), you can calculate log p(T|H). and vice versa. Pointing out that you, kairosfocus, CANNOT calculate p(T|H) is utterly equivalent to pointing out that you CANNOT calculate log p(T|H). For any biological. The log transformation brings me no inconvenience whatsoever: it is utterly irrelevant. Regaqrding your use of fits to derive log p(T|H), see my comment re Durston in 336 above. DNA_Jock
gpuccio, Thank you for the very interesting Hayashi 2006 PLoS ONE reference. I had seen their figure 5 before, but I did not realize the extent to which they had experimental support for their view of the landscape. This paper is quite the show-stopper for two assertions that are repeatedly made at UD. 1) There are islands of function. Apparently not:
The evolvability of arbitrary chosen random sequence suggests that most positions at the bottom of the fitness landscape have routes toward higher fitness.
I reckon that "most" smacks of mild over-concluding here, but we can say, conservatively, that over ~1% of random sequences have routes towards higher fitness. So much for "islands". 2) We can use Durston's measures of fits to estimate probabilities, as kairosfocus does in his always-linked... No, we can do no such thing. Per Hayashi, once we move to higher fitness, there are large numbers of local optima with varying degrees of interconnectedness. These local optima are constrained in a way that differs dramatically from the lower slopes of the hill. This is a total killer for any argument that tries to use extant, optimized proteins to estimate the degree of substitution allowed within less-optimized proteins. Bottom-up approaches are the only valid technique. It turns out that I was far more right than I thought I was... F/N: I note in passing that k=20 deep-sixes another ID-trope: "overlapping functionality or multiple constraints prevents evolution". Here each residue interacts with, on average, 20 others. Evolution, unlike a human designer, is unfazed. DNA_Jock
Z, the facts of how Weasel was used manipulatively for literally decades speak for themselves. Not quite as bad as Judge Jones viewing Inherit the Wind to set his attitude on the blunder that this accurately reported the Scopes affair, but too close for comfort. And don't try to deny, I went through that firsthand back in the day and have seen the abuse continue up to uncomfortably close days. Do you really want to go to a rundown of infamous manipulative icons of evo mat ideology dressed up in a lab coat? KF kairosfocus
Adapa, playing the schoolyard taunt game simply shows a mental level akin to schoolyard bullies -- especially when that is to try to dismiss an observable fact as simple as an Abu 6500 fishing reel. I suggest you avoid such in future. KF kairosfocus
D-J: That is actually fairly frequent in modelling and analysis. An abstraction or situation in one form is not very amenable to calculation or visualisation, but with a transformation, you are in a different zone where doors open up. Not totally dissimilar to integration by substitutions. Once we know something is information, we have ways to get reasonable values. And oddly, that then enables an estimate of the otherwise harder value by inverting the transformation in this case. (Coming back through an integration procedure is often a bit harder.) For instance, working with complicated differential equations can be a mess. Reduce using Laplace Transforms and you are doing algebra on complex frequency domain variables. Push another step and you are doing block diagram algebra. A bit more and you are looking at pole-zero heavy stretchy rubber sheet plots and wireframes, which allow you to read off transient and frequency response. A similar transform gets you into the Z domain for discrete state analysis with the famous unit delay function and digital filters with finite and infinite impulse responses, with their own rubber sheet analysis . . . just watch out for aliasing. (Did you forget that I spent years more in that domain than the time domain?) As would be obvious, save for the operating hyperskepticism that is in the driving seat. But then in the policy world over the past few weeks, I have been dealing with a few cases like that . . . and what drives me bananas there is the, I don't like diagrams and graphs retort to an infographic that reduces a monograph worth of abstruse reasoning to a poster-size chart. Adapa: Why are you drumbeat repeating what has been adequately answered long since by something open to examination? When a fact can be directly seen, there is no need for peer review panels to affirm it. And in this case, FSCO/I and dFSCI are simply abbreviations of descriptive phrases, and in fact they trace to Wicken's wiring diagram, functionally rich organisation discussion of 1979 and Orgel's specified complexity discussion of 1973 as you full well should know. The phenomenon is a fact of observation as blatant as the difference between a volcano dome pushing out ash including sand into a pile, and a few miles away, a child on a beach made from that same dome, building a sand castle. KF kairosfocus
Evolutionists still can't provide any probabilities for their position which relies solely on probabilities. And then, like little children, they try to blame ID for their FAILures. Joe
Adapa:
If FIASCO is so amenable to observation and even quantification then why has no one ever observed or quantified it in any real world biological cases?
We have provided one peer-reviewed paper that does so. AGAIN Crick defined biological information and science has determined it is both complex and specified. Joe
Wrong again Zachriel- Weasel shows how a TARGETED search is faster than a random walk. Joe
fifthmonarchyman: If fact the paper I linked demonstrates that humans are quite good at it. Hasanhodzic shows that people are good at distinguishing order. Market returns are not random, but chaotic. fifthmonarchyman: Complex Specified information is not computable That's the question, not an answer. If you have such a proof, we'd be happy to look at it. fifthmonarchyman: In fact I’m not sure the majority of critics have fully grasped that Darwinian evolution is simply an algorithm and is fully subject to any and all the mathematical limitations thereof. While models of evolution are algorithmic, that doesn't mean evolution is algorithmic. In particular, evolution incorporates elements from the natural environment. A simple example may suffice. Algorithms can't generate random numbers. However, an algorithm can incorporate information from the real world, including randomness. fifthmonarchyman: Actually what is objective is the number. By definition, a value is not objective if it depends on the individual making the measurement. fifthmonarchyman: Don’t be offended if I don’t respond to you as much as you would like. You're under no obligation to defend your position. Readers can make of that what they will. gpuccio: Because we know well that no existing designed algorithm, at least at present, can generate a piece of text of that length which has good meaning. Unless the text is already in the algorithm itself. It isn't necessary to have the text in the algorithm, though you do have to have a dictionary, rules of grammar, rhyming, scansion, poetic structure, word relationships, etc. No more than what Shakespeare had in his own mind. Let's say we had an oracle that can recognize whether a string of words has a valid meaning in English. "How camest thou in this pickle?" What the heck does that mean? Nevertheless, it got plenty of laughs on the Elizabethan theater. "I will wear my heart upon my sleeve." Anyway, let's say we have such an oracle. We might put our phrases before an Elizabethan audience and measure the applause, the same oracle that guided Shakespeare in his writing. Also given that phrases such as "the king" has more meaning than "king" as it is more specific. This is our gargantuan encyclopedia of phrases. Now, to make this fit into a computer, let's reduce our encyclopedia to a subset of this gargantuan encyclopedia. Certainly, it would be even harder on the algorithm, but easier on our memory. fifthmonarchyman: I can always make a designed algorithm which can output dFSCI, if I put enough complexity in the algorithm. Shakespeare had plenty of 'dFSCI' in his mind before writing any sonnets. fifthmonarchyman: When we say that algorithms are incapable of producing CSI. It is always assumed that cheating is not permitted. You permit the Shakespeare sonnet writer what you won't permit to the computer algorithm. fifthmonarchyman: Yet every proposed algorithm that yields false positives does just that. No, not every. Some generate solutions to external problems. gpuccio: The important point is: any algorithm which generates meaningful complex language must have that language in itself, either in the oracle or in the rest of the algorithm. Sort of like Shakespeare did. gpuccio: I think the most important point of all, which goes beyond the discussion about weasel or similar, is: what are the intrinsic limitations of an algorithm, however complex? If Shakespeare didn't know words and rhyme, he wouldn't have written sonnets. gpuccio: And the real meaning of meaning and purpose cannot be coded, because they are conscious, subjective experiences, and only those beings who have those experiences can recognize them. Sure. So an unfeeling algorithm could either mimic those feelings, or simply write about something else. "hate began here if a heart beat apart" kairosfocus: In short, here cumulative selection “works” by rewarding non-functional phrases that happen to be closer to the already known target. This is the very opposite of natural selection on already present difference in function. Dawkins’ weasel is not a good model of what evolution is supposed to do. It's not supposed to be a model of evolution. What it shows is that evolutionary search is much faster than random search. Instead of what you consider non-functional steps, we could have a population of words that are ruthlessly selected for function, no close matches allowed. Do you think we could evolve some long words by this process? Zachriel
kairosfocus F/N: dFSCI and FSCO/I as demonstrable facts amenable to observation and even quantification And still...
a Google Scholar search of the mainstream scientific literature for the last 10 years returns: ZERO scientific papers using “dFSCI” ZERO scientific papers using “FSCO/I” ONE scientific paper using “complex specified information” (CSI is too common an acronym) and that was Elsberry and Shallit’s disemboweling of Dembski’s popular-press published claims.
If FIASCO is so amenable to observation and even quantification then why has no one ever observed or quantified it in any real world biological cases? Adapa
Wait a sec! You are telling me that you CAN calculate log p(T|H), but you can't calculate p(T|H)? I can help with that. Rather, e can help with that. LMAO DNA_Jock
F/N 2: Notice how D-J persistently leaves off he inconvenient little log p(T|H) and the implication of this metric being info beyond a threshold thus opening up assessments of bio info that then address the testable result that things exhibiting FSCO/I (a relevant subset) are consistently designed, with say the cases of 15 protein families on record since 2007 in the literature thanks to Durston, as cases in point, just cf the infographic in the just linked? KF kairosfocus
F/N: dFSCI and FSCO/I as demonstrable facts amenable to observation and even quantification, here. With only one empirically reliable cause, intelligently directed configuration aka design. KF kairosfocus
The main one is that DFSCI is real and reasonably observable and quantifiable.
Yet still no calculation of p(T|H) for anything in biology. Oh well. DNA_Jock
DNA_Jock: OK, I can agree, but the fact remains that the oracle must be part of the algorithm, if the algorithm must work. I am not discussing here Dawkin's intentions or the interpretations of his intentions, I am not interested in that. The important point is: any algorithm which generates meaningful complex language must have that language in itself, either in the oracle or in the rest of the algorithm. I think the most important point of all, which goes beyond the discussion about weasel or similar, is: what are the intrinsic limitations of an algorithm, however complex? This is the point I have discussed here with fifthmonarchyman and which is related to Penrose's books about the Godel theorem and its consequences for theories of human cognition, and to the article by Bartlett about Turing oracles. My personal position is that conscious experiences have a fundamental role in human cognition and in the generation of original dFSCI. Therefore, a non conscious algorithm, however complex, has severe limitations if compared with a conscious cognizer. Of course, growing degrees of added information and of computational complexity can help a non conscious algorithm to simulate, to growing degrees, human cognition and the generation of dFSCI. But that comes always at the price of a higher increase in the algorithm than in the output. And it can never really generate new original specifications, for example new meanings which have not been in some way pre-coded, or new functions that have not in some way pre-defined. Why? Because a non conscious algorithm has no experience of meaning and no experience of purpose. It has literally no idea of what meaning and purpose are. And the real meaning of meaning and purpose cannot be coded, because they are conscious, subjective experiences, and only those beings who have those experiences can recognize them. There is only one scenario where an algorithm can apparently generate specified complexity higher than its own complexity. I have discussed that before. Let's say that we have a complex algorithm that can generate the binary digits of pi, by a complex computation. Let's say that the functional complexity of the algorithm is n bits. Now, the algorithm starts to work, and it starts top compute the digits of pi. At some point, it will have computed n+1 digits of pi. And, obviously, it can go on. At this point, the functional complexity of the outcome is apparently higher than the functional complexity of the algorithm which has generated it. And, going on, it can be made as higher as wanted. After all, the functional complexity of the output is equal to its complexity in bits, because there is only one binary string which corresponds to pi. But the point is, pi is an outcome that is computable algorithmically. OK, the algorithm to compute it is very complex, but if we elongate the outcome by increasing the number of computed digits, a time comes where the outcome is more complex than the generating algorithm. But then, and only then, our procedure to evaluate dFSCI must shift to the Kolmogorov complexity of the outcome. IOWs the dFSCI of the outcome, however long, becomes, from that moment on, equal to the complexity of the generating algorithm. IOWs the string of the generating algorithm becomes a "compression" of the outcome. The interesting point, however, is that the algorithm increases the computed complexity of the same pre-defined function: being equal to the binary digits of pi. It cannot generate complexity linked to a new, original function not coded, either directly or indirectly, in its software. So, the functional specification is the true marker of design, but the complexity is necessary to eliminate those simple pseudo functions that a conscious observer could apparently "recognize" in simple non designed configurations. gpuccio
DJ: Weasel as a case of cumulative targetted search has the information in from the first. KF MT: As a matter of fact, Weasel has often been promoted from the 1980's on in print and TV etc, as showing the creative powers of CV + DRS --> DWM --> TOL. Many, many people were persuaded thereby . . . as I can recall from how people spoke of it then and in the years since then. And provisions and caveats that eat up the point, should have led to the matter never having been raised in that way with such an example; I recall here the debate in physics edu about how legitimate it was to use instruments constructed on the premise of Ohm's Law to test the validity of said law, for students, and on how much should be said to them. But then, all of this is a side point at best. The main one is that DFSCI is real and reasonably observable and quantifiable. KF kairosfocus
gpuccio @ 308
Let me understand. So, the phrase “Methinks it’s like a weasel” was not in the algorithm, and came about by natural selection? Really interesting! Can you confirm that?
In addition to Me_think's reply at 317, I would add: “Methinks it’s like a weasel” was in the oracle, not in the search algorithm. If you want to discuss search algorithms, it is a good idea to (conceptually, at least) separate the searcher from the oracle. Dembski's “Deterministic Search” performs much, much better than the “Partitioned Search”, which reduces its “Active Information” by deliberately ignoring useful information provided by the oracle (his Partitioned Search ignores the oracle every time the oracle says "This letter is wrong"). The Weasel oracle only provides the Hamming distance to the target. Much less to go on. DNA_Jock
Really, Kairosfocus, you want to go there? Let me see if I have this straight. Dawkins writes a popular book, The Blind Watchmaker, in which he introduces a couple of toy examples to illustrate the power of differential reproduction. He analogizes from genes to "memes", introducing the idea that ideas can propagate by differential reproduction. icanhascheezburger follows. He also introduces a toy search, Weasel, in which he contrasts the performance of Monkeys at a typewriter with a hill-climbing algorithm. When he introduces it, he points out that evolution is not like this, saying
Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective ‘breeding’, the mutant ‘progeny’ phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn’t like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success.
Dembski realizes that even this toy presents a problem for his CoI law. He claims, erroneously, that Weasel contains a latching mechanism. When it is pointed out that it does not, the true weaseling begins. My favorite: there was a latching mechanism in the TBW version of Weasel, but it was removed for the BBC show (where it is clear that correct letters are getting mutated). Massive butthurt ensues. So much so that Dembski and Marks use Weasel as an exemplar of a "Partitioned Search", which it very obviously is not, in their IEEE paper. Kairosfocus, who is apparently never wrong, modifies his "Latching mechanism" claim to quasi-latching, or pseudo-latching, and furthermore tries to defend the claim that D&M are correct to refer to Weasel as a Partitioned Search. Much hilarity ensues.
Corrections have been on record for many years. (Where also, if one examines the printed cases, released by Dawkins, whenever a letter becomes correct, it never reverts. Conveniently, the original code is not available. This phenomenon can be duplicated by creating code that mimics what Dawkins claims, and choosing “good” examples. This speaks to likely side tracks that evade the main issue already documented above.) The bottomline is simple: admissions that reveal the fallacy of irrelevance were right there in TBW right from the beginning, decades ago. The resort to such at this late date is a mark of patent desperation.
ROFLMAO If you want to see truly patent desperation, just enter "Question 10" in Uncommondescent's search box. DNA_Jock
Me_Think said, No one in his right mind thinks Evolution is searching for a pre-specified pattern, least of all Dawkins. I say Yet every proposed algorithm that yields false positives does just that. Don't you find that odd? peace fifthmonarchyman
Steve @ 315
Good points KF. I noticed that in Dawkin’s book as well.
Note my response @ 317 to gp and KF No one in his right mind thinks Evolution is searching for a pre-specified pattern, least of all Dawkins. Me_Think
gpuccio @ 308
Let me understand. So, the phrase “Methinks it’s like a weasel” was not in the algorithm, and came about by natural selection? Really interesting! Can you confirm that?
Of course not ! 'Nature Selection' (note the quotes) is just a piece of code in his program which mimics selection.His aim was to show how the statement can be reached faster with 'Natural selection' algorithm you forgot to quote my comment in full. I clearly stated this:
@294 He also noted that the “experiment is not intended to show how real evolution works, but does illustrate the advantages gained by a selection mechanism in an evolutionary process”. He didn’t say his program detects design.
You keep forgetting, evolution is NOT hunting for specific patterns kairosfocus @ 312 I am not promoting it at all. Note my response to gp. No one in his right mind thinks Evolution is searching for a pre-specified pattern, least of all Dawkins. Me_Think
gpuccio said, I can always make a designed algorithm which can output dFSCI, if I put enough complexity in the algorithm. Even original dFSCI, if I put that original dFSCI in the algorithm. I say Exactly. When we say that algorithms are incapable of producing CSI. It is always assumed that cheating is not permitted. I find it truly amazing that that simple obvious fact has to be constantly repeated. Recall that in this very thread Keiths proposed an algorithm that simply printed out already existing sequences as a way to create false positives. peace fifthmonarchyman
Good points KF. I noticed that in Dawkin's book as well. He mentions -briefly- that what follows is not a good example of evolution as he proposes it and then proceeds to write page after page describing a program that demonstrates that if an intelligently designed algorithm is designed to evolve to a target, it can reach that target. He managed to get this little confidence trick passed a lot of people apparently. A shell game shuffle in prose. steveO
KF: Yes, the weasel is a die hard animal! :) gpuccio
Reality at #302: You are really confused. I am sure you are in good faith, but you are really confused. If I have not been clear enough, and that is in part a cause of your confusion, I apologize. But believe me, I am trying my best to be clear enough. I will try again. The elimination of an algorithmic origin refers, as I have said many times, to any algorithm which could be available in the system or the time span we are considering. In general it refers to non designed algorithms, if our purpose is to exclude design completely. Or we can choose to accept some algorithms which are already part of the system, even if they are or could be designed, if our purpose is only to check if additional design was necessary to generate the output we observe. I will be more clear. Let's say that our system is our planet and the time span its whole existence of about 4 billion years. What we want to analyze is if all the life forms we observe today could be generated by non design mechanisms assuming a planet without any life at the beginning, and the time span available. In this case, we can only consider algorithms available in the original scenario, or which could have become availabe after that, always by non design mechanisms. That means that, before using NS as a possible mechanism, we have to explain how living beings which reproduce, or some equivalent thing, originated by non design mechanisms. IOWs we have to explain OOL before we can use life to explain the further evolution of species. But we can also accept original life in some form, like prokaryotes, as a given, ans ask if what happens after that, the evolution of biological information, can be explained by non design mechanisms. This is a perfectly correct question. In this case, we are no more asking, at least for the moment, if original life (prokaryotes, in particular, in this example) could originate by non design mechanisms: we just take it as part of the original system, and we can use it, and its algorithms of reproduction, as a part of out explanation of what happens after that. IOWs we can use NF, and see if it helps. Is that clear? Please, check what I wrote to you in post #228:
Many of these objections arise from the simple fact that you always ignore one of the basic points of my procedure, indeed the first step of it. See my post #15: “a) I observe an object, which has its origin in a system and in a certain time span.” So, in the end, the question about the algorithms can be formulated as follows: “Are we aware of any explicit algorithm which can explain the functional configuration we observe, and which could be available in the system and the time span?” So, if your system includes a complex designed algorithm to generate strings made by English words by a Dictionary oracle, then the result we observe can be explained without any further input of functional information by a conscious designer.
What is not clear in that? More about algorithms. I can always make a designed algorithm which can output dFSCI, if I put enough complexity in the algorithm. Even original dFSCI, if I put that original dFSCI in the algorithm. I can write an original piece of free verses, for example: "How wonderful, exciting and frustrating at the same time it is to comment at uncommon descent!" which is probably absolutely original as a piece of poetry (not so good, however!), and then write a simple algorithm which outputs it as printed text. OK, and so? My algorithm is designed, but above all the original dFSCI in it was designed by me. The weasel algorithm is something like that, only the phrase is not original, but it is in the algorithm. The algorithm could have simply given it as output. Instead, it tries to arrive at it through RV and intelligent selection based on the previous knowledge of the phrase itself. And guess what? It succeeds! Ah. the wonders of the darwinist mind. You say: "You say that the text I posted contains 2009 bits (of functional information?). According to the “ID” arbitrary boundary of 500 bits the text therefor contains plenty of CSI-dFSCI-FSCO/I to be labeled as intelligently designed no matter what it means, if anything." Yes, 2009 bits of functional information linked to the specification "a piece of text made with english words" and the length of the text. That's how I have made the computation. And yes, I infer that it is designed. Either directly or through a designed algorithm which includes as an oracle the english dictionary, and therefore is more complex than the text itself. Why do I consider the algorithm? It's simple. The text has no meaning, as a whole, in English. The single words, instead, are correct, therefore have a meaning as words. IOWs. the text can be considered as a list of English words. (I am not considering for the moment the structure in non rhymed words, which however is very easy to be obtained algorithmically). A list of objects, a random list of objects, is very easy to be obtained by a simple algorithm, if the algorithm has a list of those objects and simply selects randomly some of them. That's why a simple algorithm with an English Dictionary (which is very complex) can easily do the trick. Now, a conscious being can do the same thing intentionally: I can just write a list of English words that I know. That would be directly designed, while the list generated by a computer who selects form a digital dictionary would be indireclty designed. So, that output is designed anyway. To be consistent with my original formulation, the final judgement is as follows: If your original system does not include a computer with a program which can generate a list of words form a digital dictionary of English, than the text is certainly directly designed by some conscious being. If your original system includes all that, than we cannot say: the text could be the output of that program, or it could be the result of a direct act of writing from a conscious designer. We cannot say, because there is no difference in the output itself which can guide us. The scenario is different if we have a text of that length which has good meaning in English. Then with that length, I would be sure that the text is directly designed. Why? Because we know well that no existing designed algorithm, at least at present, can generate a piece of text of that length which has good meaning. Unless the text is already in the algorithm itself. In that case, we have only a printing algorithm. I am really amazed at this statement of yours: "How do you know that it has “no good meaning in English”? For all you know it could have plenty of “good meaning in English” to someone. Keep this statement of yours in mind: “And it’s not that I test the sonnet for being a sonnet. I observe that it is a sonnet, and I test how likely it is to have a sonnet of that length in a random system.” You say that you don’t test the sonnet for being a sonnet, but you “observe” and obviously judge whether something is a sonnet or not (and original, complex, or otherwise) by whether it has “good meaning in English” to you or not. You even said in regard to the text I posted: “It is a text made with English words, in non rhymed verses. But it is not a sonnet.” How do I know that it has “no good meaning in English”? Are you serious? If it has meaning for you, please explain what that meaning is. "Which, will be thy noon: ah! Let makes up remembrance What silent thought itself so, for every one Eye an adjunct pleasure unit inconstant Stay makes summer’s distillation left me in tears" Meaning? Bah! Are you kidding? This is obviously a list either of single words or, more probably, of pieces of phrases. Meaning means that the whole piece of text conveys a consistent information which evokes a clear cognitive experience in our mind. Not individual disjointed phrases which have been obviously taken from some pre-compiled list. The posts here have good meaning in English (if there are no errors or typos in them). Yes, even keith's. :) You say: "You say that you don’t test the sonnet for being a sonnet, but you “observe” and obviously judge whether something is a sonnet or not" Yes, and so?. A sonnet has specific formal characteristics, for example the number of verses. The text you gave is not a sonnet, and anybody can easily see it. It's like observing that a blue object is not red. Are you really saying what you are saying? You say: "You even said in regard to the text I posted: “It is a text made with English words, in non rhymed verses. But it is not a sonnet.” Yes, I am culpable for that. I gave a correct description of what I was seeing. This is no inference or procedure. It's simply a true observation. What is wrong in it? It is a text made with English words. Is that wrong? in non rhymed verses. They are verses. Not very good, not specific types of verses, but under any generic definition of verses they are verses. Did I miss the rhymes? But it is not a sonnet. It is not. Among other things, sonnets cannot be so long. So, what is the problem? And why do you sneak in, in parentheses: "(and original, complex, or otherwise)"? Those are other problems. I did not judge if it is original or not. The complexity I computed. That has nothing to do with what is simply observable (english words, no good meaning, not being a sonnet, and so on). Then you say: "Notice this in your conclusion: “…the conclusion is easy: the poem is certainly designed…”, even though upthread you said this: “And I will never infer design for a sequence which is the result of a random character generator.”" And I maintain that. Maybe I can clarify a point which could confound you. When I speak of a random character generator, I am referring to an easy way to simulate a random system. Here we have not defined which random system could explain the origin of a text. In a sense, we are simulating a real problem. The meaning of "a random character generator" is a computer program which outputs random individual characters, exactly as it could happen in some natural random system whose outputs can be considered as characters. So, while I am perfectly aware that a computer program which generates random character is a designed thing, but I implied that it could be accepted as a convenient source if random strings. A random character generator, however, has no added information about the strings it generates: that's why it is a random character generator. Instead, you conclude: "Well, guess what? The text ‘sequence’ I posted is the output of multiple random text generators that are called sonnet generators. What was that you said about no false positives?" This is really funny. Your "random text generator" contains the words, or more probably phrases, that randomly compose the text you gave. All the information in the text (which however is not a good meaning of the whole text) is already in the software. All the software does is to randomize the sequence of those pieces of information, and indeed that sequence is completely random, and that's why the text as a whole has no meaning. I correctly inferred that the text was designed, either directly or indirectly by some software which was more complex than the text itself, and therefore designed. False positive? Why? It is a true positive. My inference is completely correct. Your "last word": "You also need to rethink this bold statement of yours: “My statement has always been simple: the procedure works empirically, as it is, with 100% specificity. In spite of all your attempts to prove differently.”" No. It remains bold, and it remains true. gpuccio
GP: I am astonished that people are still promoting Weasel and kin. Let me clip my remarks at IOSE: __________ >> vi: At this point, it is common for some to suggest that Dawkins' "Mt Improbable" can be climbed by the easy back-slope, step by step to the peak, as chance variations that give an increase in performance are rewarded with advantages that allow them to become the next stage of progress. And, of course, the "methinks it is like a weasel" example shows how a string of 28 random characters can, after maybe 40 - 60 generations, become the target phrase. For instance, in his best-selling The Blind Watchmaker (1986), pp. 48 ff. Dawkins published the following computer simulation "run": 1 WDL*MNLT*DTJBKWIRZREZLMQCO*P 2? WDLTMNLT*DTJBSWIRZREZLMQCO*P 10 MDLDMNLS*ITJISWHRZREZ*MECS*P 20 MELDINLS*IT*ISWPRKE*Z*WECSEL 30 METHINGS*IT*ISWLIKE*B*WECSEL 40 METHINKS*IT*IS*LIKE*I*WEASEL 43 METHINKS*IT*IS*LIKE*A*WEASEL vii: What is not so commonplace, is to see an admission of the implications of the stunning admission Dawkins had to make even as he presented the Weasel phrase "example" of the power of so-called "cumulative selection," even when the caveats are cited:
I don't know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. [[NB: cf. Wikipedia on the Infinite Monkeys theorem here, to see how unfortunately misleading this example is.] Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence 'Methinks it is like a weasel', and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence? . . . . It . . . begins by choosing a random sequence of 28 letters ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . . What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed . . . . Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [[TBW, Ch 3, as cited by Wikipedia, various emphases, highlights and colours added.]
viii: In short, here cumulative selection "works" by rewarding non-functional phrases that happen to be closer to the already known target. This is the very opposite of natural selection on already present difference in function. Dawkins' weasel is not a good model of what evolution is supposed to do. ix: At most, it illustrates that once we are already on an island of function, chance variation and differences in reproductive success may lead to specialisation to fit particular niches. Which is accepted by all, including modern Young Earth Creationists. And, more sophisticated genetic algorithms have very similar failings. For, (a) they implicitly start within an island of function, that (b) has a predominantly smoothly rising slope that gently leads to peaks of performance so that "hill-climbing" on "warmer/colder" signals will usually get you pointed the right way. x: In short, GA's do not only start on the shores of an island of function, but also the adaptation targets are implicitly pre-loaded into the program [[even in cases where they are allowed to wiggle about a bit] and so are the "hill-climbing algorithm" means to climb up to them. This point has been highlighted by famed mathematician Gregory Chaitin, in a recent paper, Life as Evolving Software (Sept. 7, 2011):
. . . we present an information-theoretic analysis of Darwin’s theory of evolution, modeled as a hill-climbing algorithm on a ?tness landscape. Our space of possible organisms consists of computer programs, which are subjected to random mutations. We study the random walk of increasing ?tness made by a single mutating organism. [[p.1]
xi: Plainly, this more sophisticated approach is a model of optimising adaptation by generic hill-climbing, within an island of function; i.e. this is at best a model of micro-evolution within a body plan, not origin of such complex, integrated body plans. xii: So, while engineers -- classic intelligent designers! -- may well find such algorithms quite useful in some cases of optimisation and system design, they fail the red-herring- strawman test when they are presented as models of microbe to man evolution. xiii: For, they do not answer to the real challenge posed by the design theorists: how to get to an island of complex function -- i.e. to a new body plan that for first life would require something like 100,000 base pairs of DNA and associated molecular machinery, and for other body plans from trees to bees, bats, birds snakes, worms and us, at least 10 million bases, dozens of times over -- without intelligent direction. xiv: Instead, we can present a key fact, one that Weasel actually inadvertently demonstrates. That is: in EVERY instance of such a case of CSI, E from such a zone of interest or island of function, T, where we directly know the cause by experience or observation, it originates by similar intelligent design. And, given the long odds involved to get such an E by pure chance -- you cannot have a hill-climbing success amplifier until you first have functional success! -- that is no surprise at all. >> ___________ Corrections have been on record for many years. (Where also, if one examines the printed cases, released by Dawkins, whenever a letter becomes correct, it never reverts. Conveniently, the original code is not available. This phenomenon can be duplicated by creating code that mimics what Dawkins claims, and choosing "good" examples. This speaks to likely side tracks that evade the main issue already documented above.) The bottomline is simple: admissions that reveal the fallacy of irrelevance were right there in TBW right from the beginning, decades ago. The resort to such at this late date is a mark of patent desperation. KF kairosfocus
Adapa- How many peer-reviewed papers use the blind watchmaker thesis? Joe
LoL! Reality doesn't understand the importance of determining something was intelligently designed! Reality must think that archaeology, forensic science and SETI are all wastes of time. Joe
Me_Think at #301: Those techniques are perfectly valid. They are procedures of "language structure recognition", and are derived form what we know of linguistic structures. But they are not techniques of "design detection", in the sense we are discussing here. I will be more clear. Let's say that I have a piece of text which I don't understand, but which could have some meaning in some unknown language. Like the Voynich manuscript. Then I can apply those procedures, and get the result that it has a recognizable language structure. OK. Now I can use that fact as a specification. I am at the same point where I am when I recognize that a piece of text has good meaning in English, only my specification is different. Now it is "having a language structure according to the procedure I used". I cannot use meaning to specify the text, because I don't understand what it means, indeed I am not even sure that it has some meaning. It could well have a language structure, and not a meaning. However, I have still the problem of design detection. As we have said, having a specification is not enough to infer design. We have to compute the dFSCI linked to that specification. So, we have to ask: how much specific information is necessary to have a piece of text of that length which can be recognized as structured language by the procedure I adopted? And we have to compute the target space and the search space. IOWs, we have to make a computation like the one I did for meaningful text in English. I have not done that, and I have no reason to do it. It is rather obvious that for the Voynich manuscript, that computation will allow a design inference. Why? Because it is a very long text, and any specific structure that can be positive to a detection procedure of that kind (of which I know nothing in detail) should be more than enough to exclude a random origin. But, again, I have not tried any specific computation here. So, I cannot even exclude a non designed algorithmic origin, because I don't know which regularities are checked by the procedure. However, if those regularities are derived from real languages, it it very likely that a non designed algorithmic origin is really unlikely. I will not say anything more about a scenario that I cannot analyze in detail. The point is: a function/meaning specification, be it obvious or not, is never enough for a design inference. We always need a formal analysis of the complexity linked to that specification. gpuccio
Me_Think: "He wrote a program to generate the sentence from alphabets and space. It took 40 generations to get the sentence by ‘Natural Selection’ algorithm ." Let me understand. So, the phrase "Methinks it's like a weasel" was not in the algorithm, and came about by natural selection? Really interesting! Can you confirm that? gpuccio
keith s: "I am perfectly fine with that." Just to be clear, when I say that I am agreeing with your estimate of my very small personal role, certainly not with your estimate of the ideas that I express, which are mostly not mine. I am perfectly responsible, of course, for how I express them, for the good and for the bad. gpuccio
fifthmonarchyman: Your interventions are really interesting. Please, keep us updated about your ideas and work. :) gpuccio
Adapa, until you come to a first functional configuration of organised components, you are in no position to deal with hill-climbing by differential reproductive success leading to culling. So, the problem is to cross the sea of non-functional configs (starting at molecular and cellular levels) to reach zones where reproduction of relevant body plans is possible. Starting with the first one, OOL. This case has the added value of requiring accounting for the von Neumann self replicator instantiated in the living cell. Such phenomena are FSCO/I rich. Blind watchmaker mechanisms have zero track record or prospective success of creating FSCO/I starting with Darwin's warm little pond or the like. FSCO/I is routinely produced by intelligently directed configuration, to the point where we are inductively justified in concluding it is a reliable sign of such design. That puts design at the table from OOL on, never mind what evo mat ideologues in lab coats and their fellow travellers want to decree. KF kairosfocus
Adapa & Reality: Descriptive terms linked to observables and related analyses and abbreviations do not gain their credibility or substance from appeals to authority. Deal with the substance, and in the case of the relevant general matter, functionally specific complex organisation wherein functionality arises through correct arrangement and coupling of component parts per a wiring diagram (which is informational), that is a commonplace of a technological era. It even applies to the symbol strings we use to communicate textually: S-T-R-I-N-G . . . That's the real reality. KF kairosfocus
Joe said: "And saying something is intelligently designed adds a great deal." Like what? Allah-did-it? Reality
I said: “That’s what I’d like gpuccio to figure out and demonstrate with his method; whether my example is randomly generated by a computer algorithm or if it’s designed by a conscious being.” gpuccio said: "As I have already said, that text can be generated both by a conscious being directly, or by a conscious being indirectly, through a designed algorithm. It is impossible to distinguish the two things. However, the text id designed in both cases. Only a designed algorithm, more complex than the text itself, can output it." First, in your 228 and 277 comments you appear to be mixing up and responding to Me_Think and me in an incoherent way. When I first read your 228 comment I stopped reading at the point where you quoted Me_Think because I was looking for your response to me. Now that I have read 228 and 277 I'll say this: You're playing games, and you destroyed your own arguments. The games you're playing include, but are not limited to: The way you bounce around with the word "algorithm". One minute it's something completely opposite of intentional, specific, intelligent design by a conscious being but the next minute all algorithms and the results thereof are non-random, intentional, and intelligently designed because they're part of a system that is non-randomly, intentionally, and intelligently designed by conscious beings (humans). This statement (and others) of yours confirms what I'm saying about the times you differentiate algorithms and their output from intentional, specific, intelligent design: "The only thing I want is to infer that original sonnets are generated by conscious beings, and not by algorithms." You'll likely come along and try to wiggle out by playing another game with the word "original" or the definition of "sonnets" but don't bother. When challenged or questioned about your claims you also conveniently attach the words "natural", "complex", "explicit", or "complex designed" to "algorithm" just to confuse things. Now, you say: "Regarding you poetry, it is rather simple. The piece obviously has no good meaning in English. Therefore, we cannot use that specification for it." How do you know that it has "no good meaning in English"? For all you know it could have plenty of "good meaning in English" to someone. Keep this statement of yours in mind: "And it’s not that I test the sonnet for being a sonnet. I observe that it is a sonnet, and I test how likely it is to have a sonnet of that length in a random system." You say that you don't test the sonnet for being a sonnet, but you "observe" and obviously judge whether something is a sonnet or not (and original, complex, or otherwise) by whether it has "good meaning in English" to you or not. You even said in regard to the text I posted: "It is a text made with English words, in non rhymed verses. But it is not a sonnet." Also in that statement of yours you use the term "random system" which in the context of this debate is the same thing as an algorithm that generates random characters or text (including sonnets or sonnet-like text). Of course you try to confuse the issues by also claiming that the algorithms being discussed do not generate anything random because they're intentionally, specifically, and intelligently designed by conscious beings. You say that the text I posted contains 2009 bits (of functional information?). According to the "ID" arbitrary boundary of 500 bits the text therefor contains plenty of CSI-dFSCI-FSCO/I to be labeled as intelligently designed no matter what it means, if anything. You also say: "So, the conclusion is easy: the poem is certainly designed, either directly or through a designed algorithm." There you go again playing a game with your ever changing labeling of algorithms. OF COURSE any computer algorithm that generates characters or text, whether random or otherwise, is designed but the output is NOT necessarily designed. That's why the output from a random generator (an algorithm) is called random. Notice this in your conclusion: "...the conclusion is easy: the poem is certainly designed...", even though upthread you said this: "And I will never infer design for a sequence which is the result of a random character generator." Well, guess what? The text 'sequence' I posted is the output of multiple random text generators that are called sonnet generators. What was that you said about no false positives? You also need to rethink this bold statement of yours: "My statement has always been simple: the procedure works empirically, as it is, with 100% specificity. In spite of all your attempts to prove differently." Reality
Joe @ 293
dFSCI answers a question science is asking, keith s. And saying something is intelligently designed adds a great deal.
I don't know about dFSCI for language design,but in real world, language are detected using techniques like checking for zipf distribution,clustering low entropy words, degree of local specificity - these were used to confirm that the Voynich manuscript is a structured written language and not some gibberish for fun and was not a hoax. I checked the Sonnet zipf and found the rho to be ZipfDistribution(1.16834) Me_Think
Reality check time: A Google Scholar search of the mainstream scientific literature for the last 10 years returns: ZERO scientific papers using "dFSCI" ZERO scientific papers using "FSCO/I" ONE scientific paper using "complex specified information" (CSI is too common an acronym) and that was Elsberry and Shallit's disemboweling of Dembski's popular-press published claims. Looks like the dFSCI FSCO/I CSI alphabet soup is sure making a huge impact on the scientific world. :) Adapa
keith s- at what steps does selection step in? How does it possibly make a difference seeing that it only eliminates the less fit? What you need to do is demonstrate that natural selection is being omitted and that it makes a difference. Otherwise your words are meaningless, as usual. Good luck with that Joe
Buried in the middle of KF's latest:
11 –> I know, you and TSZ generally wish to fixate on debating log [p(T|h)] — note the consistent omission in your discussions that we are looking at a log-probability metric, i.e. an informational one...
KF, That makes no difference, as you full well know or should know. :-) You can apply the log in one direction, and the antilog in the other. It's the same information, just expressed differently.
(and relevant probabilistic hyps as opposed to any and every one that can be dreamed of or suggested...
Your P(T|H) doesn't account for anything other than pure random variation. By omitting selection, you make your number useless and irrelevant for answering the question, "Was this designed?" keith s
Me Think- weasel had nothing to do with natural selection. Joe
Adapa- Ever find that alleged evolutionary theory? :razz: BTW biological information was Crick's idea. Science determined that it is both complex and specified. Joe
Joe dFSCI answers a question science is asking, keith s Unfortunately the question is "What hopelessly vague and subjective "alphabet soup" of a useless metric will the ID crowd dream up next? Adapa
Mung @ 285
I repeat. Dawkins claimed to be able to detect design at “METHINKS IT IS LIKE A WEASEL.”
He wrote a program to generate the sentence from alphabets and space. It took 40 generations to get the sentence by 'Natural Selection' algorithm . He also noted that the "experiment is not intended to show how real evolution works, but does illustrate the advantages gained by a selection mechanism in an evolutionary process". He didn't say his program detects design. Me_Think
dFSCI answers a question science is asking, keith s. And saying something is intelligently designed adds a great deal. It also appears that you have no idea how natural selection works. How convenient. Joe
Pav @287: Quickly, here is something to consider: though biologists might be unsure as to the “exact” function of a protein/enzyme like EcoRI, that it has at least ONE function constitutes, in my view of things, “specification.” That is, you have a string of nucleotides that can be transcribed and translated into a protein that is able to interact with other molecules in a determined and precise fashion(s).
I think you are still in danger of reifying what is merely a useful handle that we attach to a given protein. So I would re-arrange your sentence to read "that it has, in my view of things, at least ONE function constitutes “specification.”" That is, there's no specification without a specifier. I might make an exception for pi and e. And the idea that proteins interact with other molecules in a "determined and precise fashion" is something of a human construction too.
When dealing with protein families, what we’re talking about is like saying that we know that humans spread from Europe to England, and so we can also conclude that humans spread from the west coast of Africa to South America. You can swim the English channel, but you’ll likely die trying to cross the Atlantic.
Ironic that you used this analogy for protein families; humans DID spread from the west coast of Africa to South America. But they didn't take the 'direct route'. This is "Axe's mistake" in "The Evolutionary Accessibility of New Enzymes Functions: A Case Study from the Biotin Pathway". DNA_Jock
keiths:
All of the useful work is done by steps 1 and 2:
1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed.
The calculation adds nothing.
PaV:
This is not a serious answer. I pointed out to you the importance and purpose of step #3 in Procedure 2. You’re willfully ignoring it.
Are you referring to this?
The whole point of gpuccio’s “procedure” is to compare the recognition of “design” that is naturally made with the use of a particular language, and the values that are generated using dFSCI. Shouldn’t that be clear to you?
We already know that 600-character posts or sonnets are not formed by pure random variation with no selection. The dFSCI number simply confirms that obvious fact, using a calculation that was developed and understood long before gpuccio was born. Even gpuccio admits this:
keiths:
Thus, your contribution was nothing more than inventing an acronym for an old and well-known probability calculation.
gpuccio:
I am perfectly fine with that.
The dFSCI calculation answers a question that no one is asking. It adds nothing. keith s
Zac said. Humans are not good at recognizing randomness. I say, If fact the paper I linked demonstrates that humans are quite good at it. Zac said, If you define CSI to exclude algorithms — as you just did, then algorithms can’t create CSI, of course. I say, Did you catch the Tee-shirt equation CSI=NCF Complex Specified information is not computable You say, that’s not what is meant by objective. I say Actually what is objective is the number. A CSI value of X yields a design inference whether it found in sonnets or proteins. You may want more CSI that I do to make the inference but the value itself is objective. P.S. Don't be offended if I don't respond to you as much as you would like. As You know from our history you frustrate me greatly at times and I don't want to spoil my overall experience on this thread. peace fifthmonarchyman
Mung: Why? How are they relevant to the engineering problem? They're essential to the calculation. If Shakespeare just reworked a few things, then he didn't add as much information than if he created the sonnet ex nihilo. fifthmonarchyman: Of course my entire endeavor depends on on the human ability to eliminate random sequences. Humans are not good at recognizing randomness. fifthmonarchyman: If you eliminate the random and the parts of the string that can be algorithmically produced. You are left with “CSI” If you define CSI to exclude algorithms — as you just did, then algorithms can't create CSI, of course. fifthmonarchyman: By objective I mean that my standard is exactly the same for different objects. By the way, that's not what is meant by objective. Zachriel
gpuccio said Only a designed algorithm, more complex than the text itself, can output it. I say, Bingo fifthmonarchyman
DNA_Jock: Thanks for both responses. I won't have time to respond fully until tomorrow PM. But, first, thank you for your engaging style---not the lambasting, antagonistic name-calling we're used to (actually, your tone is much, much better than that). Secondly, thanks for the honest answer you gave to the subject of information and Nature's role in that. Quickly, here is something to consider: though biologists might be unsure as to the "exact" function of a protein/enzyme like EcoRI, that it has at least ONE function constitutes, in my view of things, "specification." That is, you have a string of nucleotides that can be transcribed and translated into a protein that is able to interact with other molecules in a determined and precise fashion(s). When dealing with protein families, what we're talking about is like saying that we know that humans spread from Europe to England, and so we can also conclude that humans spread from the west coast of Africa to South America. You can swim the English channel, but you'll likely die trying to cross the Atlantic. PaV
Again gpuccio thanks for the great thread. This is an example of how interesting ID discussions can be. Zach said, A random sequence is original by that definition, and even harder to duplicate. I say, I agree. Of course my entire endeavor depends on on the human ability to eliminate random sequences. If you eliminate the random and the parts of the string that can be algorithmically produced. You are left with "CSI" peace fifthmonarchyman
I repeat. Dawkins claimed to be able to detect design at "METHINKS IT IS LIKE A WEASEL." Perhaps keiths can tell us how Dawkins measured that. Mung
Zachriel:
Shakespeare had an extensive dictionary, knowledge of grammar, rhyme, scansion, and verse structure; not to mention an understanding of what people enjoy, and of the human condition. Can you quantify the amount of additional ‘information’ in a Shakespearean sonnet that is not found in the background knowledge?
Why? How are they relevant to the engineering problem? Mung
Is the ignorance merely feigned or is real? Mung
Reality: "That’s what I’d like gpuccio to figure out and demonstrate with his method; whether my example is randomly generated by a computer algorithm or if it’s designed by a conscious being." As I have already said, that text can be generated both by a conscious being directly, or by a conscious being indirectly, through a designed algorithm. It is impossible to distinguish the two things. However, the text id designed in both cases. Only a designed algorithm, more complex than the text itself, can output it. gpuccio
gpuccio Should I call that simply a lie? OK, let’s say that it is false. Should we call this whole scientifically worthless dFSCI goat rope a desperate attempt by a Creationist to prove to himself his God created everything? OK, let's just say that seems to be the case. Adapa
Adapa: "Because gpuccio told us he needs that info. In his language example he can’t calculate the dFSCI unless he knows the string is an intelligible English phrase. If you give him symbols in a language he can’t understand (i.e Chinese characters) he can’t calculate dFSCI. Again, that makes his test pretty worthless for design detection." As explained many times, the purpose is to distinguish true design from apparent design, not to recognize hidden design which is not apparent. You are obviously confused. gpuccio
PaV, I just realized that I forgot to address your question : "So, DNA_jock, let us ask you directly: do you, or do you not, believe that Nature itself is the “source” of the information found in ATPase?" I am not sure what you mean, probably because I am somewhat conflicted about what constitutes "information" in biology. I am quite confident that the designation "ATP synthase" is of human origin. When first discovered, the activity was referred to as an "ATPase", because it was detected for its ability to catalyze the reverse reaction. Enzymes do what they do, under different conditions, irrespective of how humans describe or classify them. EcoRI cuts DNA at the sequence GAATTC. Lower the salt concentration, and now it will also cut the sequence AATT, mimicking Tsp509I. We call it EcoRI-star activity, but it's just what the enzyme does. We have to be careful not to reify our description of what we have observed. I could argue that UGG does not "code" for tryptophan, it is merely a compound that, under the right circumstances, leads to the incorporation of tryptophan into a growing polypeptide. But, to be perfectly honest, I don't think of it that way: I do in fact think of an mRNA as carrying "information". But I might be guilty of reification too. The concept "information" is, I believe, extremely slippery. Sorry if this is rambling and potentially not what you were asking about. DNA_Jock
Adapa: "Since the method of calculating “dFSCI” only works for items already known to be designed claiming “no false positives in detecting design” is completely worthless." Should I call that simply a lie? OK, let's say that it is false. gpuccio
Reality at #264 (reposting #202):
All I see is you claiming that some English text that is obviously designed or already known to be designed is designed.
From my post #228: You say: “All I see is you claiming that some English text that is obviously designed or already known to be designed is designed. ” Ah! But this is exactly the point. a) “That is obviously designed” is correct, but the correct scientific question is: why is that obvious? And is that “obvious” reliable”? My procedure answers that question, and identifies design with 100% specificity. b) “or already known to be designed” is simply wrong. My procedure does not depend in any way from independent knowledge that the object is designed: we use independent knowledge as a confirmation in the testing of the procedure, as the “gold standard” to build the 2by2 table for the computation of specificity and sensitivity (or any other derived parameter). Please, see also this from my post #37: “Me_Think at #644: “gpuccio explained that dFSCI doesn’t detect design, only confirms if a design is real design or apparent design.” I don’t understand what you mean. dFSCI is essential to distinguish between true design and apparent design, therefore it is an essential part of scientific design detection. If you are not able to distinguish between true design and apparent design, you are making no design detection you are only making recognition of the appearance of design, which is not a scientific procedure because it has a lot of false positives and a lot of false negatives. So, just recognition of the appearance of design is not scientific design detection. On the contrary, dFSCI eliminates the false positives, and design detection becomes a scientific reality. Therefore, dFSCI is an essential part of scientific design detection.” Then you ask:
How does that demonstrate that IDists can calculate, measure, or compute (Which is the correct term?) CSI, dFSCI, FSCO/I, or FIASCO, and can verify the intelligent design in or of things that are not obviously designed and not known to be designed?
Please see precious point. I have demonstrate that I cam compute dFSCI for a piece of English language. That was exactly the purpose of the OP. I have already explained ad nauseam that the purpose of design detection is not to "verify the intelligent design in things that are not obviously designed", but to scientifically confirm the design origin of things which appear designed and are designed, distinguishing them from other things which appear designed but are not designed. To recognize some form of design that is not obvious is rather a problem of design recognition. After a design pattern which evokes design is recognized, then it is the task of design detection to measure the complexity linked to the pattern to ascertain if it is real design or apparent design. All the applications of my procedure are meant for objects which are "not known to be designed". The procedure is applied to the object without any direct knowledge of its origin (except for the definition of the system and the time span). So, all the objects to which the procedure is applied could in principle be either designed or not designed. After the application of the procedure, an inference is made. If the origin can be independently known, it is used as a gold standard to test the inference. This test allows us to verify that the procedure has no false positives when applied to designed artifacts, including language. When applied to objects whose origin cannot be independently assessed, as in the case of biological objects, it is applied as an inference by analogy, based on how well the procedure works for known artifacts. It's really strange that such simple concepts are so difficult to understand for some, even if I have repeated them many times in this same thread. Then you say:
And how does what you’re doing establish that CSI, and dFSCI, and FSCO/I are anything other than superficial labels?
Is that even a question? I have said what I had to say. You are free to decide if it has meaning, or if it is only a lot of superficial labels. Then you ask:
In regard to English text, what can you tell me about the text below? Is it a sonnet, or what?
It is a text made with English words, in non rhymed verses. But it is not a sonnet. Then you ask:
Does it have meaning? Does it have good meaning? If it has meaning or good meaning, what is it?
From my post #228: Regarding you poetry, it is rather simple. The piece obviously has no good meaning in English. Therefore, we cannot use that specification for it. Then you ask:
Was it generated by a conscious being, or by an algorithm? How much CSI, and dFSCI, and FSCO/I does it have?
From my post #228: It is equally obvious that it is made of correct English words. So, it is certainly part of the subset of strings which are made by English words. That is exactly the subset for which I have computed functional information in my OP. As the result was (in the Roy amended form for 500000 English words) 673 bits, we can safely exclude a random origin. (emphasis added; I will add that the text is much longer than 600 characters, therefore its lower threshold of functional information is much higher. For your satisfaction, I have computed it: 1787 characters; 2009 bits; always assuming 500000 English words). So, the question is: can this result be the outcome of an algorithm? The answer is: yes, but not by any natural algorithm, and not by an algorithm simpler than the observed result. IOWs, the only possible source is a designed algorithm which is more complex than the observed sequence. Therefore the Kolmogorov Complexity of the string cannot be lowered by any algorithm. How can I say that? It’s easy. Any algorithm which build sentences made by correct English words must use as an oracle at least a dictionary of English words. Which, in itself, is more complex than the poem you presented. Moreover, we can certainly make a further specification (DNA_Jock: is that again painting new targets which were not there before? Is that making the probability arbitrarily small?). Why? Because the poem has an acceptable structure in non rhymed verses. That would certainly increase the functional complexity of the string, but also the algorithm would be more complex (maybe with some advantage for the algorithm here, because after all the verse length is very easy to check algorithmically). However, the algorithm is always much more complex than the observed result, because at least of the oracle it needs. So, the conclusion is easy: the poem is certainly designed, either directly or through a designed algorithm. Many of these objections arise from the simple fact that you always ignore one of the basic points of my procedure, indeed the first step of it. See my post #15: “a) I observe an object, which has its origin in a system and in a certain time span.” So, in the end, the question about the algorithms can be formulated as follows: “Are we aware of any explicit algorithm which can explain the functional configuration we observe, and which could be available in the system and the time span?” So, if your system includes a complex designed algorithm to generate strings made by English words by a Dictionary oracle, then the result we observe can be explained without any further input of functional information by a conscious designer. gpuccio
Collin said: "I assume in your example, if it were an algorithm, it would choose words randomly from a list of words." That's what I'd like gpuccio to figure out and demonstrate with his method; whether my example is randomly generated by a computer algorithm or if it's designed by a conscious being. "Obviously the words themselves are english words that have individual meanings." Yes, to people who understand the meanings of English words. However, gpuccio's claims go well beyond that. "As an aside, gpuccio’s method is supposed to not have any false positives, but it may have a lot of false negatives." That's what he claims but it's easy to claim that when using English language that is obviously designed or already known to be designed as examples. Please keep in mind that when he claims there is dFSCI "in" ATP synthase or anything else and that it can be calculated, measured, or computed (All or just one?) he's actually basing that on the alleged dFSCI in an English language and numerical labeling and/or description of those things. Now that might be okay if his claims about dFSCI helped scientists to understand ATP synthase or anything else but I, and obviously many others, don't see how his claims and the claims of other IDists help. Reality
gpuccio: Here is the rugged landscape paper. It is very interesting. Couple of interesting points from the paper. They could evolve esterase from completely random libraries, meaning that functional proteins are fairly common in sequence space. The landscape is smooth, from minimal function up to about 40%. And that to increase this much further will probably require recombination, which they did not test. This latter point is consistent with findings from evolutionary computation. Zachriel
Collin, Well played, sir. Well played. :) Thank heavens I referenced to my pre-specification @183 DNA_Jock
I think I was asked above if "traditional" ATP synthases have sequence conservation among themselves. Of course they do. By definition! How else would you identify that a given protein is a "traditional" ATP synthase from its sequence? Do you get how circular this is? To you, the apicomplexian or N-ATPases or alternative ATP synthases are "very different complex molecule, made of many different protein sequences" What isn't well conserved, you disregard, then say: look how precisely specified this well-conserved group is! On the sequence level, here's what is happening: you go to a database, and pull 3 (or all) of all the F1 alpha units. They are annotated as such because a bioinfomatician has used an algorithm that recognizes conservation with other members in the group. Sequences in the gray area of 20-30% or less in common with another member in the group will excluded by the algorithm. So the "alternative" F1 alphas, the N-ATPases, the apicomplexian ones---likely not even considered in the alignment. In nature, they work. They do the job. So, "traditional" isn't Nature's specification--it is human grouping, a product of algorithmic lumping and splitting. If the question is what percent of sequence space makes a functional unit of an ATP synthase, you aren't answering it by drawing a bulls-eye on your "traditional" subset of sequences, which itself only represents one fitness peak (which I've demonstrated itself appears quite a bit broader than you put it). You can have a look yourself: http://mobyle.pasteur.fr/cgi-bin/portal.py#jobs::clustalO-multialign.C13886649652004 REC
DNA-jock, In 271, sounds like you are making a post-hoc specification. That's suspect. :) Collin
Aaargh: obviously, I meant: ALL post-hoc specifications are suspect. What I wrote originally @183 Aaargh. DNA_Jock
PaV, You are over-stating my position somewhat. I do not insist that specification be "pre-"; I do however warn that
ALL pre-specifications are suspect
and furthermore that
...you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.
When you say:
DNA_jock’s position is this: it is you, gpuccio, who are making this “specification.” He is wrong, because it is NOT gpuccio who is making the “specification,” it is Nature itself which ‘recognizes’ this specification—or else we wouldn’t even be talking about it. You, gpuccio, have only “recognized” what Nature has first “recognized.”
I must disagree. gpuccio had talked at some length about "ATP synthase" without any qualifiers. Now that he is aware of Nina's work, The specification has changed to the traditional ATP synthase. The enzyme hasn't changed. What it does hasn't changed. Nature hasn't changed. As I said @161: In light of Nina et al, gpuccio does two things: 1) He re-draws his circle so that it now excludes Alveolata, and renames the Walker circle “the traditional ATP synthase” 2) he draws a brand-spanking-new circle around the Alveolata bullet hole(s) because it is a “very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution”. [emphasis in original] Pre-specifying a "pattern" is notoriously difficult (just ask a statistician) and, as I understand it, SETI made some important pre-specifications about the frequency bandwidth of an 'interesting' signal. But I am pretty ignorant about SETI. DNA_Jock
Collin @263, You asked
How would you create a design-detecting method without testing it on known designs to see if it accurately detected designed artifacts?
I agree, it's a doozy. I don't see how you can. You need to be able to validate it against a known "truth standard". And even then, your validation is only as good as your truth standard. And you may have difficulty ascertaining the domain of candidates over which your test gives valid results. That was my point. Thus gpuccio can validate his sonnet-detector and (separately) validate his limerick-detector, but he cannot validate his protein-designer detector, a fact that he was graceful enough to admit. DNA_Jock
Collin I’m not claiming anything. I’m just saying what the test is supposed to do. No worries, I know it wasn't your claim. How do you know that dFSCI only works for items already known to be designed? That sounds like an article of faith. Because gpuccio told us he needs that info. In his language example he can't calculate the dFSCI unless he knows the string is an intelligible English phrase. If you give him symbols in a language he can't understand (i.e Chinese characters) he can't calculate dFSCI. Again, that makes his test pretty worthless for design detection. Adapa
Adapa, I'm not claiming anything. I'm just saying what the test is supposed to do. My point was that if gpuccio's test could not tell if it were designed for sure, it does not mean that it is not a useful test. A test that cannot eliminate all false-negatives can still be useful if it can eliminate all false-positives. How do you know that dFSCI only works for items already known to be designed? That sounds like an article of faith. Collin
Collin As an aside, gpuccio’s method is supposed to not have any false positives, but it may have a lot of false negatives. Since the method of calculating "dFSCI" only works for items already known to be designed claiming "no false positives in detecting design" is completely worthless. Adapa
Reality, I assume in your example, if it were an algorithm, it would choose words randomly from a list of words. Obviously the words themselves are english words that have individual meanings. As an aside, gpuccio's method is supposed to not have any false positives, but it may have a lot of false negatives. Collin
gpuccio, I don't agree that you answered my questions. Here's a repost with my questions in bold: gpuccio, I just don’t understand what you’re trying to prove. All I see is you claiming that some English text that is obviously designed or already known to be designed is designed. How does that demonstrate that IDists can calculate, measure, or compute (Which is the correct term?) CSI, dFSCI, FSCO/I, or FIASCO, and can verify the intelligent design in or of things that are not obviously designed and not known to be designed? And how does what you’re doing establish that CSI, and dFSCI, and FSCO/I are anything other than superficial labels? In regard to English text, what can you tell me about the text below? Is it a sonnet, or what? Does it have meaning? Does it have good meaning? If it has meaning or good meaning, what is it? Was it generated by a conscious being, or by an algorithm? How much CSI, and dFSCI, and FSCO/I does it have? Show your work. O me, and in the mountain tops with white After you want to render more than the zero Counterfeit: o thou media love and bullets She keeps thee, nor out the very dungeon Their end. O you were but though in the dead, Even there is best is the marriage of thee Brass eternal numbers visual trust of ships Masonry, at the perfumed left. Pity The other place with vilest worms, or wealth Brings. When my love looks be vile world outside Newspaper. And this sin they left me first last Created; that the vulgar paper tomorrow blooms More rich in a several plot, either by guile Addition me, have some good thoughts today Other give the ear confounds him, deliver’d From hands to be well gently bill, and wilt Is’t but what need’st thou art as a devil To your poem life, being both moon will be dark Thy beauty’s rose looks fair imperfect shade, ‘you, thou belied with cut from limits far behind Look strange shadows doth live. Why didst thou before Was true your self cut out the orient when sick As newspaper taught of this madding fever! Love’s picture then in happy are but never blue No leisure gave eyes against original lie Far a greater the injuries that which dies Wit, since sweets dost deceive and where is bent My mind can be so, as soon to dote. If. Which, will be thy noon: ah! Let makes up remembrance What silent thought itself so, for every one Eye an adjunct pleasure unit inconstant Stay makes summer’s distillation left me in tears Lambs might think the rich in his thoughts Might think my sovereign, even so gazed upon On a form and bring forth quickly in night Her account I not from this title is ending My bewailed guilt should example where cast Beauty’s brow; and by unions married to frogs Kiss the vulgar paper to speak, and wail Thee, and hang on her wish sensibility green Reality
DNA_Jock, How would you create a design-detecting method without testing it on known designs to see if it accurately detected designed artifacts? This discussion reminds me of a scientific method for determining authorship called "Stylometry." Apparently everyone who writes, leaves a statistically-recognizable "wordprint." The wordprint can identify the author of a document whose authorship is unknown if it can be compared with the known writings of candidate authors. This method was tested by having researchers determine the authorship of certain texts that had known authors to see if it came up with false positives. It did not. They then used the method on other writings, including anonymous Federalists Papers essays to determine authorship. Here is the wikipedia article: http://en.wikipedia.org/wiki/Stylometry Collin
gpuccio: This is the essence of DNA_jock's concern:
Yes ATP was getting synthesized before humans existed, but the specification “ATP synthase” was generated by humans AFTER the biochemical activity was delineated. And re-defined by you in the light of Nina’s work.
I almost get the impression that DNA_jock has, unlike countless many others, actually wrestled with Dembski's work, and likely his "No Free Lunch" presentation of ID. That is good. Very good, if true. The discussion is all about "specification," and whether it is "pre" or "post." He insists that it be "pre." Here he suffers from a fundamental misunderstanding of Dembski--if he has read him---in which he fails to understand that a "specification" relies on the recognition of a "pattern." How, then, can you make "specification" prior to recognizing the "pattern" it forms? DNA_jock's position is this: it is you, gpuccio, who are making this "specification." He is wrong, because it is NOT gpuccio who is making the "specification," it is Nature itself which 'recognizes' this specification---or else we wouldn't even be talking about it. You, gpuccio, have only "recognized" what Nature has first "recognized." Here's an analogy: The SETI observers "recognize" a "pattern" in some electro-magnetic signal they've received. From this "pattern," they decide that it is so "unnatural" (i.e., it falls outside normal patterns of EM transmissions--IOW, it is highly IMPROBABLE) that its origin is intelligent life outside of our planet, possibly our galaxy. Unless something is responsible for this "highly improbable" signal, then why, and how, did the SETI observers conclude that they had evidence of intelligent life beyond earth? Per DNA_jock, the SETI observers are doing this all "post-hoc," and therefore their conclusion is meaningless. The criteria for 'specified, complex information' is the 'independence' of the 'source' of the information from the 'decipher-er' of the information. As long as DNA_jock takes the position that Nature does not "specify" the information, then there is only one "source" and one "decipher-er," and it's you, gpuccio. So, DNA_jock, let us ask you directly: do you, or do you not, believe that Nature itself is the "source" of the information found in ATPase? PaV
Gp, Thanks for the paper - very cool. I know I have seen Fig 5 before, but I do not recall reviewing the body of the paper previously. I will definitely take a look. Tx again DNA_Jock
keith s:
The calculation adds nothing. Now, could you please point this out to gpuccio before he embarrasses himself further? He won’t accept it from me, but he might from you.
This is not a serious answer. I pointed out to you the importance and purpose of step #3 in Procedure 2. You're willfully ignoring it. Why should anyone here at UD take you seriously? PaV
DNA_Jock: I apologize if I have misunderstood that statement. Here is the rugged landscape paper. It is very interesting. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0000096 You say:
Most importantly, would a shorter answer to my question , “What truth standard do you plan on using to validate your protein-design detector?”, be “I don’t have one.”
let's make it: "Nobody has one, for any theory about protein origins". The point is, we have no direct evidence on how proteins originated. All the evidence, whatever the theory or paradigm we use, is indirect. gpuccio
gpuccio, You show signs of starting to think about landscapes. This is good. You do seem to have mis-understood my point about the difference between "never been found" and "did not survive". You were claiming that nearby optima must be rare or inaccessible because they (according to you) "have never been found". I pointed out that you cannot draw any conclusions about whether they have been found: all we can observe is the ones that have survived. Parodying my position as you did here: "Then, if I ask why we don’t see big traces of all those independent local optima and of their independent optimization, or that in the ragged landscape paper the optimal local optimum could not be found except in the wildtype, suddenly they “have never been found” or “have not survived”. " is inaccurate. You may be confusing "have never been found" during the course of evolution with "have never been found, i.e. observed" by biologists. Do you have the citation for the retrieval of viral infectivity paper? I would love to read it. Most importantly, would a shorter answer to my question , “What truth standard do you plan on using to validate your protein-design detector?”, be "I don't have one." DNA_Jock
DNA_Jock: "What truth standard do you plan on using to validate your protein-design detector?" The gold standard is used to validate a procedure, and then the procedure is used in cases where we don't know the gold standard value. Otherwise, it would be useless, and keith would be right. In the problem of origins, I think that nobody has an independent "truth standard". For obvious reasons, we try to understand what we observe, but we have no videos of how it happened on Youtube. The design detection procedure is validate with human artifacts. It has 100% specificity in all cases where we can know independently the origin. And the procedure is the same: it does not depend on sonnets or limericks: the functional definition can vary, but if a sufficiently high complexity can be linked to the definition, any definition, design is always the cause. The application to artifacts that, if confirmed such, are not human, is an inference by analogy. A very strong one, and a very good one. This is the only argument that, in the end, each person individually can opt for. Something like: "I understand the procedure, and it is correct, but I will not do that final jump of the inference by analogy". OK, I can accept that. Let's call it "the Fifth Amendment in science". :) gpuccio
DNA_Jock: "Thank you REC @ 209 for running the alignment on a decent number of ATPases. 12 residues are 98% conserved. I suspect he might have been better off going with his Histone H3 example, but H3 doesn’t look complicated. Re your reply to him at 232. I won’t speak for REC, but I am happy to stipulate that extant, traditional ATP synthase is fairly highly constrained; you could return the favor by recognizing that this constraint informs us about the region immediately surrounding the local optimum, nothing more. Rather , I think the problem is with your cherry-picking of 3 sequences out of 23,949 for your alignment, which smacks of carelessness. Why not use the full data set?" This is more important, so I will try to be more precise. Durston computes a reduction of uncertainty for each aligned position, on a big number of sequences. Then he sums the results of all positions to get the total functional constraint. I am happy that you admit that extant, traditional ATP synthase is fairly highly constrained. That is my point. And I am happy to return the favor by recognizing that this constraint informs us about the region surrounding the local optimum. But you should admit that there are very different functional restraints for different functional "local optimums". And I don't really agree with your "immediately". Such a high level of conservation implies a steep separation of the peak in a non functional, or at most scarcely functional valley. The discussion about local optimums could lead us very far. One of my favorite papers is the one about the rugged landscape, where the authors conclude that local optimums for the particular function they are testing (the retrieval of viral infectivity in the lab) are so sparse that only starting from a random library of about 10^70 sequences the optimal local optimum (the wild type) could be reasonably found. Now, if local optimums are so sparse, and I do believe they are, how can they be so numerous as you seem to believe, so that their brute number could tamper the probability barriers? And if there are so many, and the search is really random, why don't we see such a variety? Why in the ragged landscape paper the wildtype is by far the best and most functional? If local optimums are distant, and evolve independently by independent lucky hits, we should see a lot of them. We certainly see many of them, but I must remind you that in your post you suggested: "Tough to say which local optima have never been found, when all we have to go on is the ones that survived. 2^1000 seems possible, but the number could be a lot higher." The behavior of these local optima is very strange, in darwinist rhetorics. When they are needed to improve probabilities, there are certainly a lot of them. A lot a lot. I had suggested 2^1000 as a mad hyperbole, but that was not enough for you. A lot higher! Then, if I ask why we don't see big traces of all those independent local optima and of their independent optimization, or that in the ragged landscape paper the optimal local optimum could not be found except in the wildtype, suddenly they "have never been found" or "have not survived". OK. for the moment let's leave the local optima alone. I have great expectations for histone H3. You say that it "doesn’t look complicated", but you are probably aware of its growing importance in understandimg epigenetic regulation. That can well be a strong functional constraint for the sequence. Now, I will return the favor again, and I am happy to admit that my "cherry-picking of 3 sequences out of 23,949 for my alignment" is not a rigorous scientific procedure. You say that it "smacks of carelessness". But that is not the real question. There is no doubt that the full procedure is to use the full data set and apply the Durston method (or any other method which can be found to work better and to be more empirically supported). So, why did I align three sequences and take only the identities? As I have clearly stated many times, that is only my "shortcut". But, I believe, a honest one. I had not the data from Durston about those two chains, but I was, and still I am, fascinated by their high conservation and very old age, and by their very special function in an even more complex biological machine. So, I have done, explicitly, a simple tradeoff: I have taken only one sequence for each of the three kindoms (and the human one for metazoa) and I have aligned them. And I have given explicitly the results. Now, it should be clear that when I only compute the identities in that alignment, giving 4.l3 bits for each one, I am certainly overestimating the absolute identities (obviously, on 23949 sequences, it is much more likely to have some divergence, and I must say that 12 residues with 80% conservation on the whole set looks rather stunning). But I am also not considering all the rest: the similarities, which still I could have emphasized in the basic alignment in BLAST, and all the other restraints which the Durston method can detect by comparing the frequencies of each AA at each site in the sample. IOWs, I have badly underestimated on all other sides. In my simple shortcut, I attribute 4.3 bits for each absolute conservation (378), but I attribute nothing for all the other positions, as though they were completely random, which is certainly not the case. On a total of 1082 positions (in the two chains), I have therefore vastly underestimated the fits of 704 AA positions, setting them to 0. I have done that for the sake of simplicity, because I have not the time and the tools to make complex biological analyses (I am not a biologist, only a medical doctor), and because going too deep in the details of biology, especially when writing a general OP on an important general concept, is not the best option. So, to sum up: REC's comments are correct, but they do not paint the right scenario. If his purpose is only to attack my "carelessness", OK, that's fine. But if he really suggests that my argument about the high conservation of those two chains is not realistic, I have to disagree. Those two chains are extremely conserved, even if compared with many other conserved sequences. And I am really confident that, if we apply the Durston method to the full set proposed by REC, the result will not be very different from my 1600 bits for both sequences, maybe higher. I could try to do that, I don't know if I can. We will see. gpuccio
I note that you had no response to my commentary regarding your problems with Fisherian testing, but instead chose to focus on comments I made, as an aside, to humor you, about a text-detection procedure that I have always maintained is a deeply flawed analogy. I realize that I may have confused you, when I referred to your sonnet-detector under problem #4, but Problems A, B, C, (and even #4) refer to your protein-design-detector. But I’ll keep going with your flawed analogy, because it is irrelevant irreverent fun. You make a big deal out of the fact that you are validating your text-detection procedure. As Analytical Method Validation Protocols go, yours leaves something to be desired, but , given my view of the relevance of sonnet-detection to proteins, I will let that slide, and accept that you have been able, by blind-testing known sonnets and known non-sonnets, to get approximate values for the specificity and sensitivity of your sonnet-detection procedure. Whether it is robust has not been tested. The key point here, which you admit, is that you have to make use of a “truth standard” that allows you distinguish sonnets from non-sonnets, quite independent of your detector. This is an essential part of your method for validating your sonnet-detector. Cool. Now, if you want to convert your sonnet-detector into a limerick detector, you will have adjust some parameters, based on what you know about limericks. Then, to validate your limerick-detector, you will need a “truth standard” that allows you to independently distinguish limericks from non-limericks. Likewise for haiku’s etc., etc. What truth standard do you plan on using to validate your protein-design detector? DNA_Jock
Shakespeare had an extensive dictionary, knowledge of grammar, rhyme, scansion, and verse structure; not to mention an understanding of what people enjoy, and of the human condition. Can you quantify the amount of additional 'information' in a Shakespearean sonnet that is not found in the background knowledge? Zachriel
DNA_Jock: "1) you are equating the ratio of the size of the target space and the size of the total space with a probability. This assumes, incorrectly, that all members are equiprobable." See previous answer. It is true, however, that in my OP I assume a uniform distribution for the characters. "2) Since you are allowing repetition in your 120 words, then about one text in 1,700 will have word duplication. You need to adjust your n! term when this happens. Unlike error (1), this one is “not material” That only means that some permutations will be repeated. That makes the target space even smaller (not much). As I have computed, anyway, a lower threshold for functional complexity, I can't see how that is a problem. "You need to fix error 1 before you can claim to have calculated dFSCI. Good luck." Do you really believe that? Error 1 is not material too. But even if it could increase the probability of the target space a little, do you really believe that such an adjustment would compensate for my choice to use the target space of all combinations of English words instead of the target space of all the combinations of English words which have good meaning in English? OK, you have tried. gpuccio
DNA_Jock: "A) You have not adequately described your null, the so-called “Chance hypothesis”" I have. I have assumed a text generated by a random character generator. An uniform probability distribution is the most natural hypothesis, but it is not necessary. Any probability distribution of the characters will do. Do you want to adjust the probability of each single character according to its probability in English? Be my guest. It would be added information, but OK, I am generous today. Now your piece of English with good meaning is nearer. Are you happy? :) gpuccio
DNA_Jock: Good thoughts, as usual. But I have to disagree on may things. "Oh dear. Post-hoc specifications are suspect because they work perfectly." No. Follow me. I apply the specification "having a good meaning in English". And I make the computation to exclude possible random results. This is a procedure, well defined. I have generated the specification after seeing the sonnet, therefore it is a post-specification. Now, I test my procedure by applying it to ant sequence of 600 characters. I Easily detect those which have good meaning in English, and I infer design for them. Please, not that I am applying my model to new data, not only to the original sonnet. IOWs, I am testing my model and validating it. Now, two things are possible. a) My model works. When I compare the results of my inference with the real origin of the strings (which is known independently, and was not known to me at the time of the inference), I see that all my positive inferences are true positives, there is no false positive, and ny negative inferences are a mix of true negatives and false negatives. b) My model does not work, and a lot of false positives are found among my inferred positives. It's as simple as that. What has happened up to now? More in next post. gpuccio
fifthmonarchyman: Exactly how much of a sonnet is new CSI and how much is borrowed from the background is a great question. Common ground! Shakespeare exhibit a huge amount of background knowledge, of what we call the human condition. fifthmonarchyman: However I’m pretty sure that most folks would say that at least a small amount of Shakespeare’s work was original as apposed borrowed from his environment. We would say a great deal was original to Shakespeare. fifthmonarchyman: If an algorithm can duplicate the pattern by any means whatsoever as long as it is independent of the source string then I discount the originality of the string. A random sequence is original by that definition, and even harder to duplicate. gpuccio: I infer design simply because this is a piece of language with perfect meaning in english Seems rather parochial and subjective. Zachriel
Reality at #245: "gpuccio, are you going to answer my questions about the text I posted above?" I believed that my post #228 was an answer. gpuccio
Biological specification refers to function. We don't care what you call it because we understand that your position cannot account for it regardless. And if you don't like our null we happily await your numbers. We have been waiting for over 100 years... Joe
Gpuccio @ 187
“ALL post-hoc specifications are suspect.”
Except when they work perfectly.
Oh dear. Post-hoc specifications are suspect because they work perfectly. You are shooting yourself in the foot here.
IOWs, we are not trying to sell a drug at the 0.05 threshold of alpha error. I am afraid that you are completely missing the point.
Actually, the analogy is spot on. You are applying Fisherian testing to your data. You have at least three problems. What you and Dembski are doing is “formulating” (and I use the word loosely) a null hypothesis, examining a data set, and asking “what is the probability of getting a result THIS extreme (or more extreme) under my null?” If the probability is below an appropriate threshold, then the null is rejected. Problems A and B are related. A) You have not adequately described your null, the so-called “Chance hypothesis” B) some of you (e.g. Winston Ewert) are performing multiple tests, considering various “chance hypotheses” sequentially, rather than as a whole. I’ve made fun of this previously. Take-home is that, in order to perform the test and arrive at a p value, you need to be able to describe the expected distribution of your metric under the global “Chance Hypothesis”, which includes the effects of iterative selection. One can debate whether this is possible or not, but it is abundantly clear that no-one has even tried. You are indulging in Problem C: you are adjusting your metric after you have seen the data. This is the post-hoc specification. It renders the results of your calculations quite useless. By way of illustration, if you give me a sufficiently rich real-world data set for two groups of patients, X and Y, I can demonstrate that X is better than Y. AND I can demonstrate that Y is better than X, so long as I am allowed to mess with the way “better” is measured. Hence the FDA & EMA's insistence on pre-specified statistical tests. No, four problems! Amongst your prob… I’ll come in again. There’s also a subtle issue around the decision to do a test. If potentially random text is flowing across your desk and you are sitting quietly thinking, “Not sonnety, not sonnety, not sonnety, OOOH! Maybe sonnety, I will test this one!” then you have to be able to model the filtering process, or you’re screwed.
The example of the limerick is he same as saying that I should consider also the probability of chinese poems. As I have explained, at those levels of improbability those considerations are simply irrelevant.
“Yes”, and “Sez you”, respectively
My statement has always been simple: the procedure works empirically, as it is, with 100% specificity. In spite of all your attempts to prove differently.
I have never made any attempt to prove that your procedure does not work “empirically”. With appropriate post-hoc specifications, it should work every time. On anything.
Then, if your point is simply to say that the space of proteins is different from the space of language, that is another discussion, which we have already done and that we certainly will do again. But it has nothing to do with logical fallacies, painting targets, and making the probability arbitrarily small. IOWs with the methodology. IOWs, with all the wrong arguments that you have attempted against the general procedure.
Well I do think they are different, but you asked a specific question at 193 “Is my math wrong?”, so I’ll humor you once more. Two errors: 1) you are equating the ratio of the size of the target space and the size of the total space with a probability. This assumes, incorrectly, that all members are equiprobable. 2) Since you are allowing repetition in your 120 words, then about one text in 1,700 will have word duplication. You need to adjust your n! term when this happens. Unlike error (1), this one is “not material” You need to fix error 1 before you can claim to have calculated dFSCI. Good luck. Thank you REC @ 209 for running the alignment on a decent number of ATPases. 12 residues are 98% conserved. I suspect he might have been better off going with his Histone H3 example, but H3 doesn’t look complicated. Re your reply to him at 232. I won’t speak for REC, but I am happy to stipulate that extant, traditional ATP synthase is fairly highly constrained; you could return the favor by recognizing that this constraint informs us about the region immediately surrounding the local optimum, nothing more. Rather , I think the problem is with your cherry-picking of 3 sequences out of 23,949 for your alignment, which smacks of carelessness. Why not use the full data set? P.S. I did enjoy kf’s treatise at 223 on how NOT to build an amplifier. Gripping stuff. DNA_Jock
Reality- Biological information, as defined by Crick, exists. Your position cannot account for it. And we understand that bothers you. Joe
gpuccio, are you going to answer my questions about the text I posted above? Reality
kairosfocus said: "Personalities via loaded language only serve to hamper ability to understand..." kairosfocus, FOR RECORD, rarely, if ever, have I encountered a person who is as hypocritical and sinister as you are. Your language is thoroughly "loaded" with "personalities". You constantly accuse Keith S and everyone else who disagrees with you or even just questions you of being evil, radical Marxists, liars, and a long list of other despicable things. Your insulting, sanctimonious, malicious, libelous accusations are FALSE and YOU are in dire need of CORRECTION. Sixty of the best with Mr. Leathers would be a good start in that correction. Reality
KS, You have unfortunately confirmed my concern. I will just note a few points for onlookers: 1 --> Personalities via loaded language only serve to hamper ability to understand; this problem and other similar problems have dogged your responses to design thought for years, consistently yielding strawman caricatures that you have knocked over. 2 --> You will kindly note, I have consistently called attention to the full tree of life which as Smithsonian highlights, has OOL at its root. This points to the island of function phenomenon, and that FSCO/I includes that connected with the von Neumann Self Replicator in the cell along with gated encapsulation, protein assembly, code use, integrated metabolism etc. 3 --> Thus, to the need to first explain reproduction from cellular level up before embedding in claimed mechanisms capable of originating body plans. Starting with the first one of consequence, the living cell. 4 --> So also, to the pivotal concern of design theory, to get TO islands of function and how to effectively do so: (a) sparse Blind Watchmaker search vs (b) intelligently directed configuration. Of these, only b has actually been observed as capable of causing FSCO/I. 5 --> Once we have ourselves such, there is no problem in a first life form diversifying incrementally and filling niches in an island of function. chance variation and differential reproductive success and culling [what the misnomer "natural selection" describes . . . nature cannot actually make choices] leading to descent with incremental modifications are fine in such a context. Most of the time, probably, such differential success will only stabilise varieties already existing. 6 --> The onward problem is to move from such an original body plan to major multicellular body plans by blind watchmaker mechanisms, because of the island of function effect of multi-part interactive organisation to achieve relevant function and the consequent sharp constraint on possible configs relative to possible clumped or scattered arrangements of the same parts. Multiplied by sparseness of possible search, the needle in haystack exploration challenge, whether by scattershot or dynamic-stochastic walk with significant randomness. 7 --> That is, once you hit the sea of non-function, you have no handy oracle to guide you on blind watchmaker approaches and you have a non-computable result on resource inadequacy. Body plan origin and more specifically, origin of required FSCO/I by blind Watchmaker mechanisms have no good analytic or observed experience grounds. 8 --> Origin of FSCO/I by intelligently directed configuration aka design is a routine matter, and we have in hand first steps of bio-engineering of life forms. Just yesterday I was looking at a policy document on genetic manipulation of foods. 9 --> So, accusations of dodging NS on your part are a strawman tactic. 10 --> Likewise, I outlined how models are developed and validated, underscoring that the Chi_500 model: Chi_500 = I*S - 500 functionally specific bits beyond the sol system limit . . . is such a model, developed in light of the Dembski 2005 metric model for CSI, and exploiting the fact that logs may be reduced, yielding info metrics in the case of log probabilities. The actual validation is success in recognising cases of design, whilst consistently not generating false positives. False negatives are no problem, it is not intended to spot any and all cases of design . . . the universal decoder wild goose chase. 11 --> I know, you and TSZ generally wish to fixate on debating log [p(T|h)] -- note the consistent omission in your discussions that we are looking at a log-probability metric, i.e. an informational one (and relevant probabilistic hyps as opposed to any and every one that can be dreamed of or suggested or hyperskeptically demanded would be laughed out of court in any t/comms discussion as irrelevant) -- in the Dembski expression. I simply point out by referring to real world dynamic-stochastic cases, that abstract probabilities may often be empirically irrelevant, as there are limits to observability in a sol system of 10^57 atoms and 10^17 s, or the observed cosmos extension. 12 --> As has been repeatedly pointed out and dismissed or ignored, a search of a config space of cardinality W will be a subset and the Blind Watchmaker Search for a Golden Search (S4GS, a riff on Dembski's S4S) . . . and remember search resource sparseness constraints all along . . . will have to address the power set of cardinality 2^W. And that can cascade on, getting exponentially worse. 13 --> So, as has been repeatedly pointed out and ignored, the sensible discussion is of reasonably random searches in the original space, with dynamic-stochastic patterns and sparseness, in the face of deeply isolated islands of function. Such searches are maximally unlikely to succeed. On average, they will perform about as well as . . . flat random searches of the space, which with maximal likelihood, will fail. No surprise, to one who ponders what is going on. 14 --> Where, such gives us a reasonable first estimate of the probability value at stake, if we want to go down that road. P(T) = T/W, starting with either scattershot search or arbitrary initial point dynamic-stochastic walks not reasonably correlated to the structure of the space. No S4GS need apply, in short. 15 --> This can then reckon with the relevant facts that in a computer memory register there is no constraint on bit chains, we can have 00 01 10 or 11. In D/RNA we can have any of ACGT/U following any other. Confining ourselves to the usual, correctly handed AA's, any of the 20 may follow any other of the 20. 16 --> So, reasonably flat distributions are generally reasonable and if we go on to later patterns not driven by chaining but shaped by the after the fact of the DNA code need to be a folding, functioning protein in a cell context, variations in frequency and flexibility in AAs in the chain can be and are factored in in more sophisticated metrics that exploit the avg info per symbol measure - SUM pi log pi. This was discussed with you and other objectors only a few days ago here at UD. 17 --> Once we start with say a first organism with say 100 AAs per protein avg [Cy-C as model], and at least 100, we see coding for 10,000 AAs and associated regulatory stuff and execution machinery as requisites. Self replication requires correlations between codes and other units. At even 1 bit or a modest fraction thereof per AA, the material point remains, the cell is well past FSCO/I thresholds and is designed. 18 --> Just the digitally coded FSCI -- dFSCI -- in the genome is well beyond the threshold. The FSCO/I in the cell is only reasonably explainable on design. The codons just for 10,000 AAs would be 30,000 [which is probably an order of magnitude too low.] 19 --> And, to go on to novel body plans, reasonable genomes run like 10 - 100+ millions dozens of times over. Not credible on Blind Watchmaker sparse search. So, while it is fashionable to impose the ideologically loaded demands of lab coat clad evolutionary materialism and/or fellow travellers even written into question-begging radical redefinitions of science and its methods, the message is plain. Absent question begging the reasonable conclusion is that the world of life is chock full of strong, inductively well warranted, signs of design, with FSCO/I and its subset dFSCI, at their heart. KF kairosfocus
Dembski’s problems are that 1) he can’t calculate P(T|H), because H encompasses “Darwinian and other material mechanisms”;
What a joke! Evolutionists can't provide the probabilities and they think that is our problem?! Evolutionists can't muster a methodology and they think that is our problem?! Amazing Joe
keith s:
The correct question is “Could this sequence have been produced by random variation plus selection, or some other ‘material mechanism’?”
And keith's position cannot answer that question and he thinks that is a poor reflection on ID. Also natural selection doesn't come into play until there is a product that can be "seen" by nature. That is the problem-> unguided evolution can't even muster testable hypotheses. Joe
keith s: This is a blog. My OPs, which are relatively recent, are an attempt at systematizing more my arguments. I have not yet written an OP on the computation of dFSCI, I am still at its definition. It will come. However, as you can see, I am ready to discuss all aspects when prompted. If you accuse me of not being able to discuss everything each time systematically, well, I am certainly culpable for that. And I maintain what I have said: I am perfectly fine with that acknowledgement of my small original contribution. What counts are the ideas, not the people who express them. May I quote Stephen King? "It is the tale, not he who tells it" (Different Seasons) gpuccio
keiths:
Thus, your contribution was nothing more than inventing an acronym for an old and well-known probability calculation.
gpuccio:
I am perfectly fine with that.
keith s
gpuccio,
I have always discussed pre-specification for years. Just check.
And:
I have always included a discussion of the Kolmogorov complexity in my detailed discussions about dFSCI. You can check.
Why are you asking people to track down your comments all over the Internet? These things matter, so include them in the description of your procedure. Show some discipline, write up a complete description of your procedure (like a scientist would), and keep it somewhere handy so that you can paste it into discussions like these. Instead, you're posting half-assed descriptions that don't make sense, and when someone points out an error, you say "Oh, I've covered that elsewhere. You can check." Show some consideration for your readers. If it matters to your procedure, cover it in the procedure description. keith s
keith s: "As you well know, the “RV part” is the only part that factors into the number of bits of dFSCI. You neglect selection, which makes your number useless. KF has the same problem — see my comment above." I don't neglect selection. I discuss it separately, on its own merits. And in detail. "What’s worse, the “RV part” is a standard calculation that was understood by mathematicians long before you were born." I don't pretend that I have invented new mathematical methods. I have applied the following: a) Calculation of the number of combinationa with repetitions for n, k. b) Calculation of the number of permutations of a sequence. c) Simple algebric operations, known to all to a specific context and to specific ideas. "Thus, your contribution was nothing more than inventing an acronym for an old and well-known probability calculation." I am perfectly fine with that. Maybe also to try to discuss some points with some precision. But nothing really original. "And you wonder why scientists laugh at ID?" Yes. But I accept that others can have a sense of humor different from mine. gpuccio
Me_think: "GP has calculated the dFSCI Shakespeare sonnet, so ‘sonnets’ in the context of this thread is Shakespeare sonnets" No. Wrong. I have taken a Shakespeare sonnet as an example, just to give a good face to the concept. but I have never specified the sonnet as "written by Shakespeare". That would be foolish. Look at the OP: " I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in english (OK, ancient english)." So, being of Shakespeare has never been an issue. Even in my more restricted specifications, I referred to being in rhymed verse and then to being a sonnet in English (for which KF's definition is perfect). gpuccio
gpuccio, to Me_Think:
Because the non design explanation of an observed functional configuration can be based on random variance, or necessity algorithms, or both. In any case, the RV part, either alone or in the context of a mixed algorithm, must be analyzed by dFSCI or any similar instrument, because it is based on probability.
As you well know, the "RV part" is the only part that factors into the number of bits of dFSCI. You neglect selection, which makes your number useless. KF has the same problem -- see my comment above. What's worse, the "RV part" is a standard calculation that was understood by mathematicians long before you were born. Thus, your contribution was nothing more than inventing an acronym for an old and well-known probability calculation. And you wonder why scientists laugh at ID? keith s
keith s: "What gpuccio should have done in his procedure, but failed to do, was to limit the Kolomogorov complexity of the algorithms considered." I have always included a discussion of the Kolmogorov complexity in my detailed discussions about dFSCI. You can check. I have included a brief discussion in my answer to Reality at #228 (you can believe it or not, I had not yet read your post about that. I am going in order). So, please relate to that. gpuccio
keith s: "Gpuccio was sloppy in not excluding this sort of algorithm, but as I said, I’m giving him a pass. He’s got bigger problems than that to deal with." Let me understand: are you saying that an algorithm can print a sequence it already knows? Amazing. This is even better than Methinks it's like a weasel. If you meant other things, please explain. gpuccio
REC: I referred in that post to no errors found in the computation in the OP. I am well aware of your biological arguments. Just as a first comment to what you say: are you really suggesting that there is scarce conservation in that family? Sure, if you align 23949 you will have more variance. But then you must use at least the Durston method, with correct methodology, to detect the level of functional conservation. With three sequences I was making the simple argument that those chains are highly restrained. Are you saying that it is not true? Have you compared that result with other similar results, even with three chains, for other proteins which are much less conserved, or at all unrelated? So I ask again: are you saying that those chains are not highly conserved in that family? gpuccio
kairosfocus #223, Nowhere in that logorrheic mess do you address the actual issue I raised earlier:
KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it.
Please, no more thousand-word tap dances. Address the issue. keith s
keith s: "Gpuccio has taken a fatally flawed concept — CSI — and made it even worse." So, I am creative after all! :) gpuccio
keith s: "(I’m giving gpuccio a pass on the fact that there is always an algorithm that can produce any finite sequence, regardless of what it is. He’s having enough trouble defending dFSCI as it is.)" Very generous, but not necessary. Please look at my answer to Reality at #228. gpuccio
Reality at #202: Thank for your contribution, which allows me to clarify a couple of important points. You say: "All I see is you claiming that some English text that is obviously designed or already known to be designed is designed. " Ah! But this is exactly the point. a) "That is obviously designed" is correct, but the correct scientific question is: why is that obvious? And is that "obvious" reliable"? My procedure answers that question, and identifies design with 100% specificity. b) "or already known to be designed" is simply wrong. My procedure does not depend in any way from independent knowledge that the object is designed: we use independent knowledge as a confirmation in the testing of the procedure, as the "gold standard" to build the 2by2 table for the computation of specificity and sensitivity (or any other derived parameter). Please, see also this from my post #37: "Me_Think at #644: “gpuccio explained that dFSCI doesn’t detect design, only confirms if a design is real design or apparent design.” I don’t understand what you mean. dFSCI is essential to distinguish between true design and apparent design, therefore it is an essential part of scientific design detection. If you are not able to distinguish between true design and apparent design, you are making no design detection you are only making recognition of the appearance of design, which is not a scientific procedure because it has a lot of false positives and a lot of false negatives. So, just recognition of the appearance of design is not scientific design detection. On the contrary, dFSCI eliminates the false positives, and design detection becomes a scientific reality. Therefore, dFSCI is an essential part of scientific design detection." Regarding you poetry, it is rather simple. The piece obviously has no good meaning in English. Therefore, we cannot use that specification for it. It is equally obvious that it is made of correct English words. So, it is certainly part of the subset of strings which are made by English words. That is exactly the subset for which I have computed functional information in my OP. As the result was (in the Roy amended form for 500000 English words) 673 bits, we can safely exclude a random origin. So, the question is: can this result be the outcome of an algorithm? The answer is: yes, but not by any natural algorithm, and not by an algorithm simpler than the observed result. IOWs, the only possible source is a designed algorithm which is more complex than the observed sequence. Therefore the Kolmogorov Complexity of the string cannot be lowered by any algorithm. How can I say that? It's easy. Any algorithm which build sentences made by correct English words must use as an oracle at least a dictionary of English words. Which, in itself, is more complex than the poem you presented. Moreover, we can certainly make a further specification (DNA_Jock: is that again painting new targets which were not there before? Is that making the probability arbitrarily small?). Why? Because the poem has an acceptable structure in non rhymed verses. That would certainly increase the functional complexity of the string, but also the algorithm would be more complex (maybe with some advantage for the algorithm here, because after all the verse length is very easy to check algorithmically). However, the algorithm is always much more complex than the observed result, because at least of the oracle it needs. So, the conclusion is easy: the poem is certainly designed, either directly or through a designed algorithm. Many of these objections arise from the simple fact that you always ignore one of the basic points of my procedure, indeed the first step of it. See my post #15: "a) I observe an object, which has its origin in a system and in a certain time span." So, in the end, the question about the algorithms can be formulated as follows: "Are we aware of any explicit algorithm which can explain the functional configuration we observe, and which could be available in the system and the time span?" So, if your system includes a complex designed algorithm to generate strings made by English words by a Dictionary oracle, then the result we observe can be explained without any further input of functional information by a conscious designer. I hope that is clear. gpuccio
Me_Think: "I don’t think Nature cares if a person says it is restricted to do something because he has a flawed algorithm that restricts it from doing something." Now that we know that you are the Oracle for Nature, why bother making science? Stay available, please. gpuccio
keith s: "Sounds like a new rule that you didn’t include in your original procedure. It’s a longstanding bad habit of yours to keep changing your argument in the middle of discussion without acknowledging that you are changing it. Also, drc466 need not use the sequence to specify itself. He can prespecify the target as “the winning numbers for this lottery, whatever they turn out to be.” You know the size of the target, and you know the size of the search space. The ratio is tiny. You’ll get a false positive." :) You really try all that you can, don't you? I have always discussed pre-specification for years. Just check. It's not important to me, because I never use it in any useful context.I have always said that it perfectly legitimate to use a sequence to specify itself, but only as a pre-specification. It expresses the probability of finding that specific sequence again, and the target space is 1. But the sequence bears no functional information, except for the fact that it is in your hands and you can look at its bits. Even a child would understand that. Obviously, "the winning numbers for this lottery, whatever they turn out to be", is a correct specification. It can be used as a post-specification. And it has zero functional complexity: any number is in the target space, because all numbers, if extracted, will be the winning number. Being extracted is in no way connected to the information in the sequence, unless the lottery is fixed. In this case, indeed, a pre-specification of the result which is extremely improbable is a clear sign to infer that the lottery is fixed, that any judge would accept. Unless you believe in lottery precognition (which would have its advantages). False positive? Bah! Do you even think for a moment before posting? gpuccio
Me_Think: "If you also have to check for algorithms which can write sonnets, why bother with dFSCI calculations? You could see if you are aware of algorithms which can produce sonnet or whatever you are examining and if there are none, you can infer design. Why calculate dFSCI?" Because the non design explanation of an observed functional configuration can be based on random variance, or necessity algorithms, or both. In any case, the RV part, either alone or in the context of a mixed algorithm, must be analyzed by dFSCI or any similar instrument, because it is based on probability. That should be very easy to understand. I am really amazed at the insistence with which you and others worry about my "bothering". I understand it's for my sake, but please, relax! :) gpuccio
MT: Someone posted above what is obviously not a Sonnet. KF kairosfocus
KS: For record -- at this stage, with all due respect but in hope of waking you up -- I no longer expect you to be responsive to mere facts or reasoning, as I soon came to see committed Marxists based on their behaviour, back in student days. In that context the self-stultifying circles in your retorts to self-evident first principles of right reason, I find to be diagnostic and of concern. Now, on your attempted talking points of deflection and dismissal of the FSCO/I metric model I developed (as opposed to showed as a theorem derived in the Geometric QED sense) from Dembski's one, in the context of discussions involving VJT, Paul Giem and myself in response to P May wearing the persona Mathgrrl: Chi_500 = I*S - 500, functionally specific bits beyond the Solar System threshold of complexity Just for reference, in Sect A my always linked note you will see a different metric model that goes directly to FSCI values by using info values and two multiplied dummy variables, one for specificity and one for complexity beyond a relevant threshold. That too does the same job, but does not underscore the point that the Dembski model is an info beyond a threshold model. Which replies implicitly to a raft of dismissive critiques. Perhaps, you are unaware of my Electronics background, which is famous for the many models of Transistor and Amplifier action that can do the same job from diverse perspectives. And my favourite is to take an h parameter model and simplify, to where we have hie driving a dependent perfect current source with an internal load and an external one, both shunted to signal ground through the power supply. Weird and mystifying at the first, but very effective until parasitic capacitances have to come into play, whereupon, go for a simplified hybrid pi, until you reach points where wires need to be modelled, and you need to turn everything into waveguides. At which point, go get yourself some heavy duty computational simulations. (Of course, nowadays, we have SPICE fever, with 40+ variable transistor models to cloud the issue. If that sounds like the problems with Economics, you betcha! For me, if it is useful to take Solow, modify with a Human Capital model and spot linking relationships that speak to real world policy challenges, that has done its day's work. As in, tech multiplies labour but depends on lagged investments in human capital to bring a work force to a point of being responsive to the tech, in an era where the 9th grade edu that drove the Asian Miracle is not good enough anymore. Then, we see the investment challenge faced by the would be investor, Hayek's long tail of the temporal/phase structure of investment, malinvestment (perhaps policy induced), instability amplification and roots in a community envt. Which, hath in it the natural, socio-cultural and economic. Thence, interface sectors, on natural resources & hazards and their management, brains as natural resource thus health-edu-welfare issues, and culture of governance vs government institutions and policy making all supporting the requisite pool of effective talent. No need to create a vast body of elaborate pretended Geometric proofs on sets of perfect axioms, reasonable, empirically relevant and supported is good enough for back of the Com 10 envelope Gov't work, what really rules the world. I trust you can catch the philosophy of modelling just outlined. Models were made for man, and not man for models. And don't fool yourselves that just because you can come up with dismissive objections you can go back to your favourite un-examined models that sit comfortably with your preferred worldview. In reality we all are going to be tickling a dragon's tail in any case and should know enough to do so with fear and trembling. And yes the echo of Feynmann's phrase is intended.) The proper judgement of a model is, effectiveness, which is in the end an inductive logic exercise. And so models can be mixed, matched and worked with. Take the Dembski 2005 metric model, carry out the logging operation on its three components, apply the associative rule and see that we have two constants that may be summed to form a threshold value. Note the standard metric of information, as a log metric. Then, note that on reasonable analysis, subsystems of the cosmos may be viewed as dynamic stochastic processes that carry out in effect real world Monte Carlo runs that will explore realistic (as opposed to far-fetched) possibilities . . . think about Gibbs' ensemble of similar systems. It is reasonable to derive a metric model of functionally specific info beyond a threshold, and test it against the base of observable cases. Similarly, to analyse using config space concepts and sampling, by randomness [broadly considered] including dynamic-stochastic processes including in effect random walks with drift (cf. a body of air being blown along, with the air molecules still having a stochastic distribution of molecular velocities and a defined temperature). Notice relevant utter sparseness of possible sampling, whether scattershot or random walks from arbitrary initial conditions makes but little difference. Compare to 10^57 atoms of our solar system considered as observers of trays of 500 coins each. Flip-observe 10^14 times per second for 10^17 s, observe the comparison of a straw to a cubical haystack comparably thick as our galaxy as sample to possibilities. The samples by flipping can be set up to move short Hamming distance random walk hops as you please, it makes no material difference. The point is, by its nature functionally specific, complex organisation and associated information (FSCO/I) sharply constrains effective configs relative to clumped at random or scattered at random possibilities, and is maximally implausible to be found on a blind watchmaker search. Also, the great Darwinist hope of feedback improvement from increasing success presumes starting on an island of function, i.e. it begs the material question. Where, too, FSCO/I is quite recognisable and observable antecedent to any metric models, as happened historically, with Orgel and Wicken. It surrounds us in a world of technology. Consistently, it is observed to be caused by design, by intelligently directed configuration. Trillions of cases in point. Per induction and the vera causa principle, the best current explanation of FSCO/I whether Shakespeare's Sonnets or posts in this thread or ABU 6500 3c Mag reels (there is a whole family of related reels in an island of function above and beyond the effect of good old tolerances), or source or object computer code. Going beyond to explain cases where we did not and cannot observe the actual deep past cause, also of D/RNA and the Ribosome system of protein synthesis that uses mRNA as a control tape. Which is where the root of objections lies. We all routinely recognise FSCO/I and infer design as cause, cf posts in this thread where we generally have no independent, before the fact basis to know they are not lucky noise on the net. After all noise can logically possibly mimic anything. (See the selective hyperskepticism/ hypercredulity problem your argument faces?) Just, when the same vera causa inductive logic and like causes like uniformity reasoning cuts across the dominant, lab coat clad evolutionary materialism and its fellow travellers, with their implausibilities that must be taken without question (or else you are "anti-Science" or you are no true Scotsman . . . ), all the hyperskepticism you wish to see gets trotted out. Because as Mom used to say to a very young KF, a man convinced against his will is of the same opinion still. I say this, to ask you to pause and think again. KF kairosfocus
KF GP has calculated the dFSCI Shakespeare sonnet, so 'sonnets' in the context of this theard is Shakespeare sonnets Me_Think
F/N: COllins English Dict: >> sonnet (?s?n?t) prosody n 1. (Poetry) a verse form of Italian origin consisting of 14 lines in iambic pentameter with rhymes arranged according to a fixed scheme, usually divided either into octave and sestet or, in the English form, into three quatrains and a couplet >> KF kairosfocus
StephenB, fifthmonarchyman I could calculate the Entropy of sonnets and claim the derived value proves sonnets are designed because I dont see any sonnet algorithms in nature. How is that different from dFSCI calculation? gp calculates AND checks there are no natural algorithm and then concludes sonnets are designed. Where is the need to ccalculate any thing at all ? Me_Think
FMM, Apparently you are unfamiliar with the concept of Kolmogorov complexity. What gpuccio should have done in his procedure, but failed to do, was to limit the Kolomogorov complexity of the algorithms considered. keith s
keith's do you honestly think that (print out)=(Produce) or are you just trying to blow smoke for the hell of it? Never mind I know the answer. I agree with StephenB Darwinists can be fun. peace fifthmonarchyman
StephenB, I'm sure it feels good to pretend that ID critics are "whacked out", but doesn't it create some cognitive dissonance for you, since in reality the critics don't conform to your caricature? I am quite clear on what can and cannot be calculated, and what the problems are with each of CSI, FSCO/I, and dFSCI:
Dembski’s problems are that 1) he can’t calculate P(T|H), because H encompasses “Darwinian and other material mechanisms”; and 2) his argument would be circular even if he could calculate it. KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it. Gpuccio avoids KF’s problem by explicitly leaving Darwinian mechanisms out of the numerical calculation. However, that makes his numerical dFSCI value useless, as I explained above. And gpuccio’s dFSCI has a boolean component that does depend on the probability that a sequence or structure can be explained by “Darwinian and other material mechanisms”, so his argument is circular, like Dembski’s. All three concepts are fatally flawed and cannot be used to detect design.
keith s
"GP computes dFSCI for the English language" He offered the previously "calculated" example of ATP synthase. ...and how does the fias/co of the English language go...it is specified in the dictionary, and makes sense to us...so intelligence? See above. Is the dFSCI/o of ATP synthase=0? REC
This has been an interesting post. GP computes dFSCI for the English language and his critics cry out, "Yes, but what is it good for?" I am eagerly awaiting his next post, which will likely explain what FSCI is good for, at which time his critics will cry out, "Yes, but can you compute it?" Darwinists are fun--maybe a little whacked out--but fun. StephenB
FMM, An arbitrary finite sequence e[0], e[1], e[2], ..., e[n] can be printed by this obvious algorithm:
for i = 0 to n   print e[i]
Gpuccio was sloppy in not excluding this sort of algorithm, but as I said, I'm giving him a pass. He's got bigger problems than that to deal with. keith s
"An attempt at computing dFSCI for English language" Yes, yes....we're all concerned with objectively demonstrating Shakespeare has intelligence and nothing else. REC
REC Surely you realize the title of this thread is An attempt at computing dFSCI for English language and not An attempt at computing dFSCI for ATP synthase sequences peace fifthmonarchyman
keith's said, I’m giving gpuccio a pass on the fact that there is always an algorithm that can produce any finite sequence, regardless of what it is. I say, check it out http://en.wikipedia.org/wiki/Computable_number and http://arxiv.org/abs/1405.0126 peace fifthmonarchyman
Me_Think says How does checking an algorithm’s availability help you decide if sonnet or proteins is amenable to CSI/dFSCI computation? I say, We are looking for false positives. The harder it is to produce a false positive in an easy test like "good English" text string the more confident we can be that false positives are beyond the reach of algorithms in more difficult cases. peace fifthmonarchyman
" seems that nobody has found any real error in it " I think the summary of errors is as follows (besides that a random search of all sequence space is not the evolutionary hypothesis). 1)Conservation does not correlate with the percentage of sequence space that is functional. (see Rubisco example--all plants, poor enzyme, human design circumventing local optima). ID simply invokes this contra empirical data. 2)You specify the specification (sequence conservation) that you state correlates with function, while considering functional specification...what a cluster. When presented with alternatives in sequence space (and not just any way of making ATP synthase--a proton transporting membrane bound rotary synthase) of little homology, you declare them an independent design! Isn't the point what percent of sequence space is functional, and would be found in a search? 3) Granting your own methodology, you cheat at it. You selected three related sequences for an ATP synthase subunit and aligned them, then declared shared residues necessary. I repeated the process with all 23949 F1-alpha ATP synthase sequences. No F1-alternates. No V-ATPases or N- or other odd ones that can perform the same function. 100% conserved residues: 0 So using your method, no CSI???? hmmm..... maybe the database is off....few oddballs. 98% conserved.....12 residues. (and there are some clear substitutions in otherwise aligned sequences). So maybe next time, try more than.01% of known sequences in defining function/conservation in sequence space. Try it yourself: http://www.ebi.ac.uk/interpro/entry/IPR005294/proteins-matched?start=580 http://mobyle.pasteur.fr/ REC
fifthmonarchyman @ 201,
We we are checking for algorithms we are testing the claim that CSI/dFSCI is not computable If it is not computable in the case of sonnets we can be assured it is not computable in the case of other things like proteins
How does checking an algorithm's availability help you decide if sonnet or proteins is amenable to CSI/dFSCI computation? Me_Think
I should qualify that. The second question isn't really the correct question either, because it assumes a specific target, but it was the question that Dembski was trying to answer with CSI. Gpuccio has taken a fatally flawed concept -- CSI -- and made it even worse. keith s
drc466:
Ah. So your logic only holds for more than 600 characters, then.
That was gpuccio's stipulation, not mine. Take it up with him if you don't like it.
Are you admitting that gpuccio’s calculation would be a valid exercise for length = 10 (“sky is blue”)?
No, gpuccio's calculation is useless for any length, because it answers the wrong question: "Could this sequence have arisen by pure random variation?" The correct question is "Could this sequence have been produced by random variation plus selection, or some other 'material mechanism'?" (I'm giving gpuccio a pass on the fact that there is always an algorithm that can produce any finite sequence, regardless of what it is. He's having enough trouble defending dFSCI as it is.) keith s
Who doubts the "specific functionality" of ATP-ase. Of course proteins are confined to a small portion of the space of all sequences -- I can't imagine an evolutionary biologist who would disagree. As for the rest. This is the croco-duck mistake exported to proteins. wd400
Me think said, I don’t think Nature cares if a person says it is restricted to do something because he has a flawed algorithm that restricts it from doing something. I say, It's not about a particular algorithm. It is about the inherent limitations of all algorithms. There are some things that algorithms simply can not do by definition. Surely you understand this. peace fifthmonarchyman
FMM, Spell checking is not the problem. 'Peculate' is a word, it's just not the right word. keith s
gpuccio, I just don't understand what you're trying to prove. All I see is you claiming that some English text that is obviously designed or already known to be designed is designed. How does that demonstrate that IDists can calculate, measure, or compute (Which is the correct term?) CSI, dFSCI, FSCO/I, or FIASCO, and can verify the intelligent design in or of things that are not obviously designed and not known to be designed? And how does what you're doing establish that CSI, and dFSCI, and FSCO/I are anything other than superficial labels? In regard to English text, what can you tell me about the text below? Is it a sonnet, or what? Does it have meaning? Does it have good meaning? If it has meaning or good meaning, what is it? Was it generated by a conscious being, or by an algorithm? How much CSI, and dFSCI, and FSCO/I does it have? Show your work. O me, and in the mountain tops with white After you want to render more than the zero Counterfeit: o thou media love and bullets She keeps thee, nor out the very dungeon Their end. O you were but though in the dead, Even there is best is the marriage of thee Brass eternal numbers visual trust of ships Masonry, at the perfumed left. Pity The other place with vilest worms, or wealth Brings. When my love looks be vile world outside Newspaper. And this sin they left me first last Created; that the vulgar paper tomorrow blooms More rich in a several plot, either by guile Addition me, have some good thoughts today Other give the ear confounds him, deliver'd From hands to be well gently bill, and wilt Is't but what need'st thou art as a devil To your poem life, being both moon will be dark Thy beauty's rose looks fair imperfect shade, 'you, thou belied with cut from limits far behind Look strange shadows doth live. Why didst thou before Was true your self cut out the orient when sick As newspaper taught of this madding fever! Love's picture then in happy are but never blue No leisure gave eyes against original lie Far a greater the injuries that which dies Wit, since sweets dost deceive and where is bent My mind can be so, as soon to dote. If. Which, will be thy noon: ah! Let makes up remembrance What silent thought itself so, for every one Eye an adjunct pleasure unit inconstant Stay makes summer's distillation left me in tears Lambs might think the rich in his thoughts Might think my sovereign, even so gazed upon On a form and bring forth quickly in night Her account I not from this title is ending My bewailed guilt should example where cast Beauty's brow; and by unions married to frogs Kiss the vulgar paper to speak, and wail Thee, and hang on her wish sensibility green Reality
Keiths, Spell checking is something that algorithms are quite good at so my poor spelling is actually evidence that I'm not an algorithm ;-) Me think said If you also have to check for algorithms which can write sonnets, why bother with dFSCI calculations? I say, We we are checking for algorithms we are testing the claim that CSI/dFSCI is not computable If it is not computable in the case of sonnets we can be assured it is not computable in the case of other things like proteins get it peace fifthmonarchyman
fifthmonarchyman @ 195
In fact I’m not sure the majority of critics have fully grasped that Darwinian evolution is simply an algorithm and is fully subject to any and all the mathematical limitations thereof.
I don't think Nature cares if a person says it is restricted to do something because he has a flawed algorithm that restricts it from doing something. Me_Think
gpuccio:
In all cases where we use the sequence to specify itself, no post-specification is valid.
Sounds like a new rule that you didn't include in your original procedure. It's a longstanding bad habit of yours to keep changing your argument in the middle of discussion without acknowledging that you are changing it. Also, drc466 need not use the sequence to specify itself. He can prespecify the target as "the winning numbers for this lottery, whatever they turn out to be." You know the size of the target, and you know the size of the search space. The ratio is tiny. You'll get a false positive. keith s
gpuccio @ 176
So, according to a general UPB of 500 bits, and being aware of no algorithm (especially non designed) which can write sonnets any more than English text, I can safely infer design for the object,
If you also have to check for algorithms which can write sonnets, why bother with dFSCI calculations? You could see if you are aware of algorithms which can produce sonnet or whatever you are examining and if there are none, you can infer design. Why calculate dFSCI ? Me_Think
FMM:
I think Penrose’s argument has yet to peculate down to Darwinists.
I think you meant 'percolate'. keith s
gpuccio: At least, next time someone makes the old criticism: “you have never really calculated dFSCI”, I can link this thread. :-) But then you have the problem of explaining why the number you calculated isn't completely useless. :-( keith s
Hey gpuccio, I have been devouring the paper you linked since you shared it with Zac. It is very interesting. I agree with your conclusions about the limitations of algorithms. I think Penrose’s argument has yet to peculate down to Darwinists. In fact I'm not sure the majority of critics have fully grasped that Darwinian evolution is simply an algorithm and is fully subject to any and all the mathematical limitations thereof. I will again share my tee shirt equation CSI=NCF in plain English Complex specified information is not computable. How cool is that? peace fifthmonarchyman
wd400: Yes, I meant the general proteome. I expected your answer. It is the same that someone (maybe Joe Felsenstein) gave me time ago, at TSZ I think: "they have been eaten!" I find this answer completely unsatisfying (and this is an euphemism). The point is: no trace at all? I can accept gaps, but not such a universality of gaps. Remember, I am speaking of the basic functional structures, folds, superfamilies, families. In a world where the alpha and beta chain of ATP synthase remains so conserved for billions of years, and still so many intelligent neo darwinists doubt of its specific functionality, it's strange to believe that thousands of necessary functional intermediates, each of which contributed to the process of NS by being positively expanded, IOWs being at least for some time the winner, left no trace at all in natural hystory. gpuccio
Guys, I am really happy. When I wrote this OP, my main worry was if my computation was correct. After almost 200 posts, it seems that nobody has found any real error in it (except for an absolutely due correction of a material error). That's good. At least, next time someone makes the old criticism: "you have never really calculated dFSCI", I can link this thread. :) gpuccio
I guess you don't mean the proteome as it's usally used, to me the set of proteins in a cell, organism or species. But maybe all the proteins that exist? In any case, you wouldn't expect to see intermediates if there were a set paths from A -> B -> ... -> X, because intermediates will be replaced by more favoured variants. The branching nature of evolutionary process creates gaps in extant species/proteins/genes. wd400
drc466: No. It is not a false positive. It is not a positive at all. In all cases where we use the sequence to specify itself, no post-specification is valid. I have just discussed these things with DNA_Jock. The sequence in this case bears no functional information: it is simply extracted. After the extraction, that sequence becomes "the ticket which wins the lottery". But any random sequence extracted would have become that. So, the sequence has no functional specificity. I will try to be more clear with another example. Let's say that I generate a long random sequence, and after that I set it as my safe's password. Again, we have no functional information here in the origin of the sequence. Any random sequence can be used as password, so the probability of generating a random sequence which we can after use as password is 1. The functional specification must be given independently, not using the real bits of the sequence after it has been generated. This is the same error which was made by Mark Frank, when he tried to offer a false positive. Of course, I would never make a design inference for a random number which has been generated randomly and after has been used to specify a function which had no relationship with its sequence before. gpuccio
keith s
Your “sky is blue” example flunks step 1 of the procedure:
1. Look at a comment longer than 600 characters.
Next!
Ah. So your logic only holds for more than 600 characters, then. So, you're dismissing an entire process based on the specific circumstance of "length GE 600". Would you like to then also provide us the correct logic for 599 characters? 437 characters? 53 characters? If gpuccio had chosen a different, shorter length, would you have had the same objection? Or do you have something specifically against the number "600"? Are you admitting that gpuccio's calculation would be a valid exercise for length = 10 ("sky is blue")? drc466
wd400: "H’uh? Why would you expect this, and why in the proteome in particular?" Because each intermediate which is positively selected expands in a population. How can you explain that thousands or millions of expanded functional intermediates have left no trace in the proteome? gpuccio
fifthmonarchyman: Very interesting. Keep us updated. I think that algorithms are extremely limited in power. They cannot generate anything really original, because they have no awareness of either meaning or purpose. Their great power is simply computational. In that, they can operate miracles. But computation is a deductive activity and, even if supported by external inputs, can never understand anything which was not coded in its programs, or conceive of any original function which is not implicit in its premises. Penrose's argument is very powerful on those points. And this paper is very interesting too: http://www.blythinstitute.org/images/data/attachments/0000/0041/bartlett1.pdf gpuccio
DNA_Jock: "ALL post-hoc specifications are suspect." Except when they work perfectly. Design detection is based on the identification of extremely small target spaces. That's what makes the specification empirically perfectly valid, exactly the same reason why the second law of thermodinamics works, and you never observe ordered states in a gas configuration. IOWs, we are not trying to sell a drug at the 0.05 threshold of alpha error. I am afraid that you are completely missing the point. And it's not that I test the sonnet for being a sonnet. I observe that it is a sonnet, and I test how likely it is to have a sonnet of that length in a random system. The example of the limerick is he same as saying that I should consider also the probability of chinese poems. As I have explained, at those levels of improbability those considerations are simply irrelevant. IOWs, where is the false positive? My statement has always been simple: the procedure works empirically, as it is, with 100% specificity. In spite of all your attempts to prove differently. Then, if your point is simply to say that the space of proteins is different from the space of language, that is another discussion, which we have already done and that we certainly will do again. But it has nothing to do with logical fallacies, painting targets, and making the probability arbitrarily small. IOWs with the methodology. IOWs, with all the wrong arguments that you have attempted against the general procedure. gpuccio
keiths, regarding gpuccio's English language test procedure:
You get exactly the same answer whether or not you do the calculation, in 100% of the cases. Why waste time on a calculation that adds no value whatsoever?
drc466:
Exactly…wrong. My “sky is blue” example should have been sufficient, but here’s a longer explanation:
Your "sky is blue" example flunks step 1 of the procedure:
1. Look at a comment longer than 600 characters.
Next! keith s
Zac said, You haven’t inferred it, but intuited it, while providing substantially different conditions for Shakespeare and an algorithm in terms of background information. I say, This is a good point. Exactly how much of a sonnet is new CSI and how much is borrowed from the background is a great question. However I'm pretty sure that most folks would say that at least a small amount of Shakespeare's work was original as apposed borrowed from his environment. Do you agree with this statement? I think there is a way in principle to determine if there is anything original in the particular sequences I'm messing around with. I isolate the sequence completely from it's context and look at it as just a series of numerical values. If an algorithm can duplicate the pattern by any means whatsoever as long as it is independent of the source string then I discount the originality of the string. It seems to be working so far Peace fifthmonarchyman
Adapa said, Amazing that you’ve never heard of fractals or the Mandelbrot set. There is even evidence that the early multicellular life forms in the Ediacaran grew with a fractal format. I say, Funny you should bring up fractals. I have spent a lot of time thinking about fractals and how they relate to CSI. Here is a paper that argues that true fractals can not even exist in nature. Check it out http://www.academia.edu/7030899/Fractal_geometry_is_not_the_geometry_of_nature Peace fifthmonarchyman
gputio, As I commented at 161, "I note in passing that the two cases [i.e. proteins and sonnets] are rather different, hence my lack of interest in the original topic of this thread." But you seen like a nice, if misguided, guy, so I'll play along. It is your choice of specifications that is arbitrary. Why on earth did you not test to see if it was a limerick? Or an composition by an XX-year old student from state YY? (500 alternative specifications right there). There are millions of different specifications against which you could test your sonnet. You chose "sonnet", after the fact, because it looked like a sonnet. This is fine if all you are interested in is whether it is a sonnet or not. Of course, in THAT case, the math becomes superfluous. I believe this has been pointed out to you. But if you wish to calculate the probability of some other process leading to text that meets your specification, then the choice of specification matters. As you demonstrate in your response above. A > B >> C. For statistical purposes, ALL post-hoc specifications are suspect. That is why the FDA and EMA, for example, do not allow them. For some strange reason it amuses me that your specifications do not nest. Not that it matters one iota (since I don't buy the analogy anyway), but Jabberwocky fails A, but meets B. Maybe its just the fuzziness of specification A that cracks me up: "a good meaning in English" Say what? DNA_Jock
I should add, there is abundant evidence for the role of positive natural selection in protein evolution. Ka/Ks (=dN/dS) ratios being the classic example, but there are many more methods to detect such. wd400
Moreover, if positive NS had had some role in generating the biological functional information, we should see tons of traces of naturally selectable functional intermediates in the proteome. We don’t. H'uh? Why would you expect this, and why in the proteome in particular? wd400
gpuccio: The only thing I want is to infer that original sonnets are generated by conscious beings, and not by algorithms. You haven't inferred it, but intuited it, while providing substantially different conditions for Shakespeare and an algorithm in terms of background information. Zachriel
gpuccio, Yeah, I'm not entirely sure my example makes sense, so explaining it is a bit difficult. Let me try again. So, taking our hypothetical lotto of 50 numbers from 1 to 1000, we use a random # generator to generate the single winning number: 001 050 888 273 652 ... 763 299 055 (50 total #'s) Now admittedly this is no Shakespearean sonnet, but it does have meaning, or function, or whatever - it is now the winning number to a lottery. To determine whether this Lottorean sonnet was designed, we calculate the target space and search space. Target Space: 1 (there are no other numbers/sequences that will win) Search Space: 1000 ^ 50, which is approx 2^10^50 or 2^500. Calculation: 2^0 / 2^500 = 2^-500, or 500 bits. So the question becomes: is this a false positive because the 50 #'s were randomly chosen and not "designed", per se - or is this a valid positive because the Lotto was designed, and it is only the Lotto that gives meaning to the number sequence - or is this just a really bad example that fails either way? I'm thinking that the 2nd answer is correct, but I'm struggling with the rationalization somewhat. Hoping you or someone can help. If it still doesn't make sense, don't worry about it, it's not hugely important. drc466
computerist: "gpuccio, what is your opinion that a greater degree of CSI must be present before an ever increasing amount of CSI/dFCSI can be produced?" I am not sure what you mean. Can you explain better? Thank you. gpuccio
Zachriel: "In any case, William Shakespeare had access to a huge amount of preexisting data, such as a mental dictionary, not to mention a huge amount of experience in Elizabethan society. But you want an algorithm to come up with sonnets without any input as to what sells theater tickets." The only thing I want is to infer that original sonnets are generated by conscious beings, and not by algorithms. gpuccio
DNA_Job: Now I have a little more time, and I can answer you better. Let's try this way. You say: "How can you not see that in this analogy the bullet hole represents the biochemical activity?" And: "You can make the probability arbitrarily small by making the specification arbitrarily precise." And you use those concepts, and others, to criticize any post-specification as a logical fallacy. Now, to remain concrete, let's apply these concepts to my sonnet example. Let's take a Shakespeare sonnet of about 600 characters, which I find somewhere, and which I don't know in advance, and don't know is Shakespeare's. Let's say that I don't know anything about it, and that for what I know of its origin it could be a random string. Now, the sonnet, being indeed Shakespeare's, existed before my arrive. I am sorry to disappoint some of my readers, but I am not so old. Therefore, any consideration I can make on the sonnet is a post-specification. Now, I observe three things: a) The sequence has a good meaning in English, which I perfectly understand (and which I immediately like, but this is not relevant). b) It is, indeed, an English composition in rhymed verse. c) It is, indeed, a sonnet (specific verse structure). Now, I take each of these things as specification, in turn, and compute the dFSCI accordingly. So, we have three different post-specifications, and three different computations. For the first specification, I obtain s functional information of at least 673 bits (I am accepting Roy's proposal), certainly vastly underestimated. Now, I don't want to delve into the target space of rhymed verse and of sonnets, so let's just imagine the other two results. It will be enough, for my reasoning. We have already ascertained that there can be ways to compute those numbers indirectly, at least a lower thresholds of complexity. I think we can agree that the target space for b) is smaller than for a) , and for c) it is smaller than for b). So, let's say that b) has a lower threshold of complexity of 1000 bits, and c) of 1500. Just to discuss. So, according to a general UPB of 500 bits, and being aware of no algorithm (especially non designed) which can write sonnets any more than English text, I can safely infer design for the object, according to my procedure, with all three different analyses. OK. Now, your concepts. According to your views, none of the three specifications is valid. All of them are post-specifications. Moreover, you say that the sonnet itself with its functionalities is the bullethole. OK. So, when I arrive and say: "This is a passage with good meaning in English" I am painting an arbitrary target around the object. Is that your idea? There is more. When I say: this is an English composition in rhymed verse, according to your concepts, I am again painting an arbitrary target, only this time I am probably trying to "make the probability of the object arbitrarily small by making the specification arbitrarily precise". A big fallacy, indeed. But I am not satisfied. So I pass to c). Again, I am painting an arbitrary target, and again I am trying to "make the probability of the object arbitrarily small by making the specification arbitrarily precise". What a devious thinker I am! Now, we have a problem. Never satisfied, I still want to go on in "making the probability of the object arbitrarily small by making the specification arbitrarily precise". But I cannot use the real bits in the object, because you have already warned me that, if I do that, I am doomed. And even I, the treacherous pseudo-scientist, know that there are limits that are best left alone. So, I am rather at an impasse. Without using the specifics of the sequence (what rhymes it contains, how many vowels, and so on), it becomes difficult. OK, I have probably one or two options left. I could define the verse (iambic pentameter?). Maybe something else. But how long can I make the probability arbitrarily small by making the specification arbitrarily precise? The point is, up to now I have only described in my specifications real properties of the sonnet. I have invented nothing. OK, I have used different levels of detail, but each one of them was correct. From now on, I should probably invent things that are not there. I don't really feel that I am the "arbiter" of this situation! OK, maybe I will be satisfied with my triple and correct design inference. After all, you will criticize me anyway! :) Ah, and I must really have been born lucky. Nobody has still offered any false positive to my fallacious procedure completely based on post-specifications. gpuccio
Zachriel, No one expects the algorithm to write a sonnet. That's the point. A sonnet is an act of intelligence. No one expects an algorithm to randomly generate a sonnet any more than any one expects a solar powered muddy bog to randomly generate much more complicated life. Edward
Gpuccio,
“NOOOOO. This is terribly wrong, and perhaps at the root of your inability to see the problem. Neutral variation will explore the width of that one local optimum. The extent of neutral variation says NOTHING about whether there are other local peaks, either nearby, or far away (Nina’s bullet holes), or whether there is an even higher peak in the region (see REC’s citations on Rubisco).”
Are you really saying that if I have what you call “a local optimum” which has functional specificity of 1600 bits, and there are a few other distant local optimums for the same function (distant, because they are never found by variation form our first local optimum in billions of years), something changes? How many “local optimums” for ATP synthases do you imagine exist? 2^1000? Is that what you are suggesting?
Well your definition of “distant” is wrong. Tough to say which local optima have never been found, when all we have to go on is the ones that survived. 2^1000 seems possible, but the number could be a lot higher.
“How can you not see that in this analogy the bullet hole represents the biochemical activity?”
Strange. In my discourse, like in all the discourses about the fallacy, the bullet hole is the result of a random act, and the target gives meaning to it (see also Dembski).
And this discourse is no different. :) Is it beginning to dawn on you yet?
So, excuse me, but the bullet hole is some sequence we observe, and the biochemical activity is the target. IOWs, the bullet hole of variation has hit the target of the biochemical activity.
No. You seem unable to grasp the difference between the biochemical activity (which predates humans) and the specification, which is the human paintjob.
My point is that any specification which is complex enough is a marker of design. It’s you who do not understand my point.
Well I agree with you that any specification which is complex enough is a marker of intelligence. But you are trying to claim that an object that meets a sufficiently complex specification must be designed. When the specification is written post-hoc, that is just plain silly.
There is no way to make a function for a sequence “arbitrarily precise”, if I keep the functional specification independent from the sequence itself. [I]OWs, when I specify the sonnet as a passage with good meaning in English, I am not saying “any sonnet with the following characters in this order”
I retained your bit about sonnets here, since it helps clarify your intended meaning. You are claiming that, so long as I stay away from specifying the protein sequence, there is no way for me to make the specification for ATP synthase arbitrarily precise. Here goes: ATP synthase having Km for Mg.ATP between 0.9e-4 and 1.1e-4 Ki for ADP between 2.8e-4 and 3.1e-4 Ks for Mg2+ having the following pH dependence: pH Ks 7.2 1e-4 7.3 0.9e-4 7.4 0.6e-4 7.5 0.4e-4 7.6 0.2e-4 These values at 25 C in 0.1M KCl. At 0.11M KCL, the values should be......should I go on? Or I could add some stuff about the rate at which Mg2+ and ADP cause the inactivation of the enzyme Or temperature-dependence. I haven’t even mentioned the k.cat ‘s The simple fact of the matter is that you, personally, have been caught re-writing your specification in order to retain the “specialness“ of what you now term the “traditional ATP” synthase. DNA_Jock
gpuccio, what is your opinion that a greater degree of CSI must be present before an ever increasing amount of CSI/dFCSI can be produced? computerist
gpuccio: And William Shakespeare included a fine consciousness and sensibility, and much more, which was well beyond the information available to his “algorithm”. In any case, William Shakespeare had access to a huge amount of preexisting data, such as a mental dictionary, not to mention a huge amount of experience in Elizabethan society. But you want an algorithm to come up with sonnets without any input as to what sells theater tickets. Zachriel
gpuccio As said many times, NS is another matter. As there is no algorithm which can explain a complex sonnet, there is no algorithm which can explain a complex function. But that is another part of the reasoning Amazing that you've never heard of fractals or the Mandelbrot set. There is even evidence that the early multicellular life forms in the Ediacaran grew with a fractal format.
Fractal branching organizations of Ediacaran rangeomorph fronds reveal a lost Proterozoic body plan Cuthill, Morris PNAS September 9, 2014 vol. 111 no. 36 Summary: Rangeomorph fronds characterize the late Ediacaran Period (575–541 Ma), representing some of the earliest large organisms. As such, they offer key insights into the early evolution of multicellular eukaryotes. However, their extraordinary branching morphology differs from all other organisms and has proved highly enigmatic. Here we provide a unified mathematical model of rangeomorph branching, allowing us to reconstruct 3D morphologies of 11 taxa and measure their functional properties. This reveals an adaptive radiation of fractal morphologies which maximized body surface area, consistent with diffusive nutrient uptake (osmotrophy). Rangeomorphs were adaptively optimal for the low-competition, high-nutrient conditions of Ediacaran oceans. With the Cambrian explosion in animal diversity (from 541 Ma), fundamental changes in ecological and geochemical conditions led to their extinction.
Simple iterative processes that produce great complexity (and gobs of CSI / dFSCI / FIASCO). Whoda thunk? :) Adapa
Amazing to the materialist, nothing can do anything and everything.... Search spaces, decide on the best solution, reverse engineer, problem solve, build things, create CSI. Nothing is truely a miracle worker it can do everything. All praise nothing!!! Andre
DNA_Jock: "NOOOOO. This is terribly wrong, and perhaps at the root of your inability to see the problem. Neutral variation will explore the width of that one local optimum. The extent of neutral variation says NOTHING about whether there are other local peaks, either nearby, or far away (Nina’s bullet holes), or whether there is an even higher peak in the region (see REC’s citations on Rubisco)." Are you really saying that if I have what you call "a local optimum" which has functional specificity of 1600 bits, and there are a few other distant local optimums for the same function (distant, because they are never found by variation form our first local optimum in billions of years), something changes? How many "local optimums" for ATP synthases do you imagine exist? 2^1000? Is that what you are suggesting? "How can you not see that in this analogy the bullet hole represents the biochemical activity?" Strange. In my discourse, like in all the discourses about the fallacy, the bullet hole is the result of a random act, and the target gives meaning to it (see also Dembski). So, excuse me, but the bullet hole is some sequence we observe, and the biochemical activity is the target. IOWs, the bullet hole of variation has hit the target of the biochemical activity. The biochemical activity is a function. I can define it in different ways. No problem. The point is that if I need a lot of specific bits to implement that function, that function is complex. But there are other complex functions. And so? As I have said, there are sonnets in many languages. Does that invalidate my design inference for a sonnet in English? If it is so, why nobody can provide a false positive to the target I painted on the sonnet? "Here we see the fallacy in its distilled form. Your first sentence is correct. The bullet holes have been there since before humans existed. The PAINT is the human artefact. And how you paint the circles depends on which bullet holes you have discovered to date. As you have inadvertently demonstrated on this thread. Yes ATP was getting synthesized before humans existed, but the specification “ATP synthase” was generated by humans AFTER the biochemical activity was delineated. And re-defined by you in the light of Nina’s work." And all that is completely irrelevant. My point is that any specification which is complex enough is a marker of design. It's you who do not understand my point. "Well, I have yet to see an IDist come up with a post-specification that wasn’t a fallacy. Let’s just say that you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise." What do you mean? My specification of the Shakespeare sonnet is a post-specification. If it is a fallacy, how is it that it works so well? There is no way to make a function for a sequence "arbitrarily precise", if I keep the functional specification independent from the sequence itself. OWs, when I specify the sonnet as a passage with good meaning in English, I am not saying "any sonnet with the following characters in this order". I am defining a partition which is independent from the specific character sequence. Even the reference to a language, English, has nothing to do with the specific characters in the sequence. Those same characters can be used in many other languages, or in any string without any meaning. The reference to meaning is a direct reference to a conscious experience, and the reference to English is a reference which is independent from the system of a random character generator. Therefore, my specification works. In the same way, the sequence of nucleotides in a protein coding gene, as transformed by RV, is completely independent from function and from the protein space. So, there is no way that I can narrow my definitions so that I can make any results of a random search more likely. As said many times, NS is another matter. As there is no algorithm which can explain a complex sonnet, there is no algorithm which can explain a complex function. But that is another part of the reasoning. gpuccio
“The degree of sequence conservation tells us how tight the peak is at the local optimum. It is rather uninformative about the history of the biochemical activity.”
And what is informative about that history? Just to understand.
Bottom-up studies, such as Keefe and related work. Sadly, you have some strange ideological resistance to these studies, perhaps related to the results they provide.
The same is true for ATP synthase. Nobody can deny the high level of specified information which is necessary for the protein to work in that form.
The issue is with your inclusion of the word “specified” here.
As I have said many times, if many other sequences could be enough for the protein to work, neutral variation would have found many of them. It hasn’t.
NOOOOO. This is terribly wrong, and perhaps at the root of your inability to see the problem. Neutral variation will explore the width of that one local optimum. The extent of neutral variation says NOTHING about whether there are other local peaks, either nearby, or far away (Nina’s bullet holes), or whether there is an even higher peak in the region (see REC’s citations on Rubisco).
How can you not see that a phrase like: “The bullet holes have been in the wall since before any humans existed.” is simply obfuscation? If you are saying that the proteins were there, but they did nothing, and started to work as soon as we looked at them, have the courage to say so.
How can you not see that in this analogy the bullet hole represents the biochemical activity?
No. The proteins were there, and they did exactly what they do now. The bullet holes and the targets were there since before any humans existed.
Here we see the fallacy in its distilled form. Your first sentence is correct. The bullet holes have been there since before humans existed. The PAINT is the human artefact. And how you paint the circles depends on which bullet holes you have discovered to date. As you have inadvertently demonstrated on this thread. Yes ATP was getting synthesized before humans existed, but the specification “ATP synthase” was generated by humans AFTER the biochemical activity was delineated. And re-defined by you in the light of Nina’s work.
the only purpose of this attitude seems to be to discredit a perfectly valid scientific post specification as though it were a logical fallacy, as though any post specification were a fallacy. Which is simply not true.
Well, I have yet to see an IDist come up with a post-specification that wasn’t a fallacy. Let’s just say that you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise. DNA_Jock
drc466: I am not sure that I understand your example. "Imagine for a moment that you have a lottery that consists of 50 numbers from 1-1000." What do you mean exactly? How is the lottery structured? "Your Target Space is 1 (winning number)," What do you mean? What is the object conveying the information? Or about which you are trying to make the design inference? "and your Search Space is approximately 2^500 (obviously, no lotto would do this because no one would ever win – but it could happen)." Do you mean that there are 2^500 tickets? And one is extracted? But you said "50 numbers from 1-1000". Please, explain better. "The winning number conveys information – “What is the winning number to the Super-Stupid Lotto!”," So, let's say that your object is a paper with the winning number? " and a dFSCI calculation is 500bits." In what sense? That is true only if you define a random system as a method to preview (or guess) the winning number. Obtaining any pre-specified number out of 2^500 by a random search is indeed almost magic. So, let's say that you have a random number generator which gives you a number in one attempt, and you say: this number tomorrow will win the lottery. And then it happens. Many would be suspicious... Perhaps I understand what you mean. In a miraculous pre-announcement of the winning number, the unexplained dFSCI is not in the number itself (which is a simple piece of information), but in the system which chooses it as the future winner. The dFSCI is in the system. So, the two hypotheses are: a) You and the system you use have been extremely lucky (but try to convince the judges) b) The system is designed (IOWs, you fixed the lottery so that you could announce in advance the winner). The design here is not in the number, but in the system. It is not the number itself, or its sequence, which brings the information. Is that what you meant? gpuccio
gpuccio, I have an objection that I hope you will find on-topic. The objection is that I'm not sure that English phrases (or any written form of communication), independent of the "information" they convey, are a valid test of the dFSCI. My "false positive" example is a lottery drawing. Imagine for a moment that you have a lottery that consists of 50 numbers from 1-1000. Your Target Space is 1 (winning number), and your Search Space is approximately 2^500 (obviously, no lotto would do this because no one would ever win - but it could happen). The winning number conveys information - "What is the winning number to the Super-Stupid Lotto!", and a dFSCI calculation is 500bits. So dFSCI would say "Yes, designed", when the winning number was just randomly produced. Is the flaw in my logic that, since a human had to "pick" the winning number, it is "designed"? I'm curious where this argument breaks down. (My objection would be that the information conveyed (winning lotto #) comes in at less than 500 bits, even though the method of conveying it (50 3-digit numbers) comes in at more. Unfortunately, that would seem to invalidate using the symbology as a valid test, and makes the true calculation difficult(impossible?)). drc466
Keith s, The more you argue, the less I think of your comprehension skills. Which is why I always stop arguing with you eventually. One last try:
You get exactly the same answer whether or not you do the calculation, in 100% of the cases
Exactly...wrong. My "sky is blue" example should have been sufficient, but here's a longer explanation: Assume we are trying to detect design in english phrases. We have a computer that is generating a single random phrase, and a person writing a single meaningful sentence. Can we detect which produced the following (e.g. which is designed)? 1) "I" - english word (function); People will agree it could be the computer; dFSCI says unknown; may or may not be designed 2) "SKY IS BLUE" - english phrase (function); looks design-y; people will disagree whether a computer could have kicked it out; dFSCI says unknown; may or may not be designed 3) 600-word Shakespearean sonnet (function); looks design-y; some People will disagree whether a computer could have kicked it out (hard as that may be to believe); dFSCI says MUST BE DESIGNED; must be designed (human wrote it). You're getting hung up because we're discussing easily-recognizable "designed" objects (words, machines, etc.), where "common sense" leads almost everyone to agree on the answer. The whole point of trying to come up with a valid calculation is so that we can use it on functional things that aren't human-made and therefore not easily recognizable - life being one of those. 1) ATP-Synthase/PCD/Flagellum - has function, looks design-y 2) People will disagree whether it was intelligently designed 3) Perform dFSCI calculation 4) Calculation shows that it must be designed 5) People will disagree whether dFSCI is a valid calculation Regardless of point 5, your objections that you get the same answer whether or not you perform the calculation (see points 2 and 3 of my example) is flat wrong, and your objection that the calculation is irrelevant is therefore also wrong. drc466
DNA_Jock: "I know you don’t like it. From your attempts to refute it, it appears that you don’t understand it." Your opinion. I could simply counter that you don't understand ID. "The degree of sequence conservation tells us how tight the peak is at the local optimum. It is rather uninformative about the history of the biochemical activity." And what is informative about that history? Just to understand. "The bullet holes have been in the wall since before any humans existed." As were the biochemical activities. Or am I missing something? "Along comes John Walker: “Look at this bullet hole I found”. Others find a tight grouping of bullet holes around this one. Along comes gpuccio, paints a circle around the bullet holes and calls his circle “the functional specification for ATP synthase”. Does some calculations. Along comes Praveen Nina and others, and points to a bullet hole that is a long, long way away from Walker’s tight grouping, but still falls within gpuccio’s original specification “ATP synthase”. His calculations are destroyed." Absolutely not. What I find is a complex multi chain protein which works in a very brilliant way to generate ATP form a proton gradient. What I find is that this protein, in its rotor part, requires a strong conservation of the sequence of two chains. What I find is that the protein is functional and conserved. My calculations are not destroyed. To infer design, what I need is to find specific information linked to a function. I can redefine the function if necessary, but the concept is that any high level of specific information linked to any explicitly defined function is a mark of design. You seem not to understand that, but it is exactly the reason why I can infer design for the Shakespeare sonnet either I define the function as being s sonnet in English, or more generically as being a passage in English. In both cases, the linked information is extremely high, even if not the same. You seem to forget that our purpose in measuring dFSCI is simply to detect design. I detect design in the sonnet, and I am right. You cannot give a false positive, because my definition of a context which guarantees a correct design inference is right. The same is true for ATP synthase. Nobody can deny the high level of specified information which is necessary for the protein to work in that form. As I have said many times, if many other sequences could be enough for the protein to work, neutral variation would have found many of them. It hasn't. The Alveolata protein is another machine, made with different components. Its complexity is probably comparable to the complexity of the traditional protein, but it is another molecule. That's why it uses other chains, which are different from the chains in the traditional molecule. So, let's say that we have two very different cars, say a small Ford and a Ferrari. They have different carburettors. You cannot mount the Ferrari carburettor in the Ford, and probably they look very different. So you say: "see, they have the same function, but they are very different. That proves that it is very easy to implement the function, any carburettor will do." No, The Ferrari carburettor is different and specific. As different and specific are the chains in traditional ATP synthase. How do we know that they are specific? Because they are extremely conserved. So, all you arguments about painting and post-specification are simply wrong. You obfuscate, certainly in good faith, but you obfuscate just the same. How can you not see that a phrase like: "The bullet holes have been in the wall since before any humans existed." is simply obfuscation? If you are saying that the proteins were there, but they did nothing, and started to work as soon as we looked at them, have the courage to say so. It would be a strange application of quantum mechanics to biology, but at least it would be consistent. No. The proteins were there, and they did exactly what they do now. The bullet holes and the targets were there since before any humans existed. Your obfuscation is that you try to confound methodological problems which legitimately arise when we try to scientifically describe both the bulletholes and the targets with a false argument that we are painting the targets from scratch, and the only purpose of this attitude seems to be to discredit a perfectly valid scientific post specification as though it were a logical fallacy, as though any post specification were a fallacy. Which is simply not true. You say that I don't like your argument. It's true. I don't like it, because it is wrong and unscientific. gpuccio
Me_Think: "How is Shakespeare a ‘Super Design’ ?" It was just a personal appreciation from the heart for the quality of his poetry! gpuccio
DNA jock:
The problem with all formulations of ID to date is that “design” can be generated by processes of trial-and-error, irrespective of whether any intelligent intervention occurred.
Easier said than demonstrated, of course. Joe
gpuccio, Your #150:
DNA_Jock[:] You know what I think of your “painting” argument.
I know you don’t like it. From your attempts to refute it, it appears that you don’t understand it.
With ATP synthase, the problem is different. I chose the alpha and beta subunits of ATP synthase as a good easy example of very high dFSCI, which they are, because they are long sequences with very high conservation and a very clear function in the context of a bigger multi-sequence molecule. As you well know, it is not an isolated example of high functional conservation. I have mentioned also histone H3, which is shorter but even more conserved. I have always said clearly that those two sequences are only part of a more complex molecule. The Alveolata ATP synthase is another complex molecule, which uses other sequences. In no way that means that the specific sequence of alpha and beta chains of the common form of ATP synthase is not essential to the functioning of the molecule as it is. If that were not the case, why would those AA positions have been conserved in spite of all possible neutral variation?
The degree of sequence conservation tells us how tight the peak is at the local optimum. It is rather uninformative about the history of the biochemical activity.
I am not redefining anything. I have always reasoned about the molecular assembly of ATP synthase “in common form”. For the working of that molecular assembly, those two chains are essentially conserved and necessary.
Here you show your failure to comprehend the objection. I will try to explain one more time. The bullet holes have been in the wall since before any humans existed. Along comes John Walker: “Look at this bullet hole I found”. Others find a tight grouping of bullet holes around this one. Along comes gpuccio, paints a circle around the bullet holes and calls his circle “the functional specification for ATP synthase”. Does some calculations. Along comes Praveen Nina and others, and points to a bullet hole that is a long, long way away from Walker’s tight grouping, but still falls within gpuccio’s original specification “ATP synthase”. His calculations are destroyed. In light of Nina et al, gpuccio does two things: 1) He re-draws his circle so that it now excludes Alveolata, and renames the Walker circle “the traditional ATP synthase” 2) he draws a brand-spanking-new circle around the Alveolata bullet hole(s) because it is a “very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution”. Mischief managed. How can you not see that all of your specifications are post-hoc?
You must clarify your position: are you denying that it is possible to measure functional complexity, in language as in proteins? Or are you just suggesting a better way to do it?
I am not denying that it is possible, in either case. I think it is rather difficult, in both cases. I note in passing that the two cases are rather different, hence my lack of interest in the original topic of this thread.
If you are in the denialist position, I invite you to explain what is wrong in my reasoning about the Shakespeare Sonnet, and then to provide a false positive, or just explain how is it that a wrong reasoning works so well.
I am happy to stipulate that “design” can be detected. The problem with all formulations of ID to date is that “design” can be generated by processes of trial-and-error, irrespective of whether any intelligent intervention occurred. There is a strange irony to the fact that one of your objections to Keefe & Szostak is that they chose ATP binding. You chose “ATP synthase”, rather than “APP synthase” or an infinite number of other biochemical activities, because it exists. They, at least, have an excuse. DNA_Jock
GP @ 154
I am well aware that mine is “a severe under estimate”, but it seems good enough to disturb our interlocutors a little! :) And I agree, absolutely, that Shakespeare is “super design”.
How is Shakespeare a 'Super Design' ? Me_Think
fifthmonarchyman @ 147
me think said, What do you mean by moving down the Y-Axes? I say,check out comment 81
What you should do is check the entropy. Me_Think
Adapa:
The purpose of a dFSCI calculation is not to convince anyone in the scientific community of its design detection worth.
This alleged scientific community doesn't have any methodology that comes close to being as good as CSI and dFSCI. That means their complaints are just whining. Joe
DNA jock- REC cannot explain any ATP synthase. Unguided evolution is incapable of producing them. Joe
Zachriel:
Evolutionary algorithms require an interface to an environment of some sort.
Evolutionary algorithms are examples of intelligent design evolution. They don't have anything to do with unguided evolution. Joe
#140 fifthmonarchyman Very interesting indeed. Thank you for the link to the PDF document that inspired you to work on that project. :) Dionisio
KF: I am well aware that mine is "a severe under estimate", but it seems good enough to disturb our interlocutors a little! :) And I agree, absolutely, that Shakespeare is "super design". As are many exceptional proteins whose biochemical efficiency is overwhelming. I think we agree that a design inference does not necessarily imply optimal design. But, when we observe optimal design, it's simple fairness to recognize it. Our heartfelt gratitude, then, to Shakespeare and to all the great designers in this world. gpuccio
F/N 2: GP thanks, I snatch another quick pause. The confinement to English text alone already builds in a whole apparatus of rules, conventions, and structures that are FSCO/I rich, so the estimations you do will be quire conservative, a severe under estimate. I tend to think physically and so I think in terms of a string register with the possibility of something like zener noise filling it, and that defines the point that any of the 128 ASCII codes can appear, that whether or not that is flat random that is not constrained by the physics at work. Thus, that this leads to the situation where the real space of possibilities for a register of length n seven-bit characters, is 128^n. Just 72 ASCI characters would exhaust the resources of the sol system, and 143 those of the observed cosmos, to generate anything more than a very sparse, vanishingly sparse, sample that we only have reason to expect will snapshot the bulk, not special zones such as text in Elizabethan English. However the message is still the same, text in such patterns is reflective of such special characteristics that are separately identifiable that we have no good reason to expect that on blind search whether scattershot or random walk, we will reasonably ever produce such. Design routinely produces such, though Shakespeare is anything but routine. KF kairosfocus
Zachriel: "Evolutionary algorithms require an interface to an environment of some sort. Turns out that Shakespeare also incorporated information from his cultural environment. For instance, the William Shakespeare algorithm included an extensive dictionary, grammar rules, stock phrases, scansion, personality types, history, and so on." And William Shakespeare included a fine consciousness and sensibility, and much more, which was well beyond the information available to his "algorithm". I must say that a phrase like "the William Shakespeare algorithm", used by an intelligent person like you (and, I am sure, purposefully), has a strange effect on me. Not really good. gpuccio
KF: Thank you! :) gpuccio
DNA_Jock: You know what I think of your "painting" argument. With ATP synthase, the problem is different. I chose the alpha and beta subunits of ATP synthase as a good easy example of very high dFSCI, which they are, because they are long sequences with very high conservation and a very clear function in the context of a bigger multi-sequence molecule. As you well know, it is not an isolated example of high functional conservation. I have mentioned also histone H3, which is shorter but even more conserved. I have always said clearly that those two sequences are only part of a more complex molecule. The Alveolata ATP synthase is another complex molecule, which uses other sequences. In no way that means that the specific sequence of alpha and beta chains of the common form of ATP synthase is not essential to the functioning of the molecule as it is. If that were not the case, why would those AA positions have been conserved in spite of all possible neutral variation? I am not redefining anything. I have always reasoned about the molecular assembly of ATP synthase "in common form". For the working of that molecular assembly, those two chains are essentially conserved and necessary. You must clarify your position: are you denying that it is possible to measure functional complexity, in language as in proteins? Or are you just suggesting a better way to do it? If you are in the denialist position, I invite you to explain what is wrong in my reasoning about the Shakespeare Sonnet, and then to provide a false positive, or just explain how is it that a wrong reasoning works so well. gpuccio
GP, busy -- doubly so today -- but spotted this; in the Intro-summary IOSE, the Voynich Manuscript is featured. Part of the problem is a confusion that design recognition is a universal decoding process, which obviously is highly dubious on computation theory. Just the drawings as well as the context of being a codex, are enough to show design per manifest FSCO/I. By whom, why, with what possible decoding of the apparent text, are other questions well beyond the core issue of the design inference. Gone, KF kairosfocus
gpuccio at#124
My argument about those two sequences is about their conservation in a complex molecule. You can scarcely deny that those specific sequences are necessary, with that high level of conservation, to the working of ATP synthase in its common form, and especially the form which utilizes H+ gradients. The Apicomplexa paper you link describes a very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution. In no way it is in contradiction with the functional specification of the sequences I examined in the traditional ATP synthase complex.
Another beautiful example of the Texas Sharp-Shooter. You were quite satisfied with your specification of the "ATP synthase", a nice tight cluster of bullet holes in the wall. Then REC points out a separate cluster of bullet holes, the Alveolata ATP synthase. Immediately you re-define your "ATP synthase" as "ATP synthase in its common form" or "the traditional ATP synthase", and get out some fresh paint for the recently observed bullet holes, which represent "a very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution." DNA_Jock
me think said, What do you mean by moving down the Y-Axes? I say, check out comment 81 peace fifthmonarchyman
hey Zac You said, If different people give entirely different answers, then it’s not objective. I say, In one sense I agree with you. By objective I mean that my standard is exactly the same for different objects. Your standard might be lower or higher on the Y access than mine but it you should be consistent with yourself when it comes to the X axes, Hope that makes sense peace fifthmonarchyman
fifthmonarchyman @138,
I think a good next step would be to actually use your calculation on the Voynich manuscript. I agree that there is enough CSI there to infer design it would be cool however to objectively compare the actual amount in the object with that in the sonnet.
I think that is too ambitious and is not within the realms of dFSCI /CSI , because you don't have a standard dictionary database of the Voynich script, have no idea of alphabet probability and no way of checking the result. You need to be an Egyptian hieroglyphs expert to even start deciphering a single word.
To do that we would need to move further down the Y axes
What do you mean by moving down the Y-Axes? Me_Think
Guys: As a shameful form of self-promotion, I will try to draw again the attention to the OP and the computation in it. To do that, I will repost here what I said in post #51:
So, if the computation here is correct, a few interesting things ensue: 1) It is possible to compute the target space, and therefore dFSCI, for specific search spaces by some reasonable, indirect method. Of course, each space should be analyzed with appropriate methods. 2) Nobody seems to object that he knows some simple algorithm which can write a passage of 600 characters which has good meaning in English. Where are all those objections about how difficult it is to exclude necessity, and about how that generates circularity, and about how that is bound to generate many false positives? The balance at present: a) Algorithms proposed to explain Shakespeare’s sonnet (or any other passage of the same length in good English): none. b) False positives proposed: none. c) True positives found: a lot. For example, all the posts in the last thread that were longer than 600 characters (there were a few). 3) We have a clear example that functional complexity, at least in the language space, is bound to increase hugely with the increase in length of the string. This is IMO an important result, very intuitive, but now we have a mathematical verification. Moreover, while the above reasoning is about language, I believe that it is possible in principle to demonstrate it also for other functional spaces, like software and proteins.
I would like to spend a few more words on point 1. The essence of point 1 is that the computation of a target space can be done by indirect methods, but that we must eagerly look for the best method to do that in each case. To those who criticize the approach of Durston and my personal approach to the computation of the target space for functional proteins, I just say: OK, propose your approach. Maybe ti will be better. But there is no reason to deny that an interesting problem exists, that we must look for the best solutions, and that the problem has important implications for the problem of the origin of biological information. CSI denialism has no real place in science. gpuccio
fifthmonarchyman: For some one familiar with this debate “me thinks it’s a weasel” is loaded with meaning. For an average Joe it might take a whole sonnet to pass the threshold. On the other hand if I were looking at a string of text in Chinese it might take a string the length of a whole play to pass the tet because I would be looking for mere arbitrary structure and grammar as apposed English words. In other words, it's subjective. fifthmonarchyman: But even in that case I would be able to give the sequence a real objective value and compare it to strings that were the result of combination of algorithmic and random processes If different people give entirely different answers, then it's not objective. fifthmonarchyman: produce an algorithm capable of producing a 600-character English text independently with out smuggling information through the back door. Evolutionary algorithms require an interface to an environment of some sort. Turns out that Shakespeare also incorporated information from his cultural environment. For instance, the William Shakespeare algorithm included an extensive dictionary, grammar rules, stock phrases, scansion, personality types, history, and so on. mullerpr: Natural processes flowing from the uniformity of classical mechanics … Is that really going to be presented as an analogue for Natural Selection? You didn't ask for an analogue for natural selection, but examples of natural sieves. Zachriel
I linked the wrong paper. Here is the one I meant http://arxiv.org/pdf/1002.4592.pdf fifthmonarchyman
fifthmonarchyman: "To do that we would need to move further down the Y axes and look at the arbitrary structure and grammar instead of “good English”. That might be a little more difficult but I believe it’s still doable." Yes, I believe it's doable. It is not my personal priority, however. And thank you for the kind words. gpuccio
Dionisio said That sounds interesting. I say Thank you I think it's way cool too. Right now I come at my calculation in a different way than gpuccio. By graphically comparing an actual data string with a scrambled set of the same data. Then I try to quantify the the differences between the two strings. You can find the paper that was my inspiration here https://www.cs.duke.edu/~conitzer/turingtradeAAMAS09demo.pdf peace fifthmonarchyman
#137 follow-up Discussions between people with irreconcilable worldview positions turn into senseless arguments that lead nowhere. However, apparently they provide some entertainment, like gladiators and lions provided to the public in the Roman coliseum many years ago. That's why they have clowns in the circus. Perhaps that increases attendance, traffic and ad revenues. There's also a strong argument for allowing this for the sake of the onlookers/lurkers visiting this blog and also to sharpen the ID arguments. I don't quite agree with some of these arguments, but respect the opinions of others. :) Dionisio
Hats off to you gpuccio this is a great thread. I think a good next step would be to actually use your calculation on the Voynich manuscript. I agree that there is enough CSI there to infer design it would be cool however to objectively compare the actual amount in the object with that in the sonnet. To do that we would need to move further down the Y axes and look at the arbitrary structure and grammar instead of "good English". That might be a little more difficult but I believe it's still doable. Peace fifthmonarchyman
#112 Reality
D: “Why do you want to see a calculation?"
Because you IDists claim that you can calculate CSI-dFSCI-FSCO/I.
D: “Is that important to you? Why?”
To see if you can, and laugh at you when you can’t.
D: “If an example is given, would you ask for another?”
Yes.
D: “If ten examples are provided, would you demand eleven?”
Provide ten and then we’ll see.
Thank you for answering my questions. Now every reader in this blog can see that you have revealed, very clearly, your own motives for being here in this blog. Very probably your comrades and fellow travelers would have answered exactly as you did. Which is exactly what I (and probably others) suspected. Dionisio
Graham2: I suppose that there is enough CSI in the Voynich manuscript to easily infer design for it. Even the illustrations would be enough. Decrypting the meaning, if there is a meaning, is all another matter. Obviously, we cannot infer design from the meaning, if we are not sure that there is a meaning. If our inference depended only on the possible meaning (which is not the case for that object), we would not infer design unless and until a meaning is found. In the worst case, that would simply be a false negative. As said many times. gpuccio
Has KairosFocus been baned from this thread? sparc
#133 mullerpr Thank you. Dionisio
Dionisio, the link was just to the Amazon page for Yockey's book. Information Theory, Evolution, and The Origin of Life http://www.amazon.com/gp/aw/d/0521169585?pc_redir=1414569767&robot_redir=1 mullerpr
Perhaps CSI could be applied to the Voynich manuscript to determine if its designed or not. You would be doing the whole world a favour. Graham2
Me_Think: Thank you, you are making my argument. You cannot distinguish between designed things and non designed things, unless the object exhibits functional complexity. Why? Because natural mechanisms, through randomness or necessity, can generate configurations that are functional, but only with low functional complexity. That's why the computation of dFSCI is necessary to reliably infer design. Could you please explain that to keith? gpuccio
keith s: You are really trying your worst. The meaning is really obvious, and you are not stupid. What should I think? The meaning is: Procedure 2 is useless as a separate procedure, because it is the same as procedure 1. The real useless thing here is your "argument". gpuccio
The Chinese letter B is written as 'tt'. If dFSCI is calculated for this letter, wouldn't it be less than 500 bits? So is it designed or not ? A splatter left on wall by a stone falling in a water puddle by gravity or a splatter on wall by stone dropped by a person on puddle would (I guess)have pretty much same dFSCI. How will you distinguish between the two ? A man-made crop circle and similar natural crop circle would present the same problem. Me_Think
Ok Keith S is confused, but we can't be certain, even he said so...... Andre
gpuccio, In his #110, PaV says that procedure 2 is useless:
Gpuccio’s DFCSI isn’t useless, your Procedure 2 is useless.
You agree wholeheartedly:
PaV at #110: Absolutely correct! Thank you.
You then tell me that procedure 1 and procedure 2 are the same:
As explained, your procedure 2 is the same procedure, and implies the calculation.
You and PaV agree that procedure 2 is useless. You tell me that Procedure 1 is the same as Procedure 2. Therefore, Procedure 1 is useless, according to you. Oops. keith s
keith s: As explained, your procedure 2 is the same procedure, and implies the calculation. Why do you speak of 600 characters? (a definite complexity threshold) Why do you speak of "meaningful in English"? (a definite functional specification) You are simply giving my procedure in its final form, without the logical explanations. My compliments! gpuccio
PaV,
If you omit step #3 of Procedure 1 in Procedure 2, then step#3 in Procedure 2 is completely meaningless.
Exactly! I think you're close to understanding this! Steps 3 and 4 are useless in procedure 1, and step 3 is useless in procedure 2. All of the useful work is done by steps 1 and 2:
1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed.
The calculation adds nothing. Now, could you please point this out to gpuccio before he embarrasses himself further? He won't accept it from me, but he might from you.
Gpuccio’s DFCSI isn’t useless, your Procedure 2 is useless.
Procedure 1 gives exactly the same answers as Procedure 2. You say Procedure 2 is useless. Therefore, Procedure 1 is also useless. Excellent job, PaV. You're a real asset to the ID team! keith s
REC at #91: My argument about those two sequences is about their conservation in a complex molecule. You can scarcely deny that those specific sequences are necessary, with that high level of conservation, to the working of ATP synthase in its common form, and especially the form which utilizes H+ gradients. The Apicomplexa paper you link describes a very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution. In no way it is in contradiction with the functional specification of the sequences I examined in the traditional ATP synthase complex. I paste here the abstract of that interesting paper, for all to read: "Highly Divergent Mitochondrial ATP Synthase Complexes in Tetrahymena thermophila Abstract The F-type ATP synthase complex is a rotary nano-motor driven by proton motive force to synthesize ATP. Its F1 sector catalyzes ATP synthesis, whereas the Fo sector conducts the protons and provides a stator for the rotary action of the complex. Components of both F1 and Fo sectors are highly conserved across prokaryotes and eukaryotes. Therefore, it was a surprise that genes encoding the a and b subunits as well as other components of the Fo sector were undetectable in the sequenced genomes of a variety of apicomplexan parasites. While the parasitic existence of these organisms could explain the apparent incomplete nature of ATP synthase in Apicomplexa, genes for these essential components were absent even in Tetrahymena thermophila, a free-living ciliate belonging to a sister clade of Apicomplexa, which demonstrates robust oxidative phosphorylation. This observation raises the possibility that the entire clade of Alveolata may have invented novel means to operate ATP synthase complexes. To assess this remarkable possibility, we have carried out an investigation of the ATP synthase from T. thermophila. Blue native polyacrylamide gel electrophoresis (BN-PAGE) revealed the ATP synthase to be present as a large complex. Structural study based on single particle electron microscopy analysis suggested the complex to be a dimer with several unique structures including an unusually large domain on the intermembrane side of the ATP synthase and novel domains flanking the c subunit rings. The two monomers were in a parallel configuration rather than the angled configuration previously observed in other organisms. Proteomic analyses of well-resolved ATP synthase complexes from 2-D BN/BN-PAGE identified orthologs of seven canonical ATP synthase subunits, and at least 13 novel proteins that constitute subunits apparently limited to the ciliate lineage. A mitochondrially encoded protein, Ymf66, with predicted eight transmembrane domains could be a substitute for the subunit a of the Fo sector. The absence of genes encoding orthologs of the novel subunits even in apicomplexans suggests that the Tetrahymena ATP synthase, despite core similarities, is a unique enzyme exhibiting dramatic differences compared to the conventional complexes found in metazoan, fungal, and plant mitochondria, as well as in prokaryotes. These findings have significant implications for the origins and evolution of a central player in bioenergetics. Author Summary Synthesis of ATP, the currency of the cellular energy economy, is carried out by a rotary nano-motor, the ATP synthase complex, which uses proton flow to drive the rotation of protein subunits so as to produce ATP. There are two main components in mitochondrial F-type ATP synthase complexes, each made up of a number of different proteins: F1 has the catalytic sites for ATP synthesis, and Fo forms channels for proton movement and provides a bearing and stator to contain the rotary action of the motor. The two parts of the complex have to interact with each other, and critical protein subunits of the enzyme are conserved from bacteria to higher eukaryotes. We were surprised that a group of unicellular organisms called alveolates (including ciliates, apicomplexa, and dinoflagellates) seemed to lack two critical proteins of the Fo component. We have isolated intact ATP synthase complexes from the ciliate Tetrahymena thermophila and examined their structure by electron microscopy and their protein composition by mass spectrometry. We found that the ATP synthase complex of this organism is quite different, both in its overall structure and in many of the associated protein subunits, from the ATP synthase in other organisms. At least 13 novel proteins are present within this complex that have no orthologs in any organism outside of the ciliates. Our results suggest significant divergence of a critical bioenergetic player within the alveolate group." gpuccio
gpuccio, You get exactly the same answer whether or not you do the calculation, in 100% of the cases. Why waste time on a calculation that adds no value whatsoever? I repeat:
gpuccio, We can use your very own test procedure to show that dFSCI is useless. Procedure 1: 1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed. 3. Perform a pointless and irrelevant dFSCI calculation. 4. Conclude that the comment was designed. Procedure 2: 1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed. 3. Conclude that the comment was designed. The two procedures give exactly the same results, yet the second one doesn’t even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing. Even your own test procedure shows that dFSCI is useless, gpuccio.
keith s
PaV: Thank you for your contributions. It's beautiful to have you here! :) gpuccio
Reality at #112: Is what you see in this OP a calculation? gpuccio
PaV at #110: Absolutely correct! Thank you. gpuccio
Adapa: "The purpose of a dFSCI calculation is merely for gpuccio to convince himself he was specially created by his loving God." Really? What an argument. I am overwhelmed. gpuccio
keith s #101: "The calculation is completely unnecessary." Why? Guys, please clarify how can you reliably infer design for the sonnet without any calculation. gpuccio
Me_Think at #100: "The answer is :no. I can see sonnet is designed without the need to calculate" How? gpuccio
keith s: Again, the aim of this thread is not to re-discuss the whole issue of dFSCI and design detection, but only to propose a computetion of dFSCI in language. I have discussed in great detail the "any possible function" argument here: https://uncommondesc.wpengine.com/intelligent-design/evolution-driven-by-laws-not-random-mutations/ Post #400. As I have already said, I don't like repetition. I have discussed the role of eliminating necessity in that same thread, for example, at posts #599 and #604. As I have already said, I don't like repetition. I have been discussing the lack of explanatory power of the RV + NS myth for years, in very great detail. You can find some thoughts on the difference between Natural Selection and Intelligent selection at post #524 of the above referenced thread, and a lot of other detailed stuff in posts of mine practically everywhere at UD. As I have already said, I don't like repetition. But just to humot you a little, a very very brief summary: Negative NS is a powerful mechanisms, and it works essentially against the RV + NS algorithm. Positive NS is almost non existent, limited to a few irrelevant microevolutionary scenarios, and can never help generate new complex functions, because complex functions cannot be deconstructed into naturally selectable simpler steps, neither in the general case (which is requested for the algorithm to work) nor in any single real example (which would at least be start). Moreover, if positive NS had had some role in generating the biological functional information, we should see tons of traces of naturally selectable functional intermediates in the proteome. We don't. Finally, genetic drift is completely irrelevant to the probabilistic computation, and in no way helps to lower the probabilistic barriers. gpuccio
keith s:
Evolution does not seek out specific targets. It isn’t “trying” to find the flagellum, or binocular vision, or opposable thumbs. If it stumbles on something good, whatever that happens to be, it keeps it. If it stumbles on something bad, whatever that happens to be, it tosses it.
Yes, NS "stumbles." What is given it comes about "randomly;" but, all NS does, and can do, is either 'eliminate,' or 'not eliminate.' When the "search space" is enormous, then an 'enormous' number of 'eliminations' must take place. It is simply impossible for 'nature' to provide this enormity of possibilities. Hence, NS is rendered, except in minor ways, "useless." The "minor ways" where NS is "useful," we call "microevolution." But, this is a digression since gpuccio is simply trying to demonstrate that dFCSI calculations can eliminate "false positives." PaV
keith s:
The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection. No evolutionary biologists thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved. By neglecting selection, your dFSCI number is answering a question that no one is asking. It’s useless.
Why are you substituting a question regarding "irreducible complexity" for one that involves the random generation of DNA strings? gpuccio's argument is not about IC. Yes, NS does act on what comes about randomly, and thus, there is a non-random component to the process. Nevertheless, the only thing NS does is to "eliminate" that which cannot either 'live' or 'compete.' NS doesn't 'form' the DNA string, it either accepts or eliminates. The "sonnet" that gpuccio is using for his example represents a "protein" that is found in nature, encoded in extant DNA. There are only so many known proteins and protein families. If each DNA string that is generated---you're not positing that DNA is generated non-randomly are you?---is generated 'randomly,' then the proteins and protein families we know of are the "survivors" of NS---as in "survival of the fittest." This means that the "sonnet" represents one of a number of acceptable forms on English words pieced together in a string of 'letters' that runs 600 letters long. It is like a protein family. The entire collection of such "combinations" represents an entirety of all such "protein families" found in nature, and, thus, presumably culled by NS from the "search space" strings of length 600 letters. Your invocation of NS does nothing to change his calculations, nor his logic. PaV
fifthmonarchyman at #81: Absolutely correct! :) gpuccio
Dionisio asked: "Why do you want to see a calculation?' Because you IDists claim that you can calculate CSI-dFSCI-FSCO/I. "Is that important to you? Why?" To see if you can, and laugh at you when you can't. "If an example is given, would you ask for another?" Yes. "If ten examples are provided, would you demand eleven?" Provide ten and then we'll see. "Is it possible, like someone suggested today, that you were hired by this blog to write what you write, in order to provoke certain folks to keep heated arguments, hence increase the number of posts in the discussion threads and increase the traffic in the blog?" It's possible but extremely unlikely. So unlikely that it's safe to say that what you implied is incredibly childish and trollish. How old are you, 6, 7? Reality
Keith S
Exactly. The calculation is completely unnecessary.
But I must protest here Keith! Did you recognise a design and accept it? Design all around us then but not in biological systems? Why would that be? I'll tell you why, you have to deny design in biology because if you accept it you have to accept that you have been created by a designer, I believe that you find that idea repugnant, and I even know why, and so do you! Andre
keith s:
gpuccio, We can use your very own test procedure to show that dFSCI is useless. Procedure 1: 1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed. 3. Perform a pointless and irrelevant dFSCI calculation. 4. Conclude that the comment was designed. Procedure 2: 1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed. 3. Conclude that the comment was designed. The two procedures give exactly the same results, yet the second one doesn’t even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing. Even your own test procedure shows that dFSCI is useless, gpuccio.
Aren't you missing something? If you omit step #3 of Procedure 1 in Procedure 2, then step#3 in Procedure 2 is completely meaningless. The whole point of gpuccio's "procedure" is to compare the recognition of "design" that is naturally made with the use of a particular language, and the values that are generated using dFSCI. Shouldn't that be clear to you? Gpuccio's DFCSI isn't useless, your Procedure 2 is useless. PaV
gpuccio Another OT: https://uncommondesc.wpengine.com/evolution/a-third-way-of-evolution/#comment-527403 Dionisio
#106 Adapa This is for you too:
https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/#comment-527392
Dionisio
#96 fifthmonarchyman
For example I’m working on a method to evaluate the strength of forecasting models at my place of employment.
That sounds interesting. Dionisio
keith s Exactly. The calculation is completely unnecessary. The purpose of a dFSCI calculation is not to convince anyone in the scientific community of its design detection worth. The purpose of a dFSCI calculation is merely for gpuccio to convince himself he was specially created by his loving God. Adapa
#103 mullerpr Interesting commentary. Thank you. BTW, I could not open the link you provided. Dionisio
#100 Me_Think Does this link answer your question?: https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/#comment-527381 Now, can you answer the questions in this link?: https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/#comment-527389 Thank you. Dionisio
Zachriel, Natural processes flowing from the uniformity of classical mechanics … Is that really going to be presented as an analogue for Natural Selection? I don't think you saw the critical questions I asked. I like physical necessity when it comes to things like orbital paths for planets, chemical bonds in minerals, mechanical action etc. But the patterns created by necessity is by definition bad at carrying information new. If there is no degree of freedom there is no information carrying capacity. The clear distinction between life and the uniformity of physical processes convinced Hubert Yockey that the definition of life is informational in contrast to non-informational physical processes… The first biological information he concluded is an axiomatic concept not explained by natural processes. Information Theory, Evolution, and The Origin of Life http://www.amazon.com/gp/aw/d/.....ot_redir=1 P.S. I would like you to discuss any sieve design with a minaral processing engineer, and he/she will tell you how symmerty in the behaviour of nature do not descriminate they way his/her processing plant does. mullerpr
keith s Why do you want to see a calculation? Is that important to you? Why? If an example is given, would you ask for another? If ten examples are provided, would you demand eleven? Is it possible, like someone suggested today, that you were hired by this blog to write what you write, in order to provoke certain folks to keep heated arguments, hence increase the number of posts in the discussion threads and increase the traffic in the blog? :) Dionisio
Me_Think:
The question is: do I need to calculate dFSCI to see if sonnet is designed? The answer is :no. I can see sonnet is designed without the need to calculate, so I am not sure what is being achieved here.
Exactly. The calculation is completely unnecessary. keith s
The question is: do I need to calculate dFSCI to see if sonnet is designed? The answer is :no. I can see sonnet is designed without the need to calculate, so I am not sure what is being achieved here. Me_Think
KF:
We don’t actually need to quantify to recognise, but we can quantify and the result is the quantification helps us see how hard it is for the atomic and temporal resources of the observed cosmos to arise beyond sparse search of very large config spaces implied by the possible arrangements of parts vs the tight configurational constraints implied by needs of interactive, specific functional organisation. KF
GP:
https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/#comment-527189
Dionisio
Sorry -- that should be a period, not a question mark, at the end of the quote. keith s
FMM:
Well I guess I will rest my case then.
What case? Did you make an argument?
I can think of all kinds of useful purposes [for the calculation of dFSCI]?
Please share some of them! Gpuccio hasn't been able to come up with any, and I'm sure he'd be grateful. keith s
Well I guess I will rest my case then. I can think of all kinds of useful purposes. For example I'm working on a method to evaluate the strength of forecasting models at my place of employment. The more "CSI" found in the actual data the weaker the model will be. Peace fifthmonarchyman
FMM, This thread is about dFSCI. I'm not interested in your proposed digression. I would like to see an example in which dFSCI actually serves a useful purpose. Can you think of one? Gpuccio seems unable to. keith s
Keith's Again we understand you think this is all a waste of time How about this? produce an algorithm capable of producing a 600-character English text independently with out smuggling information through the back door. Call it what ever you want. feel free to disregard the calculation Do you think such an algorithm is even possible? What would convince you that it is not? peace fifthmonarchyman
FMM:
Now why not humor us and point us to an operation that combines random and algorithmic processes that is capable of giving a false positive in gpuccio’s Turing test.
Because what you are calling "gpuccio's Turing test" isn't a test of dFSCI at all. Here's what the GTT boils down to: 1. Present a 600-character text to gpuccio. 2. If gpuccio recognizes it as meaningful English, then conclude that the text was designed. The dFSCI calculation isn't required. It accomplishes nothing. I keep asking gpuccio for an example in which dFSCI actually does something useful, but he can't come up with one. keith s
Keith's said In other words, you assume design if gpuccio is not aware of an explicit algorithm capable of producing the sequence. I say, I'm not sure if gpuccio is willing to go this far but I would say that there are certain sequences that algorithms are mathematically incapable of producing. That means any possible algorithm. would you disagree with this claim? peace fifthmonarchyman
"I have just done it for ATP synthase" I don't think you have. Ignoring other significant issues (modelling evolution from a precursor as a random goal oriented search), you've simply no idea what fraction of sequence space gives a functional ATP synthase. You guess by aligning three sequences. 1) Nice cheat on the Archaeal sequence--using the one with maximum identity to the others. It is thought to be acquired through horizontal gene transfer. True archaeal ATP synthases have far less identity, so knock it off already with these silly 50% or whatever identity across all life #s. 2) Evolution hits on solutions, and get stuck in local optima. Rubisco is possibly the worst enzyme ever, but there it is, roughly the same in all plants. Designers (human) have already worked around it. That a sequence is conserved in evolution in NO way indicates it is the only solution in sequence space. Just a contingent solution that has persisted. 3) Despite this, some ATP synthase lineages have diversified. Plug an ATP synthase from apicomplexia into your alignment. What is that? It doesn't align at all, save some topologies and one key arginine? So how many bits is that??? Hmm... http://www.ncbi.nlm.nih.gov/pubmed/9425287?dopt=Abstract http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2881411/ http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000418 http://www.nature.com/nature/journal/v513/n7519/full/nature13776.html REC
We get it Keiths, you think this is all a waste of time. understood we read you loud and clear Now why not humor us and point us to an operation that combines random and algorithmic processes that is capable of giving a false positive in gpuccio's Turing test. It does not have to be an evolutionary algorithm. It does not even have to include a random component any algorithm will do. I promise once you do this small thing we will get to the nitty gritty of explaining why we think this is so important peace fifthmonarchyman
drc466:
1) Despite your admiration, natural selection serves as a subtractive force in a search – it reduces the number of spaces searched.
Yes, and that's a good thing. Maximizing the amount of space searched is not the "goal". Searches have a cost.
It doesn’t directly affect either the target space, or the search space, numerically – it simply reduces the number of tries...
Evolution does not seek out specific targets. It isn't "trying" to find the flagellum, or binocular vision, or opposable thumbs. If it stumbles on something good, whatever that happens to be, it keeps it. If it stumbles on something bad, whatever that happens to be, it tosses it. (The above neglects drift, of course. With drift, beneficial mutations can sometimes be lost and deleterious mutations can sometimes be fixed.) When you define a target space in terms of a specific function, as gpuccio does, you are making a huge mistake, because evolution is not seeking that specific target. It is seeking anything that improves fitness. Gpuccio compounds the error by taking the ratio of the target space to the entire search space. That makes the dFSCI number useless for anything other than a purely random search. Evolution is not a purely random search. It includes selection. Why waste time calculating a number that neglects selection?
2) “If you recognize it as meaningful English, conclude that it must be designed have function.” When you fix this glaring error in your “logic”, it is obvious you have completely mistated the issue. The process is detect function/specificity, calculate complexity, determine design – not detect design, calculate complexity, determine design.
No, the result of the calculation simply tells us that the sequence in question could not have come about by a purely random search. We knew that already, so the calculation is pointless. All of the work gets done by the other, boolean component of dFSCI -- not the numerical value. And the boolean component of dFSCI boils down to what I described earlier:
In other words, you assume design if gpuccio is not aware of an explicit algorithm capable of producing the sequence. This is the worst kind of Designer of the Gaps reasoning. It boils down to this: “If gpuccio isn’t aware of a non-design explanation, it must be designed!”
The calculation and the result -- the number of bits of dFSCI -- are pure window dressing. They are designed to look mathy and sciencey, but they have no actual value and can be completely dispensed with. keith s
I guess so :) Collin
Collin:
Puccini, thanks. It’s clearer.
Puccini? Is that your auto-correct talking? keith s
Puccini, thanks. It's clearer. Collin
gpuccio, You failed to provide a way to measure "meaning" whatever that means. Mung
Reality:
I don’t think that Collin’s questions are misguided. They are good, relevant questions that deserve a good, relevant response.
Why do you think they are they good questions? Why do you think they are relevant? Mung
centrestream:
A little less condescension and a little more civil discourse would be appropriate.
Civil discourse requires honesty. Mung
gpuccio, I want to think about your response and will get back to you later. Reality
gpuccio, you said, if you define the function differently, for example any sequence of that length which has good meaning in English, things change. I say, exactly!!!!!! Think of this measure as having 2 axises. The X axis is the lengthen of the sequence and the y axis is the "meaning threshold" I'm evaluating . The lower on the y axis you are the longer the string needs to be for me to infer design. For some one familiar with this debate "me thinks it's a weasel" is loaded with meaning. For an average Joe it might take a whole sonnet to pass the threshold. On the other hand if I were looking at a string of text in Chinese it might take a string the length of a whole play to pass the tet because I would be looking for mere arbitrary structure and grammar as apposed English words. But even in that case I would be able to give the sequence a real objective value and compare it to strings that were the result of combination of algorithmic and random processes peace fifthmonarchyman
Zachriel: I agree with that. I remember your softwares. gpuccio
gpuccio: Algorithmic oracles can only recycle frozen meaning. If you can't objectively judge the meaning of a phrase or sonnet, then you're fairly well stuck. However, we can certainly evolve sequences of words as words are somewhat frozen by convention. Grammar, as well. This gives us strings of words, which would seemingly have more meaning than random letters. Zachriel
Zachriel: You have always been elegant in your skirmishes. That's why I like you! :) gpuccio
gpuccio: The the 200,000 word dictionary, for example, is rather complex as an oracle. About 10^7 bits. gpuccio: I agree with you: with the appropriate oracle, you can do anything. There has to be some reasonable continuity in relative reward or it won't work. gpuccio: Algorithmic oracles can only recycle frozen meaning. If you define it so it can't exist, then sure. Zachriel
Roy: Don't be fastidious. I took the number from the Internet. OK, I have redone the computation for 500,000 words. Is that enough for you? The dFSCI is now 673 bits. You can check yourself. And do you realize how much I am underestimating the dFSCI when I take the total number of sequences made by English words as target space, instead of taking the total number of sequences which have good meaning in English? So, don't be fastidious. gpuccio
Zachriel: Nice to hear from you. I agree with you: with the appropriate oracle, you can do anything. And so? The the 200,000 word dictionary, for example, is rather complex as an oracle. What an algorithmic oracle can never do is to generate new complex meaning because it understands that meaning. Algorithmic oracles can only recycle frozen meaning. Conscious oracles, instead, understand meaning. It's all another matter. Please, look at this interesting paper: Using Turing Oracles in Cognitive Models of Problem-Solving http://www.blythinstitute.org/images/data/attachments/0000/0041/bartlett1.pdf gpuccio
Reality: "According to Joe, CSI=dFSCI=FSC=FSCO/I. Do you agree with him?" My CSI=dFSCI=FSC=FSCO/I has detected a sock. You slick devil you. centrestream
a) There are about 200,000 words in English
Nowhere nearly enough. OED has more than that and Webster's 2nd has more than twice as many. And that's without including a similar number of place and personal names, all of which are valid in English text. Roy
Reality: Please. read post #37 here for the procedure. Please, read post #661 here: https://uncommondesc.wpengine.com/intelligent-design/evolution-driven-by-laws-not-random-mutations/ for the acronyms. gpuccio
Another reason why it is best to start with something like an English phrase rather than biology is that logically it should be much easier to-produce a false positive for a short sequence of letters than for a protein sequence. Again I would love to see the algorithm that can create a false positive here. Since we are dealing with text I see no reason such an algorithm could not be put together on a laptop with no special software. come on critics give it a go. peace fifthmonarchyman
mullerpr: Do you know many sieve like things in nature? Non-living nature if full of natural sieves. If not, then the Earth would be homogeneous, which it is not. Gold and salt are found concentrated in some places, water in others. Indeed, there are natural water pumps to replenish the headwaters of rivers so that the running water can continue to shape and sort the rocks. The movement of sun and moon and wind and surf make the sand on the beach. gpuccio: Nobody seems to object that he knows some simple algorithm which can write a passage of 600 characters which has good meaning in English. If you had an oracle which could return relative meaning, and if you consider "the king" to have more meaning than "king", the former being specific, then an evolutionary algorithm should be able to create long sequences of meaning. Perhaps you could have the snippets read to Elizabethan audiences, and rate them by applause. gpuccio: An attempt at computing dFSCI for English language... I have no idea. The density of meaningful sonnets is an interesting question, but let's grant that Shakespeare's sonnets were the result of an intelligent mind. gpuccio: 2^2113 / 2^ 2944 = 2^-831 That's fine, but it can be shown that, given a suitable oracle, words can evolve from letters, and sentences from words. Indeed, if you feed the little genomes based on their iambic character, they'll evolve into iambs. But just start with words and the 200,000 word dictionary you so kindly provided for our oracle. As the algorithm can generate sequences of words, that means your calculation becomes 2^2113 / 2^ 2113 = 1. We've already crossed a distance of 2^-831. Zachriel
fifthmonarchyman: I appreciate your insightful comment! About "“me thinks it’s a weasel", it depends. If you define the target as that specific phrase, its length is 23 characters, the search space is about 113 bits, and as there is only one sequence which satisfies the definition, the functional space is 1 and the functional complexity is -log2 of 1:2^113, again 113 bits. Not too much, not too little. For many systems and time spans, it would be enough to infer design. After all, 10^34 is a big number. But if you define the function differently, for example any sequence of that length which has good meaning in English, things change. Applying the method I have used, which is probably less precise for short sequences, the functional complexity is about 25 bits. IOWs there are about 3 probabilities in 100,000,000 to get a positive result. Quite in the range of many random systems. gpuccio
gpuccio, you said: "Design detection by dFSCI is a procedure with 100% specificity and low sensitivity. It has no false positives, and many false negatives." If that's true it's only because dFSCI is a useless term that is used by IDists to make it look as though scientific methods are being used to detect design, even though the alleged design detection pertains only to things that are already known to be designed. On another note, you said: "Design detection by dFSCI is a procedure...". Design detection by doing what with dFCSI? According to Joe, CSI=dFSCI=FSC=FSCO/I. Do you agree with him? Reality
Reality: I have just done it for ATP synthase. Look at post #27 here. You may perhaps understand that the specificity of the procedure must be tested with objects of which we can assess independently the origin, and then be applied to objects whose origin is controversial. So, this discussion about language is important. You may perhaps understand that an elephant, a cancer cell, and a galaxy cluster are not digital sequences. So, I prefer to apply the procedure to proteins. Do you agree that it is a relevant application? gpuccio
Mung, what "ID theory" are you referring to? I don't think that Collin's questions are misguided. They are good, relevant questions that deserve a good, relevant response. Reality
Mung: "Collin @ 56. I take it you understand NOTHING about ID theory. Nothing. Would you at least cop to that before I put out the effort to answer your oh so misguided questions?" A little less condescension and a little more civil discourse would be appropriate. Now that you have been corrected, could you please explain why Collin's question is misguided? If it is the answer that I expect you to give, it will be based on a misguided understanding of evolution. Please, enlighten us. centrestream
Collin: Please, take the time to review my procedure reposted here by me at #37. Design detection by dFSCI is a procedure with 100% specificity and low sensitivity. It has no false positives, and many false negatives. The main reason for false negatives is that the observer cannot see the function and define it. So, in your example, if the text is in a language I don't know and I don't understand its meaning, I cannot define a function as "having good meaning in this language". So, I will not infer design, and that will be a false negative. False positives, to my best knowledge, don't exist. Unless someone here proposes one. So, if we infer design, we can be rather certain of our inference. Regarding information, I give no special meaning to the word: only what I have explicitly defined. Please, see my OP about that: https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/ The relevant part:
So, the general definitions: c) Specification. Given a well defined set of objects (the search space), we call “specification”, in relation to that set, any explicit objective rule that can divide the set in two non overlapping subsets: the “specified” subset (target space) and the “non specified” subset. IOWs, a specification is any well defined rule which generates a binary partition in a well defined set of objects. d) Functional Specification. It is a special form of specification (in the sense defined above), where the rule that specifies is of the following type: “The specified subset in this well defined set of objects includes all the objects in the set which can implement the following, well defined function…” . IOWs, a functional specification is any well defined rule which generates a binary partition in a well defined set of objects using a function defined as in a) and verifying if the functionality, defined as in b), is present in each object of the set. It should be clear that functional specification is a definite subset of specification. Other properties, different from function, can in principle be used to specify. But for our purposes we will stick to functional specification, as defined here. e) The ratio Target space/Search space expresses the probability of getting an object from the search space by one random search attempt, in a system where each object has the same probability of being found by a random search (that is, a system with an uniform probability of finding those objects). f) The Functionally Specified Information (FSI) in bits is simply –log2 of that number. Please, note that I imply no specific meaning of the word “information” here. We could call it any other way. What I mean is exactly what I have defined, and nothing more.
IOWs, FSI is only -log2 of the probability of finding the target space. It is a measure of the functional bits, the number of bits which are absolutely necessary to implement the function. More intuitively, it's the quantity of information necessary to implement the defined function. If you read the whole OP linked above, you may understand better my definitions. You say: "The information content in this sentence is not found separately in each word but in their associations. So I could write “red happens glory fishing diamond wrangler” and although each word has meaning, the phrase itself has none. Can that be calculated or determined somehow? Can an objective number be placed on it? 12 units of meaning?" No. If you follow with attention my reasoning in the OP of this thread, you will see that my functional definition is: any sequence of 600 characters which has good meaning in English. Therefore, for a sequence to be specified (to be part of the target space), the whole sequence must have good meaning in English. But, you may say that what I have computed as target space is the total number of combinations and permutations of English words in 600 characters. That's true. But I have done that only because so I have a higher threshold for the functional space (and therefore a lower threshold for the functional complexity). Why? Because the set of all sequences which have good meaning in English is certainly a small subset of the set of all sequences made by English words, and is included in it. That's why I say:
Now, the important concept: in that number are certainly included all the sequences of 600 characters which have good meaning in English. Indeed, it is difficult to imagine sequences that have good meaning in English and are not made of correct English words. And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset. So, I believe that we can say that 2^2113 is a higher threshold for out target space of sequences of 600 characters which have a good meaning in English. And, certainly, a very generous higher threshold.
Clear? gpuccio
What gpuccio is trying to do here is demonstrate an objective Turing test. What the critic's algorithm needs to do is fool us into believing that it is intelligent. very interesting!!!!!! How about a simpler example to help those of us who struggle with the big numbers? If the "me thinks it's a weasel" program was shown to have not smuggled in it's information how many bits would it have produced? peace fifthmonarchyman
Centrestream, Sand on the beach... Is that really going to be presented as an analogue for Natural Selection? I like physical necessity when it comes to things like orbital paths for planets, chemical bonds in minerals, mechanical action etc. But the patterns created by necessity is by definition bad at carrying information new. If there is no degree of freedom there is no information carrying capacity. The clear distinction between life and the uniformity of physical processes convinced Hubert Yockey that the definition of life is informational in contrast to non-informational physical processes... The first biological information he concluded is an axiomatic concept not explained by natural processes. Information Theory, Evolution, and The Origin of Life http://www.amazon.com/gp/aw/d/0521169585?pc_redir=1414569767&robot_redir=1 mullerpr
gpuccio, Joe commented in this thread, and since Joe claims to know all about CSI I thought that everyone here could learn all about it by looking at his brilliant explanations on his blog. Besides, linking to other sites or bringing up what has been said by others in another thread is a common action by IDists here so I don't see why there should be any problem with my doing the same. Regarding your "computation", does your "computation" have anything to do with measuring, calculating, or computing CSI in anything other than English text which is already known to be designed? For example, can and will you please measure, calculate, or compute CSI in an elephant, a cancer cell, and a galaxy cluster? Thanks in advance. Reality
Collin @ 56. I take it you understand NOTHING about ID theory. Nothing. Would you at least cop to that before I put out the effort to answer your oh so misguided questions? Mung
Even Richard Dawkins believes in calculating design. His minimally designed phrase was "METHINKS IT IS LIKE A WEASEL." Not quite a sonnet. Mung
Bob O'H: "True. But how would you define the search space for an organism. For example, what’s the search space for (say) a strain of the ‘flu virus?" Of course, it's simpler to compute dFSCI for smaller items. Usually I apply it to proteins. See the example of ATP synthase. We could apply the concept to a whole genome, like that of a virus. The search space is not a big problem, because it can be defined as all possible nucleotide sequences of that length (4^n). But any computation of the target space will depend on the function we define for the object, and the target space can be very complex to analyze for big functional objects like a whole viral genome. For a protein, it is easier to define an appropriate function. Usually I prefer to stick to the "local" function, that is the biochemical activity. That is certainly the best solution for enzymes. gpuccio
gpuccio, Great OP. I Find it fascinating It is very similar to something I've been kicking around for comparing graphical representations of designed phenomena verses data resulting from a combination of random and algorithmic process. I too would very much like to see the evidence of false positives. Could a critic please link to an algorithm that yields positive number of bits using this calculation? If not could said critic provide evidence that such an algorithm is at least possible in theory. Once we have cleared that low hurdle we can begin the discussion of whether any of this is useful or at all relevant to biology, one thing at a time. Peace fifthmonarchyman
Let me ask something. If all english speakers died out and then chinese scientists discovered english texts, could they calculate its dFsci? Could they tell the meaningful from the gibberish? Also, what is meant by information in your calculation? The information content in this sentence is not found separately in each word but in their associations. So I could write "red happens glory fishing diamond wrangler" and although each word has meaning, the phrase itself has none. Can that be calculated or determined somehow? Can an objective number be placed on it? 12 units of meaning? Collin
gpuccio @51 -
1) It is possible to compute the target space, and therefore dFSCI, for specific search spaces by some reasonable, indirect method. Of course, each space should be analyzed with appropriate methods.
True. But how would you define the search space for an organism. For example, what's the search space for (say) a strain of the 'flu virus? Bob O'H
drc466: "gpuccio, I think there’s an issue with this: 2^2113 / 2^ 2944 = 2^831. IOWs, 831 bits. From a strictly mathematical sense, your ratio is inverted." Thank you! That is a stupid error. The ratio is correct (it's the ratio of the target space to the search space), but the result is wrong: it should be 2^-831. Then, the -log2 becomes 831 bits of functional complexity. Thank you really! That's exactly what I needed. I will immediately correct the OP. If you find any other error, please tell me. gpuccio
Dionisio: "Got to find how to submit my application." Encrypted, of course! :) gpuccio
#46 Reality It would be appreciated if "off topic" commentaries are explicitly labeled as OT (for example see post 36). Thus the readers can skip the post when they see the label 'OT' at the beginning of the comment. BTW, are you out of touch with the meaning of your pseudonym? :) Oops! Just realized I forgot to mark posts 45 and 49 as OT. My fault. Do as I say, not as I do. :) Dionisio
Friends: I am honored of the many comments, but still I would like to outline a few points in the OP which could be of interest, is someone wants to consider them (keith is exonerated, I don't want him to waste time in irrelevant things, when he has to work hard to reposting here). So, if the computation here is correct, a few interesting things ensue: 1) It is possible to compute the target space, and therefore dFSCI, for specific search spaces by some reasonable, indirect method. Of course, each space should be analyzed with appropriate methods. 2) Nobody seems to object that he knows some simple algorithm which can write a passage of 600 characters which has good meaning in English. Where are all those objections about how difficult it is to exclude necessity, and about how that generates circularity, and about how that is bound to generate many false positives? The balance at present: a) Algorithms proposed to explain Shakespeare's sonnet (or any other passage of the same length in good English): none. b) False positives proposed: none. c) True positives found: a lot. For example, all the posts in the last thread that were longer than 600 characters (there were a few). 3) We have a clear example that functional complexity, at least in the language space, is bound to increase hugely with the increase in length of the string. This is IMO an important result, very intuitive, but now we have a mathematical verification. Moreover, while the above reasoning is about language, I believe that it is possible in principle to demonstrate it also for other functional spaces, like software and proteins. Any comments? Maybe there is some room left in the intervals between one of keith's reposts and the following. :) gpuccio
Keith s, 1) Despite your admiration, natural selection serves as a subtractive force in a search - it reduces the number of spaces searched. It doesn't directly affect either the target space, or the search space, numerically - it simply reduces the number of tries (think of it as rolling 10 dice, and then removing all the 2's and 3's - you've reduced your ability to hit the target if the target requires 2's and 3's). Natural selection makes it harder for evolution to get a good result, not easier. One reason for the ready acceptance of neutral theory is to improve the odds hurt by NS. 2) "If you recognize it as meaningful English, conclude that it must be designed have function." When you fix this glaring error in your "logic", it is obvious you have completely mistated the issue. The process is detect function/specificity, calculate complexity, determine design - not detect design, calculate complexity, determine design. It is certainly possible to detect function (e.g. computer generates "sky is blue") without design (search was random). Your logic fails. gpuccio, I think there's an issue with this:
2^2113 / 2^ 2944 = 2^831. IOWs, 831 bits.
From a strictly mathematical sense, your ratio is inverted. drc466
20 jerry GP wrote that at least one of them is not a very expensive double agent? I wonder how much the blog pays them? I could use a few bucks now and then... maybe this is one of the 'make easy money online' ads I've seen out there? perhaps if I practice writing nonsense and asking senseless questions I could pretend being one of those guys and get hired by this blog as another anti-ID double agent? Got to find how to submit my application. Do they require a CV or resumè too? probably no photo ID or any other ID required, because they hire anti-ID pretenders. :) Dionisio
Reality: Was that a comment on my computation? If it is, it's very subtle. gpuccio
Mullerpr@33: "Who made rhe sieve that can size particles? Do you know many sieve like things in nature? How does size distribution become information?" Have you ever walked on a beach? That is only possible because of non-designed sieve-like thing. centrestream
In these and other posts and comments on his blog, Joe explains how to measure CSI. kairosfocus, Barry, and other IDists, gaze upon the brilliant words of your fellow traveler and ilk (just two of kairosfocus's favorite attack terms when he constantly lumps, slanders, and falsely accuses "evomats", atheists, agnostics, scientists, alleged "enablers", anti-ID blogs that he calls "fever swamps", etc., etc., etc.: http://intelligentreasoning.blogspot.com/2009/03/measuring-information-specified.html http://intelligentreasoning.blogspot.com/2014/04/measuring-csi-in-biology-repost.html There's more here: http://intelligentreasoning.blogspot.com/ Reality
20 Jerry
I often wonder that the hostility and inanity of most of the anti-ID people is due to that they may be double agents and produce incoherent irrelevant comments to make the pro-ID people look good. Or that they are mindless and egged on by someone who is a double agent.
Sometimes I've thought of that too. The irrational nature of the anti-ID attacks and the clueless commentaries of the 'n-D e' folks make me think those guys are paid double agents just pretending. Who knows? Maybe it's true? It would be disappointing to discover they use this tricky tactic in this blog. That's why I try hard to avoid falling into the traps of their senseless arguments, but sometimes can't resist the temptation to getting involved in the discussions too. My bad. Fortunately often my comments are completely ignored by most commenters, hence I don't last long in those discussions threads. :) Dionisio
gpuccio,
Why do you both repost and link to the original?
So that readers can see the comment in its original context, if they desire. keith s
gpuccio,
You repost a post where you say: “Yet both KF and gpuccio admit that you don’t even need to do the calculation.” as a comment to an OP where I have done the calculation?
Of course. That's my point. As I said above:
A correct computation of an irrelevant number is still irrelevant, so it doesn’t matter whether the computation is correct. Evolution includes selection, and your number fails to take selection into account.
keith s
5for, Here is my response to the second part of gpuccio's #37: gpuccio, to Learned Hand:
I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin.
gpuccio, That is true for Dembski’s CSI, but not your dFSCI. And as I pointed out above, Dembski’s CSI requires knowing the value of P(T|H), which he cannot calculate. And even if he could calculate it, his argument would be circular. Your “solution” makes the numerical value calculable, at the expense of rendering it irrelevant. That’s a pretty steep price to pay.
There are indeed different approaches to a formal definition of CSI and of how to compute it,
Different and incommensurable.
a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space.
Which is already a problem, because evolution does not seek out predefined targets. It takes what it stumbles upon, regardless of the “specification”, as long as fitness isn’t compromised.
b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function. c) I define a subset of FSI: those objects exhibiting digital information. d) I define dFSI the -log2 of the ratio of the target space / the search space.
This is why the numerical value of dFSCI is irrelevant. Evolution isn’t searching for that specific target, and even if it were, it doesn’t work by random mutation without selection. By omitting selection, you’ve made the dFSCI value useless.
e) I categorize the value of dFSI according to an appropriate threshold (for the system and object I am evaluating, see later). If the dFSI is higher than the threshold, I say that the object exhibits dFSCI (see later for the evaluation of necessity algorithms) To infer design for an object, the procedure is as follows: a) I observe an object, which has its origin in a system and in a certain time span. b) I observe that the configuration of the object can be read as a digital sequence. c) If I can imagine that the object with its sequence can be used to implement a function, I define that function explicitly, and give a method to objectively evaluate its presence or absence in any sequence of the same type. d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object. e) Once I have defined a function which is implemented by the object, I define the search space (usually all the possible sequences of the same length). f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function. h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance. My procedure is an empiric procedure. If an algorithmic explanation is available, that’s fine. If no one is available, I go on with my procedure.
Which immediately makes the judgment subjective and dependent on your state of knowledge at the time. So much for objectivity.
i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation). Again, this is useless because nobody thinks that complicated structures or sequences come into being by pure random variation. It’s a numerical straw man. l) If the functional complexity of the sequence I observe is higher than the threshold (IOWs, if the sequence exhibits dFSCI), and if I am aware of no explicit algorithm available in the system which can explain the sequence, then I infer a design origin for the object. IOWs, I infer that the specific configuration which implements that function originated form a conscious representation and a conscious intentional output of information form a designer to the object.
In other words, you assume design if gpuccio is not aware of an explicit algorithm capable of producing the sequence. This is the worst kind of Designer of the Gaps reasoning. It boils down to this: “If gpuccio isn’t aware of a non-design explanation, it must be designed!” keith s
keith: Why do you both repost and link to the original? Is that functional redundancy? A secret aspiration to robustness? An attempt to reach an atemporal singularity? gpuccio
Dionisio: Thank you, as always. :) gpuccio
keith: You are really beyond comprehension. You repost a post where you say: "Yet both KF and gpuccio admit that you don’t even need to do the calculation." as a comment to an OP where I have done the calculation? I will never understand you! gpuccio
Another comment worth reposting: Learned Hand, We’ve tumbled into a world where Logic is not spoken. KF and gpuccio claim that FSCO/I are dFSCI are useful. Gpuccio suggested a test procedure to prove this. Yet both KF and gpuccio admit that you don’t even need to do the calculation. It reveals absolutely nothing that you didn’t already know. Why would anyone bother? Gpuccio, can you come up with a test procedure in which dFSCI actually does something useful, for a change? It’s pretty clear why you and KF don’t submit papers on this stuff. Even an ID-friendly journal would probably reject it, unless they were truly desperate. keith s
5for: From the other thread:
Me_Think at #644: “gpuccio explained that dFSCI doesn’t detect design, only confirms if a design is real design or apparent design.” I don’t understand what you mean. dFSCI is essential to distinguish between true design and apparent design, therefore it is an essential part of scientific design detection. If you are not able to distinguish between true design and apparent design, you are making no design detection you are only making recognition of the appearance of design, which is not a scientific procedure because it has a lot of false positives and a lot of false negatives. So, just recognition of the appearance of design is not scientific design detection. On the contrary, dFSCI eliminates the false positives, and design detection becomes a scientific reality. Therefore, dFSCI is an essential part of scientific design detection. Surely you can understand such a simple concept, can you?
And from another post:
Learned Hand: I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin. This is true, simple and beautiful. It is the only objective example of something which can only derive from a conscious intentional cognitive process. There are indeed different approaches to a formal definition of CSI and of how to compute it, and of how to interpret the simple fact that it is a mark of design. I have tried to detail my personal approach, mainly by answering the many objections of my kind interlocutors. And yes, there are slight differences between my approach and, for example, Dembski’s, especially after the F. My approach is essentially a completely pragmatic formulation of the EF. In brief. a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space. b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function. c) I define a subset of FSI: those objects exhibiting digital information. d) I define dFSI the -log2 of the ratio of the target space / the search space. e) I categorize the value of dFSI according to an appropriate threshold (for the system and object I am evaluating, see later). If the dFSI is higher than the threshold, I say that the object exhibits dFSCI (see later for the evaluation of necessity algorithms) To infer design for an object, the procedure is as follows: a) I observe an object, which has its origin in a system and in a certain time span. b) I observe that the configuration of the object can be read as a digital sequence. c) If I can imagine that the object with its sequence can be used to implement a function, I define that function explicitly, and give a method to objectively evaluate its presence or absence in any sequence of the same type. d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object. e) Once I have defined a function which is implemented by the object, I define the search space (usually all the possible sequences of the same length). f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function. h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance. My procedure is an empiric procedure. If an algorithmic explanation is available, that’s fine. If no one is available, I go on with my procedure. i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation). l) If the functional complexity of the sequence I observe is higher than the threshold (IOWs, if the sequence exhibits dFSCI), and if I am aware of no explicit algorithm available in the system which can explain the sequence, then I infer a design origin for the object. IOWs, I infer that the specific configuration which implements that function originated form a conscious representation and a conscious intentional output of information form a designer to the object. m) Why? This is the important point. This is not a logical deduction. The procedure is empirical. It can be applied as it has been described. The simple fact is that, if applied to any object whose origin is independently known (IOWs, we can know if it was designed or not, so we use it to test the procedure and see if the inference will be correct) it has 100% specificity and low sensitivity. IOWs, there are no false positives. IOWs, there is no object in the universe (of which we can know the origin independently) for which we would infer design by this procedure and be wrong. Now, I will do a quick test. There are 560 posts in this thread. While I know independently that they are designed things, for a lot of reasons, I state here that any post here longer than 600 characters, and with good meaning in English, is designed. And I challenge you to offer any list of characters longer than 600, as many as you like, where you can mix two types of sequences: some are true posts in good English, with a clear meaning, taken from any blog you like. Others will be random lists of characters, generated by a true random character generator software. Well, hear me! I will recognize all the true designed posts, and I will never make a falsely positive design inference for any of the other lists. Now, you can try any trick. You can add posts in languages that I don’t know. You can add encryption of true posts that I will not recognize. Whatever you like. I will not recognize their meaning, and I will not infer design. They will be false negatives. You know, my procedure has low sensitivity. However, I will infer design for all the posts which have good meaning in English, and I will be right. And I will never infer design for a sequence which is the result of a random character generator. What about algorithms? Well, you can use any algorithm you like, but without adding any information about what has good meaning in English. IOWs, you cannot use the Weasel algorithm, where the outcome is already in the system. You cannot use an English dictionary, least of all a syntax correction software. Again, that would be recycling functional information, not generating it. But you can use an algorithm which generates sequences according to the Fibonacci series, if you like. Or an algorithm which takes a random character and generates lists with 600 same characters. Whatever you like. Because I am not using order as a form of specification. I am using meaning. And meaning cannot be generated by necessity algorithms. So, if I see a sequence of 600 A, I will not infer design for it. But for a Shakespeare sonnet I will. This is a challenge. My procedure works. It works not because is is a logical theorem. Not because I have hidden some keithian circularity in it (why should a circular procedure work, at all?). It works because we can empirically verify that it works. IOWs there could be sequences which are not designed, ad which are not obvious results of an algorithm, and which have high functional information. There could be. It is not logically impossible. But none of those sequences is known. They simply don’t exist. In the known universe, of all the objects of which we know the origin, only designed object will be inferred as designed by the application of my procedure. Again, falsify this statement if you can. Offer one false positive. One. Except for… Except, obviously, for biological objects. They are the only known objects in the universe which exhibit dFSCI, tons of it, and of which we don’t know the origin. But that is exactly the point. We don’t know their origin. But they exhibit dFSCI. In tons. So, I infer design for them (or at least, for those which certainly exhibit dFSCI). Is any algorithm known explicitly which could explain the functional information, say, in ATP synthase? No. There is nothing like that. There is the RV + NS. But it cannot explain that. Not explicitly. Only dogma supports that kind of explanation. The simple fact is: both complex language and complex function never derive from simple necessity algorithms. You cannot write a Shakespeare sonnet by a simple mathematical formula. You cannot find the sequence of ATP synthase by a simple algorithm. Maybe we could do it by a very complex algorithmic search, which includes all our knowledge of biochemistry, present and future, and supreme computational resources. We are still very distant from that achievement. And the procedure would be infinitely more complex than the outcome, and it would require constant conscious cognition (design). Well, I have not been so brief, after all. Now, if there are parts of my reasoning which are not clear enough, just ask. I am here. Or, if you just want to falsify my empirical procedure, offer a false positive. I am here. More likely, you can simply join keith in the group of the denialists. But at least, you will know more now of what you are denying.
I apologize for answering by quoting answers to others, but really I cannot follow a crowd of people who ask the same things. My main purpose here was to verify the computation with the help, or the criticism, of all. gpuccio
gpuccio OT: Sorry to post this OT link in your new OP, but I thought you would like to check it out - see two consecutive posts in this link: https://uncommondesc.wpengine.com/evolution/a-third-way-of-evolution/#comment-527182 Dionisio
Hi gpuccio I am afraid I can't comment on the calculation as my maths is not good enough, but I do wonder about the point of it. I had always thought dFSCI and its variants were a tool to detect design. But I think I remember you saying that is not the case, but dFSCI can be identified in all strings that we know are designed. So my question is where do you go from there? Lets's say you are right and you have discovered that this "thing" is present in all passages of recognisable text. What do we then use this finding to do? What can we achieve with it? (Given that we can't use it to analyse a passage of unrecognisable text (let alone a flagellum) to determine whether it was designed or not). 5for
What's the point? ID critics can't even manage to admit that their own posts here at UD are intelligently designed. I think perhaps it's time for us to consider seriously the idea that they aren't. Mung
keith s, are you kidding me? You said: "That’s as silly as asking “How does an unintelligent sieve know how to sort particles non-randomly by size?” Who made rhe sieve that can size particles? Do you know many sieve like things in nature? How does size distribution become information? You really don't think your thoughts through do you, keith? mullerpr
gpuccio:
NS can do almost nothing. Don’t believe the neo darwinian propaganda. They have nothing.
Baghdad Bob:
There are no American infidels in Baghdad. Never!
keith s
gpuccio,
Any comments on the computation itself?
A correct computation of an irrelevant number is still irrelevant, so it doesn't matter whether the computation is correct. Evolution includes selection, and your number fails to take selection into account. keith s
Gpuccio, I am no biologist, but reading as much as I can from James Shapiro's Evolution: A view from the 21st century, made it abundantly clear that genetic variation is far more complex and system driven than ever before realised. It seems as if the only gaps being filled are the ones cased by Darwinian ignorance. So sad to see their treasured dogma creating an explanatory vacuum in their mind... It must be painful, not to be able to move forward in science. mullerpr
True story about them not having anything.... Still waiting for Keith S to explain how unguided evolution built multiple stability control mechanism in cells..... Nothing yet Andre
Any comments on the computation itself? gpuccio
mullerpr: NS can do almost nothing. Don't believe the neo darwinian propaganda. They have nothing. At the biochemical level, where an enzyme is needed, or a wonderful biological machine like ATP synthase, NS is powerless. I have challenged anyone here to offer even the start of an explanation for just two subunits of ATP synthase, alpha and beta. Look here, point 3: https://uncommondesc.wpengine.com/intelligent-design/four-fallacies-evolutionists-make-when-arguing-about-biological-function-part-1/ Together, the two chains are 553 + 529 = 1082 AAs long. That is a search space of 4676 bits, greater than the Shakespeare sonnet. Together, they present 378 perfectly conserved aminoacid positions from LUCA to humans, which point to a target space of at least 1633 bits, probably greater than the Shakespeare sonnet (we cannot say for certain, because we have only lower thresholds of complexity, 831 bits for the sonnet, 1633 for the molecule, but the molecule seems to win!). Interesting, isn't it? gpuccio
Mullerpr Keith S is the village dirt worshipper, he believes that dirt not only made itself but magically became alive all by itself... matter in Keith's opinion can create CSI and can build anything using unguided procesess, highly complicated engineering marvels anything poof into existence and nothing can do it a trillion times, than a designer.... You've really missed nothing..... Andre
keith s:
The organisms with the beneficial mutations are the ones that do best at surviving and reproducing.
That is too vague to be of any use. Joe
keith s:
And as I pointed out above, Dembski’s CSI requires knowing the value of P(T|H), which he cannot calculate.
That is incorrect as CSI does not appear in that paper. Also it is up to you and yours to provide "H" and you have failed to do so. Stop blaming us for your failures.
The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection.
What is there to select if nothing works until it is all together? Why can't keith s show that the addition of natural selection would change the calculation? AGAIN, CSI and dFSCI exist REGARDLESS of how they arose. The point of using them as intelligent design indicators is because every time we have observed them and knew the cause it has ALWAYS been via intelligent design. We have NEVER observed nature producing CSI nor dFSCI. There isn't anything in ID that prevents nature from producing CSI and there isn't anything in the equation that neglects natural selection. keith s is all fluff. Joe
mullerpr:
How would natural selection translate into a search algorithm with a non-random search capability of selecting benefit?
That's as silly as asking "How does an unintelligent sieve know how to sort particles non-randomly by size?" The organisms with the beneficial mutations are the ones that do best at surviving and reproducing. keith s
jerry: Now you found out! If you are interested, keith is not very expensive... :) gpuccio
keith s: "Yes. The computation is useless, for reasons that I explain in the comments I just reposted." Good. My only interest here is that the computation is correct. :) gpuccio
It may seem that I have payed you to increase the number of the comments in my OP.
I often wonder that the hostility and inanity of most of the anti-ID people is due to that they may be double agents and produce incoherent irrelevant comments to make the pro-ID people look good. Or that they are mindless and egged on by someone who is a double agent. jerry
mullerpr: "I think the issue is the input info required just to be able to search for English words cannot be discounted… It should at least be a dictionary full of words to be added as input to the search algorithm." You are perfectly right. That's what Dembski and Marks call "added information". The best is always Dawkins, with his magic algorithm which can find a phrase that it already knows! And if they had the whole English dictionary in the algorithm, still they couldn only easily find the subset of good words, but the task of finding the subset of the subset, passages with good meaning, would remain unsurmountable. And if they had vast catalogues of well formed sentences, they could only find those sentences which they have, or similar to them. Still, a 600 character passage of original meaning would be out of range. That's why no algorithm can generate original language: algorithms have no idea of what meaning is, they can only recycle passively the meanings that have been "frozen" in them. That's why dFSCI is a sure marker of design. Unfortunately for keith! :) (keith, I am still waiting for a false positive. You can use this thread, so my comments will increase even more...) gpuccio
gpuccio:
Old stuff.
Devastating stuff. Why should my criticisms change when your dFSCI concept hasn't?
Have you anything to say about this post?
Yes. It repeats the errors that I point out in the comments I just reposted.
Have you anything to say about the computation?
Yes. The computation is useless, for reasons that I explain in the comments I just reposted. keith s
How would natural selection translate into a search algorithm with a non-random search capability of selecting benefit? I suppose survival or "more" successful replication also has a "say" in this so called "almost stochastic" system of evolving flagellum/s. That looks like a very information rich search scenario to me. The information from the combined "environment & survival" system fascinates me most... Just how much and what kind of information must be available in that system? (I suspect Keith S, don't see it to be problematic, but at least Jerry Fodor sees it) http://www.amazon.com/What-Darwin-Wrong-Jerry-Fodor/dp/0374288798 Did I miss something? mullerpr
keith s: It may seem that I have payed you to increase the number of the comments in my OP. :) Good job! gpuccio
Reposting this one, also: gpuccio, to Learned Hand:
I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin.
gpuccio, That is true for Dembski’s CSI, but not your dFSCI. And as I pointed out above, Dembski’s CSI requires knowing the value of P(T|H), which he cannot calculate. And even if he could calculate it, his argument would be circular. Your “solution” makes the numerical value calculable, at the expense of rendering it irrelevant. That’s a pretty steep price to pay. There are indeed different approaches to a formal definition of CSI and of how to compute it, Different and incommensurable. a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space. Which is already a problem, because evolution does not seek out predefined targets. It takes what it stumbles upon, regardless of the “specification”, as long as fitness isn’t compromised. b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function. c) I define a subset of FSI: those objects exhibiting digital information. d) I define dFSI the -log2 of the ratio of the target space / the search space. This is why the numerical value of dFSCI is irrelevant. Evolution isn’t searching for that specific target, and even if it were, it doesn’t work by random mutation without selection. By omitting selection, you’ve made the dFSCI value useless. e) I categorize the value of dFSI according to an appropriate threshold (for the system and object I am evaluating, see later). If the dFSI is higher than the threshold, I say that the object exhibits dFSCI (see later for the evaluation of necessity algorithms) To infer design for an object, the procedure is as follows: a) I observe an object, which has its origin in a system and in a certain time span. b) I observe that the configuration of the object can be read as a digital sequence. c) If I can imagine that the object with its sequence can be used to implement a function, I define that function explicitly, and give a method to objectively evaluate its presence or absence in any sequence of the same type. d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object. e) Once I have defined a function which is implemented by the object, I define the search space (usually all the possible sequences of the same length). f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function. h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance. My procedure is an empiric procedure. If an algorithmic explanation is available, that’s fine. If no one is available, I go on with my procedure. Which immediately makes the judgment subjective and dependent on your state of knowledge at the time. So much for objectivity. i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation). Again, this is useless because nobody thinks that complicated structures or sequences come into being by pure random variation. It’s a numerical straw man. l) If the functional complexity of the sequence I observe is higher than the threshold (IOWs, if the sequence exhibits dFSCI), and if I am aware of no explicit algorithm available in the system which can explain the sequence, then I infer a design origin for the object. IOWs, I infer that the specific configuration which implements that function originated form a conscious representation and a conscious intentional output of information form a designer to the object. In other words, you assume design if gpuccio is not aware of an explicit algorithm capable of producing the sequence. This is the worst kind of Designer of the Gaps reasoning. It boils down to this: “If gpuccio isn’t aware of a non-design explanation, it must be designed!” keith s
keith s: Old stuff. Have you anything to say about this post? Have you anything to say about the computation? gpuccio
Reposting another comment comparing the flaws of CSI, FSCO/I, and dFSCI: Learned Hand, to gpuccio:
Dembski made P(T|H), in one form or another, part of the CSI calculation for what seem like very good reasons. And I think you defended his concept as simple, rigorous, and consistent. But nevertheless you, KF, and Dembski all seem to be taking different approaches and calculating different things.
That’s right. Dembski’s problems are that 1) he can’t calculate P(T|H), because H encompasses “Darwinian and other material mechanisms”; and 2) his argument would be circular even if he could calculate it. KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it. Gpuccio avoids KF’s problem by explicitly leaving Darwinian mechanisms out of the numerical calculation. However, that makes his numerical dFSCI value useless, as I explained above. And gpuccio’s dFSCI has a boolean component that does depend on the probability that a sequence or structure can be explained by “Darwinian and other material mechanisms”, so his argument is circular, like Dembski’s. All three concepts are fatally flawed and cannot be used to detect design. keith s
keith s, you are not a very critical thinker are you? What about your objection supports your assertions? Can you highlight it maybe? My search for an argument failed, but you seem to be convinced there is an argument. So, go for, it... What would it be? mullerpr
Another comment from that thread worth reposting here: gpuccio, We’ve been over this many times, but the problem with your dFSCI calculations is that the number they produce is useless. The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection. No evolutionary biologists thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved. By neglecting selection, your dFSCI number is answering a question that no one is asking. It’s useless. There is a second aspect of dFSCI that is a boolean (true/false) variable, but it depends on knowing beforehand whether or not the structure in question could have evolved. You can’t use dFSCI to show that something couldn’t have evolved, because you already need to know that it couldn’t have evolved before you attribute dFSCI to it. It’s hopelessly circular. What a mess. The numerical part of dFSCI is useless because it neglects selection, and the boolean part is also useless because the argument that employs it is circular. dFSCI is a fiasco. keith s
I think the issue is the input info required just to be able to seach for English words cannot be discounted... It should at least be a dictionary full of words to be added as input to the search algorithm. Did I miss something? ???? mullerpr
gpuccio, You're repeating your earlier mistakes. I already showed, in the other thread, that the dFSCI calculation is a complete waste of time:
gpuccio, We can use your very own test procedure to show that dFSCI is useless. Procedure 1: 1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed. 3. Perform a pointless and irrelevant dFSCI calculation. 4. Conclude that the comment was designed. Procedure 2: 1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed. 3. Conclude that the comment was designed. The two procedures give exactly the same results, yet the second one doesn’t even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing. Even your own test procedure shows that dFSCI is useless, gpuccio.
keith s
Tim: What do you mean? I am trying to compute the target space, that is the set of sequences that have good meaning in English. IOWs, sequences which are made of English words. If a sequence is made of other groupings of characters which are not English words, it will not have a good meaning in English and it will not be part of the target space. Did I miss something? gpuccio
On the consciousness/brain interface I also agree and find it very interesting that some serious science projects aim at finding the physical structure that can enstantiate a mind from the fundamental property of consciousness in nature... Allen Institute’s Christof Koch on Computer Consciousness | MIT Technology Review http://www.technologyreview.com/news/531146/what-it-will-take-for-computers-to-be-conscious/ This I also see as just another agreement that mind is not matter, and mind is the only known design capable entity. mullerpr
This is just for fun, but uh, er, hm, how did you skip from those 30 character options right up to words?
For a 600 character text, we can therefore assume an average number of words of 120
Shouldn't that be "120 groups of characters?" What I mean is it seems like there are many many more factors of strings of characters that are not good ol' words? Was this just one aspect of your kind, conservatism in the math, or did I miss something? Tim
mullerpr: The quantum level is certainly fundamental to understand conscious processes. But I think that it works as an interface between consciousness and the brain, a la Eccles. That's how conscious experiences and material events can exchange information without violating any physical law. That's how we design and, very likely, how the biological designer designed biological things. gpuccio
mullerpr: "I am aware that Penrose see his method as a only non-reductionist and non-algorithmic, but still materialist… However I think Penrose, Nagel and Dembski (and others) are independently closing in on a post-materialist and/or post-classical mechanics explanation of reality. This looks like the stuff of a scientific revolution!" It does! And it is. :) And ID theory has a very important role in that scenario. gpuccio
Thank you gpuccio, this is exactly the way I interpret the work of Penrose in this regards. I also see that more and more people consider consciousness from a non-materialistic, non-reductionist perspective. I actually read Thomas Nagel's "Mind and Cosmos" before "Being as Communion", and it was refreshing to see Dembski incorporating the proposed teleology from Nagel. Is see this as a far more rational metaphysics than naturalism or materialism. I am aware that Penrose see his method as a only non-reductionist and non-algorithmic, but still materialist... However I think Penrose, Nagel and Dembski (and others) are independently closing in on a post-materialist and/or post-classical mechanics explanation of reality. This looks like the stuff of a scientific revolution! mullerpr
mullerpr: I am a big fan of Penrose's argument, even if I don't necessarily agree with his proposed explanatory model for consciousness. You may also be interested in this paper: http://www.blythinstitute.org/images/data/attachments/0000/0041/bartlett1.pdf which explores similar concepts. In my opinion, consciousness is a primary reality, which has its laws and powers. Its fundamental ability to always be able to go to a "metalevel" in respect to its contents and representations, due to the transcendental nature of the "I", is the true explanation for Turing's theorem and its consequences, including Penrose's argument. The same is true for design: it is a product of consciousness, and that's the reason why it can easily generate dFSCI, while nothing else in the universe can. The workings of consciousness use the basic experiences of meaning (cognition), feeling (purpose) and free will. Design is the result of those experiences. dFSCI is the magic result of them. gpuccio
Has anyone discussed/considered Roger Penrose's criticism of algorithmic consciousness as presented in his works like "The Emperor's New Mind". There he use the incompleteness theorems to show that mental actions exceed the ability to account for consciousness, like the ability of Shakespeare for example, in terms of algorithmic search. He propose a non-algorithmic quantum effect. I am not qualified to give more than an interested lay person's perspective. I am reading William Dembski's "Being as Communion" and also don't see this proposal from Penrose and Hameroff being discussed. mullerpr

Leave a Reply