Uncommon Descent Serving The Intelligent Design Community

Lobbing a grenade into the Tetrapod Evolution picture


A year ago, Nature published an educational booklet with the title 15 Evolutionary gems (as a resource for the Darwin Bicentennial). Gem number 2 is Tiktaalik, a well-preserved fish that has been widely acclaimed as documenting the transition from fish to tetrapod. Tiktaalik was an elpistostegalian fish: a large, shallow-water-dwelling carnivore with tetrapod affinities yet possessing fins. Unfortunately, until Tiktaalik, most elpistostegid remains were poorly preserved fragments.

“In 2006, Edward Daeschler and his colleagues described spectacularly well preserved fossils of an elpistostegid known as Tiktaalik that allow us to build up a good picture of an aquatic predator with distinct similarities to tetrapods – from its flexible neck, to its very limb-like fin structure. The discovery and painstaking analysis of Tiktaalik illuminates the stage before tetrapods evolved, and shows how the fossil record throws up surprises, albeit ones that are entirely compatible with evolutionary thinking.”

Just when everyone thought that a consensus had emerged, a new fossil find is reported – throwing everything into the melting pot (again!). Trackways of an unknown tetrapod have been recovered from rocks dated 10 million years earlier than Tiktaalik. The authors say that the trackways occur in rocks that “can be securely assigned to the lower-middle Eifelian, corresponding to an age of approximately 395 million years”. At a stroke, this rules out not only Tiktaalik as a tetrapod ancestor, but also all known representatives of the elpistostegids. The arrival of tetrapods is now considered to be 20 million years earlier than previously thought, and these tetrapods must now be regarded as having coexisted with the elpistostegids. Once again, the fossil record has thrown up a big surprise, but this one is not “entirely compatible with evolutionary thinking”. It is a find that was not predicted, and it does not fit at all into the emerging consensus.

“Now, however, Niedzwiedzki et al. lob a grenade into that picture. They report the stunning discovery of tetrapod trackways with distinct digit imprints from Zachełmie, Poland, that are unambiguously dated to the lowermost Eifelian (397 Myr ago). This site (an old quarry) has yielded a dozen trackways made by several individuals that ranged from about 0.5 to 2.5 metres in total length, and numerous isolated footprints found on fragments of scree. The tracks predate the oldest tetrapod skeletal remains by 18 Myr and, more surprisingly, the earliest elpistostegalian fishes by about 10 Myr.” (Janvier & Clement, 2010)

The Nature Editor’s summary explained: “The finds suggest that the elpistostegids that we know were late-surviving relics rather than direct transitional forms, and they highlight just how little we know of the earliest history of land vertebrates.” Henry Gee, one of the Nature editors, wrote in a blog:

“What does it all mean?
It means that the neatly gift-wrapped correlation between stratigraphy and phylogeny, in which elpistostegids represent a transitional form in the swift evolution of tetrapods in the mid-Frasnian, is a cruel illusion. If – as the Polish footprints show – tetrapods already existed in the Eifelian, then an enormous evolutionary void has opened beneath our feet.”

For more, go here:
Lobbing a grenade into the Tetrapod Evolution picture
http://www.arn.org/blogs/index.php/literature/2010/01/09/lobbing_a_grenade_into_the_tetrapod_evol

Additional note: The Henry Gee quote is interesting for the words “elpistostegids represent a transitional form”. In some circles, transitional forms are ‘out’ because Darwinism presupposes gradualism and every form is no more and no less transitional than any other form. Gee reminds us that in the editorial office of Nature, it is still legitimate to refer to old-fashioned transitional forms!

Comments
Measuring Information/specified complexity follow-up thread
Joseph
January 21, 2010, 02:37 PM PDT
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems. - Wm. Dembski, page 148 of NFL
In the preceding and following paragraphs, William Dembski makes it clear that biological specification is CSI: complex specified information. In the paper "The origin of biological information and the higher taxonomic categories", Stephen C. Meyer wrote:
Dembski (2002) has used the term “complex specified information” (CSI) as a synonym for “specified complexity” to help distinguish functional biological information from mere Shannon information--that is, specified complexity from mere complexity. This review will use this term as well.
Joseph
January 21, 2010, 02:35 PM PDT
Heinrich -- "It does raise the question of what is the difference between CSI and FCSI." The glossary actually attempts to address that, with FSCI being a subset of CSI and, to me, using functionality as a marker for specificity. Forgive the earlier snippiness, but, again to me, CSI was always meant to be Boolean as used by Dembski, i.e. IF CSI THEN Design = True. FSCI (and Jerry has a better handle on it than me) always struck me as being more descriptive and scalable.
tribune7
January 21, 2010, 02:18 PM PDT
"This is not true if there is selection involved at each step" What selection? Where could there have been selection in the first life form or before it? It was supposed to take place afterwards. Nakashima will help you out: some non-life chemical processes require more resources than others and are more efficient at using them, so they will be the ones to survive and use all the resources, and they are our true ancestors. It is not a single-celled ancestor that started it all but a chemical reaction. We have to learn to respect our proper ancestor more. Of course, I am being facetious.

Where did ATP synthase come from? It had to be there at the start. There is no stepwise assembly for these 2,000 amino acids that makes sense. How many steps were involved? And in order to use the stepwise argument, one needs to point out the near-infinite number of possibilities it could choose from and how we are just the lucky accident. But no one can point to one alternative life system, let alone many. If this hypothetical stepwise process really had only a few obvious choices at each hypothetical step, then that means that somehow life was built into the system, or the dice are fixed, and the theistic evolutionists would be ecstatic, because they would have found out how God did it; they would then join the ID ranks because they could see the actual design. But people like Nakashima would be dismayed. Their whole inner being is to show we are just the luck of the draw, whether it be the universe or the chemical reaction. Their stepwise procedure requires a myriad of possibilities, and so far there is only one. A universe so fine-tuned as to specify ATP synthase would be too much for them to take. And getting to ATP synthase is beyond the resources of the multi-verse.

ATP synthase is like a Shakespeare play, and you and others are saying the equivalent of a Shakespeare play can be built one word at a time, because as each word combination is assembled it will be selected by some unknown process for some unknown advantage. But in the end we will have Henry V and the famous speech at Agincourt. Just "We few, we happy few, we band of brothers" would be amazing, but the whole play? Oh, I realize this is a little hyperbole. We would only have Act IV, Scene 3. In case you think I am being discriminatory by picking on ATP synthase, there are lots of others that could take its place. It is just one of the largest elephants in the room, but there is a whole herd in there.
jerry
January 21, 2010, 02:03 PM PDT
tribune7 @ 283 -
Hey Heinrich go back and try Post 261 again and tell us where I mention FCSI.
Ah, my apologies - no F. It does raise the question of what is the difference between CSI and FCSI. And FI (Functional Information) too. CJYman @ 281 -
CSI will most definitely give us a value when all the variables are inputted and the equation is worked out, however it can also be treated in a boolean fashion as someone has stated above. ... However, we will have a “lowest possible value” of CSI for that event and if that value > 1, then we know that the event does indeed exhibit CSI. In this way, we can treat measuring for CSI, not as a strict comparative measurement (although this is possible and I believe useful), but as a yes/no answer to the question, “Does this event exhibit CSI?” or “Is this event intelligently designed?”
This looks like a recipe for confusion: you're using CSI to mean 2 different things. If CSI<1, then CSI=1! Informally, people will say things like this, but I think it would help if the two concepts were kept separate.
Heinrich
January 21, 2010, 01:41 PM PDT
Mr Vjtorley, @265 you reference the paper by Kalinsky. I am reminded of a joke about physicists where the punchline is "Assume a spherical cow of unit radius..." As much as Kalinsky is on the right track to follow up on Hazen's suggestion of how to measure functional information, it is the simplifying assumptions in his work which eventually make it useless for the purposes to which you wish to put it.

The chief simplifying assumption goes into estimating I_nat, the amount of functional information that can be created by a natural process. Kalinsky assumes this can be estimated by a number of repeated blind trials. This is the classic tornado-in-the-junkyard assumption, and it is not "generous" of him to make it. A more realistic assumption would be of a fitness function that included the laws of physics and chemistry, but no intelligent process. Unfortunately, such an assumption is not simple. The inputs to year 2 would be the outputs of year 1 (rather than starting from scratch, as in Kalinsky's model), and so on for 500 million years.

Kalinsky could of course justify his first assumption by stating that whatever products are created in year 1 will degrade before they can be used in year 2. But this gets to questions of specific rates of build-up and degradation of molecules in some combination of atmosphere, solar flux, water temperature and pressure, etc. If Kalinsky truly wanted to be 'generous' in his assumptions, he would assume that molecules do accumulate over time at rates related to their chemistry in specific conditions, and to the presence of other molecules with different rates of reaction with the same feedstocks.

Given a target of a 30,000-bit genome and a 500-million-year time period, the process of net accumulation of functional information in the biosphere would have to proceed at the glacial pace of 1 bit every 20,000 years or so. Can the laws of physics and chemistry, using the resources of the entire planet, accumulate 1 bit every 20,000 years?

The key word there is accumulate, and that is the idea that Kalinsky needs to incorporate into his assumptions to get more realistic estimates.
Nakashima
January 21, 2010, 01:29 PM PDT
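Nakashima's closing arithmetic can be checked directly. The 30,000-bit genome and 500-million-year window are the figures from Kalinsky's model as described in the comment above; this sketch simply divides one by the other:

```python
# Figures quoted from the comment above (Kalinsky's model as described there).
genome_bits = 30_000           # target genome size, in bits
window_years = 500_000_000     # time available for accumulation

years_per_bit = window_years / genome_bits
print(f"about {years_per_bit:,.0f} years per bit")
```

The exact quotient is about 16,700 years per bit, which the comment rounds to "1 bit every 20,000 years or so".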
"But when I used that definition and example," What did I change, and what example did I change? If you are referring to the whole genome, that was a joke, and I explained why no one in their right mind would ever do it or even be interested in it from an FSCI point of view. I did not change anything. Let me know what I said, so I can retract it if I was wrong or explain it better to you.
jerry
January 21, 2010, 11:28 AM PDT
"Some think they can disguise it under the pretext of saying they are just trying to learn and understand, but it is the lack of affirmation on anything that is the dead giveaway." Jerry, that is very true.
tribune7
January 21, 2010, 10:30 AM PDT
jerry at 282, “Let’s avoid any more unpleasant misunderstandings by getting a rigorous mathematical definition of how to measure your metric and what it means.” You were given a definition and an example. But when I used that definition and example, you objected and changed it. Hence my request for a detailed definition. I'm doing you the courtesy of taking your claims seriously enough to investigate them more thoroughly. The minimal courtesy I would expect in response is for you to provide a more detailed explanation where necessary. Are you genuinely interested in discussing how to identify design objectively or not?
Mustela Nivalis
January 21, 2010, 10:22 AM PDT
Jerry says, "And a series of events have the same probability as all the events happening at once." This is not true if there is selection involved at each step. My dice example at 169 shows clearly that if each step has a law which selects for some state over another, then the probability of the final state being reached through a set of steps is most definitely not the same as if they happened all at once.
Aleta
January 21, 2010, 10:13 AM PDT
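Aleta's dice point can be made concrete with a short simulation. This is an illustrative sketch, not code from the thread: "selection" is modelled by locking in each die once it shows the target face, versus demanding that every die show the target on the same throw:

```python
import random

def rolls_with_selection(n_dice, target=6, seed=0):
    """Stepwise 'selection': a die showing the target face is kept
    (locked in) and only the remaining dice are re-rolled."""
    rng = random.Random(seed)
    dice = [0] * n_dice              # 0 = not yet rolled
    rolls = 0
    while any(d != target for d in dice):
        dice = [d if d == target else rng.randint(1, 6) for d in dice]
        rolls += 1
    return rolls

def rolls_all_at_once(n_dice, target=6, seed=0):
    """All-at-once: every die is re-rolled until all show the target
    on the same throw."""
    rng = random.Random(seed)
    rolls = 0
    while True:
        rolls += 1
        if all(rng.randint(1, 6) == target for _ in range(n_dice)):
            return rolls

print(rolls_with_selection(10))   # typically a dozen or two rolls
print(rolls_all_at_once(4))       # typically on the order of 6**4 = 1296 throws
```

With locking, the expected number of rolls grows only slowly with the number of dice (it is the expected maximum of n geometric waits); demanding a simultaneous result grows as 6^n, which is the difference Aleta is pointing to.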
It is interesting that Jerry has upped his level of being rude and insulting. I don't see his opponents in these discussions behaving in this way. Just an observation.
Aleta
January 21, 2010, 09:08 AM PDT
"What is it with anti-IDists and their inability to comprehend what they read?" Oh, they comprehend, but they know that they have no belief of their own that they can hold up to dispute what is said, so they nit-pick and criticize in any way they think they can. Most of the time it is drivel, as pointed out to Mustela Nivalis. It is an interesting phenomenon that all their desires could be fulfilled if they only had something they could defend. But since they don't, what we see is the continual attempt to belittle even the littlest of statements made by the pro-ID people. It is the instant criterion to determine if someone is honest and legitimate here.

Some think they can disguise it under the pretext of saying they are just trying to learn and understand, but it is the lack of affirmation on anything that is the dead giveaway. So for you future anti-IDers who are lurking, know that this is how we can identify you very quickly. Agree that we have some good points and you will confuse us for a while, but then inconsistency will eventually reign and we will know. For me personally, I often ask questions or make answers that are designed to ferret out what one believes. It is not hard to bait the anti-IDer; they oblige very quickly. Neutral or pro-ID people behave much differently.
jerry
January 21, 2010, 08:38 AM PDT
Heinrich -- "But then tribune7 is suggesting that FCSI is Boolean." What is it with anti-IDists and their inability to comprehend what they read? Hey Heinrich, go back and try Post 261 again and tell us where I mention FCSI.
tribune7
January 21, 2010, 08:11 AM PDT
"Let’s avoid any more unpleasant misunderstandings by getting a rigorous mathematically definition of how to measure your metric and what it means." You were given a definition and an example. It is easy to take this example and expand it if anyone would want to but I made the point that this would be ridiculous just as many of the subsequent comments were ridiculous. The whole discussion of complete genomes was a joke, started by a clown and I humored him by providing a list to let the absurdity play out. Do you understand it was a way of showing up this clown as a clown. But some people seemed to think it was serious. And I did give a serious attempt by listing some criteria that would be the basis of a discussion. But the criteria were not FSCI specifically and I pointed out reasons for it not being appropriate. Any serious attempt at this would never be FSCI. It is not necessary. What you call unpleasant misunderstandings is the constant nit picking on irrelevant points. You were given an example, a calculation and told that the example exhausts all the resources of the universe and all the multi-verses they can logically dream of. Just one operating protein, actually a series of proteins acting in concert. Now if you want to continue and detail the information for every coding region, be my guess but I said it was absurd and it is. And a series of events have the same probability as all the events happening at once. I gave you the rationale for that. Yes the odds of winning the lottery twice is different if you once won the lottery but it begs the question of how did you win the first lottery. For someone to say that each step along the way is easy, I suggest the construction of sentences by random processes and here is a domain which we know contains untold number of functional elements but we have no such assurance for DNA and life. When I say what you say is drivel, is because you conveniently ignore anything that contradicts your proposition. 
Thus, to ignore criticism that is relevant means that subsequent comment are most likely drivel. So it was an accurate characterization and not necessarily unplesant misunderstandings but a consistent pattern.jerry
January 21, 2010, 07:48 AM PDT
I don't have time at the moment to defend my two cents ... but here they are anyway ... CSI will most definitely give us a value when all the variables are inputted and the equation is worked out, however it can also be treated in a boolean fashion as someone has stated above. If the variables are treated in a way so as to give the ID critic the maximum benefit of the doubt, the CSI will be calculated so as to produce a lower limit of the CSI for that event. Further acquisition of data can help us to refine the measurement and make it more accurate and precise. However, we will have a "lowest possible value" of CSI for that event and if that value > 1, then we know that the event does indeed exhibit CSI. In this way, we can treat measuring for CSI, not as a strict comparative measurement (although this is possible and I believe useful), but as a yes/no answer to the question, "Does this event exhibit CSI?" or "Is this event intelligently designed?"
CJYman
January 21, 2010, 06:49 AM PDT
Mustela, I think you mean "quantitative measurement" in 1).
Aleta
January 21, 2010, 06:33 AM PDT
tribune7 at 276, You mean like a boolean one? I was very clear about what I'm asking for, just as it is very clear that you continually ask questions in a transparent attempt to evade supporting your claims. If you've got an example calculation of CSI that meets the criteria I've specified, let's see it. If you have a measurement of a different qualitative characteristic that you believe uniquely identifies design, let's see that. If all you have is more evasions, we've already seen those.
Mustela Nivalis
January 21, 2010, 06:21 AM PDT
I'm confused again. In 264, vjtorley pointed me to a calculation of “functional information”, when I asked for a calculation of FCSI. I can only assume that they are the same. But then tribune7 is suggesting that FCSI is Boolean, but that's something different. Can someone explain? What have I missed?
Heinrich
January 21, 2010, 06:20 AM PDT
Mustela -- "Why will you not either produce such a calculation" You mean like a boolean one?
tribune7
January 21, 2010, 06:14 AM PDT
tribune7 at 274, Mustela, you really don’t understand this, do you? Here's what I do understand: 1) Dembski and other ID proponents (including you, as cited earlier in this thread) assert that CSI is a qualitative measurement that uniquely identifies design. 2) I have yet to find an example of CSI, as described in No Free Lunch, calculated for a real biological artifact, taking into account known physics, chemistry, and evolutionary mechanisms. Here's what I don't understand: Why will you not either produce such a calculation or simply admit that you do not have one?
Mustela Nivalis
January 21, 2010, 05:51 AM PDT
"A boolean measurement works as well, though." Mustela, you really don't understand this, do you?
tribune7
January 21, 2010, 05:34 AM PDT
jerry at 262, “Those of us on the other side of the ID debate still want to see how FCSI can be calculated for a realistic biological problem. Can you show us a calculation?” "It has already been done." I dispute that. Thus far, despite reading all of the relevant material and asking repeatedly for further assistance on this blog, I have never seen a calculation of CSI, as described in No Free Lunch, for a real biological artifact that takes into account known physics, chemistry, and evolutionary mechanisms. If you have such a calculation, cite or reproduce it, don't just assert it.
Mustela Nivalis
January 21, 2010, 03:54 AM PDT
tribune7 at 261, Mustela, we may be talking past each other. You seem to see CSI as being designed to create a graduated scale of items according to design content. I see it as being meant to be more of a Boolean logic gate — if it has CSI it is designed, if it doesn’t, it may/may not be designed. I don’t think anybody has tried to scale it, or felt it necessary to do so. Fair enough. Since it appears to be measured in bits, from what I've read, it seems that it should be possible to rank objects in CSI order. A boolean measurement works as well, though. Do you have a worked example of such?
Mustela Nivalis
January 21, 2010, 03:52 AM PDT
jerry at 259, “In that case, the naive calculation of two to the number of bits required to describe the artifact is not a good definition of FCSI. There are several known and observed types of mutations that can increase the size of a genome, for example, including fully replicating it. Those would increase FCSI by that measure, with no intelligent intervention required.” "What drivel!" There is no need for that kind of response in what should be a civil discussion. "The FSCI applies to the specific coding areas, not the whole genome." That was not part of your original definition of FSCI (is it FSCI or FCSI?). Are you now saying that FCSI is defined as the number of bits required to describe the coding regions in a genome, and that this measurement uniquely identifies intelligent intervention? Let's avoid any more unpleasant misunderstandings by getting a rigorous mathematical definition of how to measure your metric and what it means.
Mustela Nivalis
January 21, 2010, 03:48 AM PDT
Sorry, that should have been "tribune7 at 258" not 218.
Mustela Nivalis
January 21, 2010, 03:43 AM PDT
tribune7 at 218, Complexity is not CSI. Do you believe complexity can be calculated? I don't care. I'm just here for the CSI. What predictions does (ID) make? That nothing that is irreducibly complex will be found not to have been designed. You left out the first question: What is the scientific theory of ID? You'll also need to define exactly what you mean by "irreducibly complex" and identify one or more objects that meet those criteria. That no pattern of a particular complexity showing a specificity — and functionality is a fine marker for it — the probability of which occurring being, well, very, very low, will be found not to have been designed. This sounds like CSI, and there is still no extant example of CSI, as described in No Free Lunch, being calculated for a real biological artifact, taking into account known physics, chemistry, and evolutionary mechanisms. How would a test of those predictions put ID at risk of disconfirmation? It would show that the things that ID says show design, don't. Unless you can identify a specific biological construct that definitely exhibits irreducible complexity according to your (as yet to be elucidated) theory, this isn't a prediction that would serve to falsify it. If a particular biological artifact is explained through non-intelligent mechanisms, you can simply say "Okay, that one wasn't really irreducibly complex, but this one over here is." More rigor is needed.
Mustela Nivalis
January 21, 2010, 03:42 AM PDT
"Sure. Jerry (#155) included a link to a paper by K. D. Kalinsky, entitled, 'Intelligent Design: Required by Biological Life?' which can also be accessed online at http://www.arn.org/blogs/index.....cal_life_r . It's well worth reading; you'll find the biological applications in section V."
That's for functional information. How does this differ from FCSI? Indeed, how does it differ from CSI? And what biological relevance does this have? I know what I'm about to write has been pointed out numerous times, but I've never seen an answer that is convincing. But here goes (again)... These calculations of FI / CSI / FCSI are based on the tornado-in-a-junkyard scenario: if we randomly pick one configuration, what is the probability of picking one from our target? But evolution doesn't work like that; it's a process of improving fitness. So how do these calculations relate to the likelihood of an evolved structure? As far as I can tell (and I'm willing to be corrected), Dr. Dembski accepts this criticism, which is why he has pursued his active information ideas to quantify how much better natural selection does than random search, and then move the argument on to whether this improvement can be achieved through natural means.
Heinrich
January 21, 2010, 02:10 AM PDT
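Heinrich's contrast between a one-shot random draw and a fitness-improving search is the point behind Dawkins's well-known "weasel" demonstration. The sketch below is an illustration under arbitrary assumptions (the target phrase, 5% mutation rate, and population of 100 are my choices, not anything from the thread or from Dembski's work):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Number of character positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def cumulative_search(pop_size=100, mut_rate=0.05, seed=1):
    """Best-of-generation hill-climb; returns generations used."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    gen = 0
    while parent != TARGET:
        gen += 1
        children = [
            "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        # keep the parent in the pool (elitism) so fitness never decreases
        parent = max(children + [parent], key=fitness)
    return gen

print(cumulative_search())
```

Cumulative selection reaches the 28-character target in on the order of a hundred generations, while a single blind draw has a 1-in-27^28 chance per attempt; whether either toy model tells us anything about biology is exactly what the thread is disputing.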
VJ at 264. As usual, you've been succinct. In fact, you've been too succinct. There will be immediate objections. Most probably those objections will be that you haven't been succinct at all, or that at a minimum, you've flagrantly missed the point altogether. Whatever the objection, it should be understood that this is a position for the opposition that must be defended at all costs. There is, of course, a reason for this. As anyone with even a furball of human intuition can attest, the position being defended (that ID folks can't do the math, and therefore ID isn't real science) is hallowed ground, and cannot under any circumstances be relinquished. At the same time, all the tough questions posed to the opposition on this thread go un-noticed, ignored, and unanswered. This all takes the form of the defenders' skillful parsing of words, reaching the level of performance art, all self-servingly disguised as a legitimate search for clarity and understanding. Obfuscation rules the day... as well as a demonstrable refusal to address the pertinent questions asked in return. It's a strategist's field day.
Upright BiPed
January 20, 2010, 11:55 PM PDT
"I thought most ID supporters were fairly certain that function would eventually be found for the whole genome or most of it – that there is no such thing as junk DNA." I am not one to say that all non-coding DNA will have function, even though a high percentage of it is transcribed. Maybe much will, but that is to be determined. It is probably not 0% and it is probably not 100%. They keep on learning every day about different genomes.
jerry
January 20, 2010, 10:14 PM PDT
"Or are you just saying that operationally, we can’t calculate FCSI unless we’ve already determined function. Once we determine the function of non-coding regions we can expand the FCSI calculation to those areas as well? Thanks in advance for a clarification!" You got it right.
jerry
January 20, 2010, 10:09 PM PDT