Uncommon Descent Serving The Intelligent Design Community

Lobbing a grenade into the Tetrapod Evolution picture


A year ago, Nature published an educational booklet with the title 15 Evolutionary Gems (a resource for the Darwin bicentennial). Gem number 2 is Tiktaalik, a well-preserved fish that has been widely acclaimed as documenting the transition from fish to tetrapod. Tiktaalik was an elpistostegalian fish: a large, shallow-water-dwelling carnivore with tetrapod affinities yet possessing fins. Unfortunately, until Tiktaalik, most elpistostegid remains were poorly preserved fragments.

“In 2006, Edward Daeschler and his colleagues described spectacularly well preserved fossils of an elpistostegid known as Tiktaalik that allow us to build up a good picture of an aquatic predator with distinct similarities to tetrapods – from its flexible neck, to its very limb-like fin structure. The discovery and painstaking analysis of Tiktaalik illuminates the stage before tetrapods evolved, and shows how the fossil record throws up surprises, albeit ones that are entirely compatible with evolutionary thinking.”

Just when everyone thought that a consensus had emerged, a new fossil find is reported – throwing everything into the melting pot (again!). Trackways of an unknown tetrapod have been recovered from rocks dated 10 million years earlier than Tiktaalik. The authors say that the trackways occur in rocks that: “can be securely assigned to the lower-middle Eifelian, corresponding to an age of approximately 395 million years”. At a stroke, this rules out not only Tiktaalik as a tetrapod ancestor, but also all known representatives of the elpistostegids. The arrival of tetrapods is now considered to be 20 million years earlier than previously thought and these tetrapods must now be regarded as coexisting with the elpistostegids. Once again, the fossil record has thrown up a big surprise, but this one is not “entirely compatible with evolutionary thinking”. It is a find that was not predicted and it does not fit at all into the emerging consensus.

“Now, however, Niedzwiedzki et al. lob a grenade into that picture. They report the stunning discovery of tetrapod trackways with distinct digit imprints from Zachełmie, Poland, that are unambiguously dated to the lowermost Eifelian (397 Myr ago). This site (an old quarry) has yielded a dozen trackways made by several individuals that ranged from about 0.5 to 2.5 metres in total length, and numerous isolated footprints found on fragments of scree. The tracks predate the oldest tetrapod skeletal remains by 18 Myr and, more surprisingly, the earliest elpistostegalian fishes by about 10 Myr.” (Janvier & Clement, 2010)

The Nature Editor’s summary explained: “The finds suggest that the elpistostegids that we know were late-surviving relics rather than direct transitional forms, and they highlight just how little we know of the earliest history of land vertebrates.” Henry Gee, one of the Nature editors, wrote in a blog:

“What does it all mean?
It means that the neatly gift-wrapped correlation between stratigraphy and phylogeny, in which elpistostegids represent a transitional form in the swift evolution of tetrapods in the mid-Frasnian, is a cruel illusion. If – as the Polish footprints show – tetrapods already existed in the Eifelian, then an enormous evolutionary void has opened beneath our feet.”

For more, go here:
Lobbing a grenade into the Tetrapod Evolution picture
http://www.arn.org/blogs/index.php/literature/2010/01/09/lobbing_a_grenade_into_the_tetrapod_evol

Additional note: The Henry Gee quote is interesting for the words “elpistostegids represent a transitional form”. In some circles, transitional forms are ‘out’ because Darwinism presupposes gradualism and every form is no more and no less transitional than any other form. Gee reminds us that in the editorial office of Nature, it is still legitimate to refer to old-fashioned transitional forms!

Comments
jerry at 172, “I am sorry but your analogy does not work because it is not parallel.” It works to show that your claim that the two probabilities are the same is incorrect. “It assumes two things, that sub parts are all functional and are added to a functional part so each part on the way up is functional.” That is core to modern evolutionary theory. Only those organisms that survive to reproduce are represented in subsequent generations. Every generation consists of viable individuals, by definition. “This is a new form of irreducible complexity in the sense that a complicated part, a protein or protein RNA polymer combination, has countless functional sub parts. That could be true but what are the odds of finding the final combination at each stop along the way.” Only those viable combinations will be represented in subsequent populations. Bad combinations result in the death of the host. Nature is quite profligate in that way. “And for proteins how big is each new addition and how big was the first step. It is the same probability unless you suppose as in Yahtzee that there are a myriad of possible combinations that are appropriate.” The great diversity of life we observe suggests strongly that this is the case. However, you've gotten to an important point. If you want to posit CSI as a characteristic that is unique to designed objects, you must calculate it in such a way that non-designed mechanisms are taken into account. We have observed significant amounts of mutation, including speciation, both in the lab and in the wild. Any calculation of CSI that ignores those mechanisms does not reflect reality and hence can't be used to identify design in the real world. If you want to retain the naive calculation of two to the length of the genome for CSI, you need to provide solid evidence that those mechanisms we observe cannot affect the probability of the artifact under consideration forming. That is going to be difficult because we have observed mutations that change the length of genomes. Using the naive calculation, this means that natural processes can create CSI.Mustela Nivalis
January 18, 2010 at 11:50 AM PDT
Mustela, I read your entire post. You presented an intelligently designed search as an example of a random search. A player achieves function (a full house) by intelligently picking up the dice that don't match the intended and necessary pattern. If pointing this out was inappropriate, I do apologize.Upright BiPed
January 18, 2010 at 11:43 AM PDT
Clive, The fallacy of Cabal's inane objection is that he is claiming that what can happen by random events could not happen with an intelligent intervention, which by the way could observe some random processes and maybe get some ideas. Sort of shows how desperate they have to get sometimes.jerry
January 18, 2010 at 11:28 AM PDT
"Warning! Warning! Analogy Alert! Warning! Warning!" I am sorry but your analogy does not work because it is not parallel. It assumes two things: that sub parts are all functional and are added to a functional part, so each part on the way up is functional. This is a new form of irreducible complexity in the sense that a complicated part, a protein or protein RNA polymer combination, has countless functional sub parts. That could be true, but what are the odds of finding the final combination at each stop along the way? And for proteins, how big is each new addition and how big was the first step? It is the same probability unless you suppose, as in Yahtzee, that there are a myriad of possible combinations that are appropriate, or you know the end result and you have multiple chances at each step. Are there alternative worlds in biology? We have no evidence for that, so to assume it is so is just another form of speculation. What kind of resources would it take to build something like that if multiple worlds were feasible? No one has identified another DNA world, let alone another life/chemical world, so the best we can assume is that there might be a couple. Thus, to get to the one we have, you can roll the dice once or roll them umpteen bazillion times and the probabilities would essentially be the same; or, lo and behold, you might reach a countless bazillion dead ends instead of the one or few viable end states. Take a number, 49728506937167167054, which I got by hitting keys in a frantic fashion. How long would it take to get to this number, one digit at a time, when you were told each time a digit was correct? On average it would take roughly 5-6 tries for each digit, on the order of 100 rolls in total, but only if you knew the right digit when it was rolled. Now it would take much less if you knew the number ahead of time and rolled several dice each time and took only appropriate numbers, as Dawkins does in Weasel.
But we do not know what the number is, so even with multiple rolls we do not know if we are on the correct path to the right number. So the probability of getting this number all at once, or through multiple rolls without feedback, is still 1 in 10^20. And we do not know, if we choose a certain number, that it will not lead to a dead end, because unlike Weasel there may be no viable path for a correct number once you are down a certain path. If at 4972 you choose 3 instead of 8, all the subsequent rolls are useless if 49723 led to no viable end state. No, for your scenario to have any semblance of meaning it must assume a bazillion to the umpteenth power of viable states. We know we have one. Are there any others? I doubt there are many, if any. Good try though. I know it is the standard answer but it does not hold up under examination. Since you claim you are a computer programmer, try a sort of real-life example. Generate combinations of letters of the alphabet and use a dictionary to determine when a viable word is formed. See how long it takes to get a 175-word, ten-sentence paragraph and then see how it reads. I do not know how you could write a program that would recognize a coherent sentence, let alone a coherent paragraph such that each subsequent sentence related to the one before it. But I bet those monkeys would still be typing when the next millennium came around. We know there is more than one viable paragraph. We do not know if there is more than one viable life system.jerry
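Jerry's guess-with-feedback arithmetic can be checked with a short simulation. This is a sketch only: the 20-digit target is taken straight from his scenario, and the strategy of trying each digit value once (no repeats) per position is assumed from his "average of 5 times each" parenthetical.

```python
import random

def rolls_with_feedback(target: str, seed: int = 0) -> int:
    """Count rolls needed to find `target` digit by digit, when an oracle
    confirms each correct digit (jerry's scenario, without-replacement)."""
    rng = random.Random(seed)
    total = 0
    for digit in target:
        candidates = list("0123456789")
        rng.shuffle(candidates)            # try each value once, no repeats
        total += candidates.index(digit) + 1
    return total

target = "49728506937167167054"            # jerry's 20-digit number
trials = [rolls_with_feedback(target, seed=s) for s in range(10_000)]
avg = sum(trials) / len(trials)
print(round(avg, 1))                       # ~110 rolls: about 5.5 per digit,
                                           # nothing like the 10**20 expected
                                           # without feedback
```

With feedback the search is linear in the number of digits (about 5.5 tries each); without feedback it is exponential, which is exactly the distinction the two sides of this thread are arguing over.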
January 18, 2010 at 11:21 AM PDT
Upright Biped at 166, translation: “Just because I am going to completely ignore the issue and present a designed search as an example of random processes, doesn’t mean random processes can’t do the same thing” You really should read the entire post before attempting to flame. Jerry said "Saying it happened step wise does not solve the problem. The probabilities are essentially the same." I noted that his claim was incorrect and provided an analogy explaining why. If you'd like to defend Jerry's claim, I'd be interested to hear your argument.Mustela Nivalis
January 18, 2010 at 11:09 AM PDT
Edit - last sentence was supposed to say "ruling out pure chance" and not just "ruling out chance". My apologies.Aleta
January 18, 2010 at 11:07 AM PDT
I'd like to add to what Mustela is saying. When Jerry at 160 wrote, "Saying it happened step wise does not solve the problem. The probabilities are essentially the same.", Mustela replied, "Evolution operates on existing, working components and proceeds in small, incremental steps. The odds of getting something that works by making a small change to something that already works are much better than getting something that works by randomly combining a large number of components." Mustela is right, and Jerry is wrong in regards to how the real world works. If there are changes along the way that lead via steps from a starting state to an ending state, then the probabilities of something happening stepwise are most definitely not the same as having the result happen all at once. Over on the ID and Common Descent thread, I gave the following example, which is like the Yahtzee example, but a little more straightforward: [start repost] Here's an example to provide some more detail to explain what I mean. Throw 10 dice. What is the probability that they will be all sixes? Easy problem: (1/6)^10 = 1 out of 60,000,000, approximately. Harder problem: The ten dice are in a box which periodically jiggles hard enough to toss all the dice. However, the sides with a six on them are sticky, so if a die comes up six, it sticks. Now the box jiggles five times. What is the probability that after five jiggles you have all sixes? This is more complicated. First, for any one throw you need to calculate the probability of getting no sixes, one six, two sixes, etc., so you have to use the binomial probability theorem. Then, for each subsequent throw you have a different number of dice being jiggled (ten if no sixes, nine if one six, etc.), so you have both a continued use of the binomial probability theorem and a complex probability tree that branches ten times on the first throw and some varying number of times on each of the subsequent throws.
This second situation is more like the real world: it has a sequence of events (it models the passage of time in a very simple way) and it has laws (the six side sticks) that add an element of direction and selection. At a vastly more complicated level, this is what ID advocates need to be trying to do if they want to meaningfully provide a probability calculation that might imply design. Such calculations need to take into account a sequence of steps over time, and they need to take into account that various laws of physics and chemistry create changes along those series of steps that then change the types of changes that can further happen. Only by trying to do such will ID advocates begin working towards an accurate mathematical model of the world. This is why Mustela and others are asking for a method, one that can be reliably used by any interested party, to measure the CSI of a biological entity. Until such a method is developed, shared, and tested by multiple sources, the idea of CSI will be unusable. [end repost] Note also that in reply to Mustela's comment that "no biologist suggests that these genomes came together de novo or randomly", Jerry said, "Well then just how did these systems come together?" That is a legitimate question, and the big question of interest, but it has no bearing on the issue at hand, which is that calculations based on a pure chance hypothesis are irrelevant and certainly are not an argument for ID. That is, ruling out pure chance (which no one believes happens anyway) can leave us in a state of not knowing how something happens, but it is not evidence that therefore ID happened.Aleta
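Aleta's two dice problems can in fact be computed exactly. Since each die behaves independently, the sticky-dice case reduces to a single per-die probability and the full binomial probability tree is not needed. A minimal check in Python (the sticky-six reading of the example is assumed):

```python
# Exact answers for Aleta's two problems. Each die is independent, so the
# "sticky" case needs no branching probability tree: one die shows a six
# after k jiggles with probability 1 - (5/6)**k.
single_throw = (1 / 6) ** 10                # all ten sixes in one throw
five_jiggles = (1 - (5 / 6) ** 5) ** 10     # all ten sixes within 5 jiggles

print(f"one throw : 1 in {1 / single_throw:,.0f}")   # 1 in 60,466,176
print(f"5 jiggles : 1 in {1 / five_jiggles:,.0f}")   # roughly 1 in 170
```

The stepwise, sticky version improves the odds by a factor of several hundred thousand, which is Aleta's point about sequences of events plus selection-like laws.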
January 18, 2010 at 10:55 AM PDT
LoL Sorry Cabal, I am still amused. You make a case that it would require an incredible intelligence to create life, so vast and incredible in fact, that none was necessary! That's just rich.Upright BiPed
January 18, 2010 at 10:46 AM PDT
Cabal, where did we get the idea for a wheel, or an alphabet?Upright BiPed
January 18, 2010 at 10:36 AM PDT
"Warning! Warning! Analogy Alert! Warning! Warning! The contents of this analogy are to be taken as demonstrative only, not to suggest that dice games are identical to biology in every respect."
translation: "Just because I am going to completely ignore the issue and present a designed search as an example of random processes, doesn't mean random processes can't do the same thing"Upright BiPed
January 18, 2010 at 10:27 AM PDT
Cabal,
How could the complexity of life enter anyone's mind? Before a satisfactory explanation (a theory about the stages of practical experience and intellectual effort required) is presented, I can only assume it all is magic beyond comprehension. Besides, if my understanding of the implications of the theory of FCSI is right, an impossible amount of human-style work would be required for design of even the most primitive creature. Just-so stories about design and designers fail to impress me.
Of course we know that the carriage, car, and knife were invented by intelligent people. The other option is that the flint turned into you and me by itself. Just so stories about flint turning into The Flintstones fail to impress me, much less flint turning into you and me.Clive Hayden
January 18, 2010 at 10:22 AM PDT
jerry,
If you want the time and place and method of the designer in action then you will have to go here. https://uncommondescent.com.....ent-342686
Out of curiosity I took a look at it, but it made no sense to me. One thing all arguments for ID forget is the difference between human design and hypothetical designers designing from scratch without having any previous knowledge about what they want to design. Just look at us mere mortals. Can anybody point to an example of complex human design where no previous knowledge was required? Before biology existed, how could any designer know where to begin? Where would he have got the idea that biology was a possibility? Would we ever have invented automobiles unless we had seen carriages? A wheelbarrow before we had seen a wheel? A knife before we had seen (and used) a flake of flint? How could the complexity of life enter anyone's mind? Before a satisfactory explanation (a theory about the stages of practical experience and intellectual effort required) is presented, I can only assume it all is magic beyond comprehension. Besides, if my understanding of the implications of the theory of FCSI is right, an impossible amount of human-style work would be required for design of even the most primitive creature. Just-so stories about design and designers fail to impress me.Cabal
January 18, 2010 at 10:17 AM PDT
jerry at 160, “Once again, no biologist suggests that these genomes came together de novo or randomly.” “Well then just how did these systems come together?” Incrementally, building on previous versions that worked well enough to reproduce in their environments. Remember that modern evolutionary theory deals with how allele frequencies in populations change over time. It presumes the existence of imperfect replicators and differential reproductive success. It does not postulate that complex molecules like genomes arise de novo. “Saying it happened step wise does not solve the problem. The probabilities are essentially the same.” Mathematically speaking, that's incorrect. Warning! Warning! Analogy Alert! Warning! Warning! The contents of this analogy are to be taken as demonstrative only, not to suggest that dice games are identical to biology in every respect. Have you ever played Yahtzee? The goal of the game is to get a set of dice with particular patterns (four of a kind, five of a kind, full house, run, etc.). The odds of getting one of these patterns by chance on a single roll of five dice are low. In Yahtzee, though, you get three rolls and, most importantly, you can choose to re-roll only a subset of the dice. This greatly increases the odds of getting one of the patterns. Evolution operates on existing, working components and proceeds in small, incremental steps. The odds of getting something that works by making a small change to something that already works are much better than getting something that works by randomly combining a large number of components. Therefore, any calculation of CSI for real biological constructs must take into consideration the incremental processes identified by modern evolutionary theory. Calculations based on de novo creation of large molecules are simply not applicable to the real world.Mustela Nivalis
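Mustela's Yahtzee comparison is easy to check numerically. The sketch below is a Monte Carlo estimate under an assumed strategy (keep the most common face, re-roll the rest), comparing the odds of five of a kind in a single roll against three rolls with selective re-rolling:

```python
import random
from collections import Counter

def yahtzee_in_three(rng: random.Random) -> bool:
    """Roll five dice up to three times, keeping whatever matches the most
    common face so far (Mustela's point: retaining working sub-results
    changes the odds)."""
    dice = [rng.randint(1, 6) for _ in range(5)]
    for _ in range(2):                     # two re-rolls allowed
        face, count = Counter(dice).most_common(1)[0]
        if count == 5:
            return True
        dice = [face] * count + [rng.randint(1, 6) for _ in range(5 - count)]
    return len(set(dice)) == 1

rng = random.Random(42)
n = 200_000
hits = sum(yahtzee_in_three(rng) for _ in range(n))
print(f"single roll: {6 / 6**5:.4%}")      # 0.0772% (6 out of 6**5)
print(f"three rolls: {hits / n:.2%}")      # ~4.6%, roughly 60x better
```

The exact single-roll probability is 6/6^5 = 1/1296; selective re-rolling lifts that to a few percent, which is the whole force of the analogy.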
January 18, 2010 at 09:50 AM PDT
jerry
Because FCSI or FSCI is so simple and clear and easy to calculate, many of us recommend it as a substitute for the much more general concept of CSI when discussing evolution or origin of life.
Many of us may recommend FSCI/FCSI, but what do qualified ID researchers like Dr. Dembski think of it? On another occasion he described your contributions in the following way:
Your approach, by contrast, from what I can make of it, strikes me as hamfisted.
osteonectin
January 18, 2010 at 09:22 AM PDT
Hi Jerry, You posted a peer-reviewed paper by Durston, Chiu, Abel and Trevors entitled "Measuring the functional sequence complexity of proteins". If I remember right, that paper is where they measure the functional sequence complexity of proteins. (Although I don't think our friends here intend to read it.) If I remember correctly, Durston, Chiu, Abel and Trevors measure some 30 or 40 proteins as examples. Also if I remember right, the E. coli proteome is something like a couple thousand proteins and the pan-proteome is on the order of near 20,000 proteins. Do you think you could have that done by noon so this conversation can move along? I'm ready for these guys to get back to telling us how all those proteins formed by unguided processes and then coordinated themselves into a functioning whole. Since their idea is the ruling paradigm (and must be accepted as fact) I am certain they are just waiting for a chance to show their work (where the results confirm their conclusion). Besides that, the conversation grows stale at their question. It's a little like doubting the veracity of a light year because we haven't measured the distance to all the stars.Upright BiPed
January 18, 2010 at 09:13 AM PDT
If you had been here for a long time, you would know that some of us want to abandon the term CSI for evolution and use FCSI or FSCI instead, which is a subset of CSI and easily measured. The reason is that CSI is too general a concept to be operationalized for every instance of intelligence. For example, how does one operationalize CSI for Mount Rushmore? But it is possible to get an estimate for this paragraph or for a DNA sequence. Both are FCSI. Some DNA sequences would have very low complexity since they are just repeating elements, while other sequences would have extremely high complexity since each nucleotide seems to be independent of the previous and next one in the sequence. Now some of these highly complex sequences specify a function; namely, they specify an amino acid sequence that has function, using an intermediary system of about a thousand parts to do so. So the sequence is complex, is information, and specifies a completely independent entity that has function. Because FCSI or FSCI is so simple and clear and easy to calculate, many of us recommend it as a substitute for the much more general concept of CSI when discussing evolution or the origin of life. The designation of FCSI is nothing more than pointing to the transcription/translation system in biology, so if you do not like our name for it, then just use that concept. And as I said, Hazen, Abel and others are working in this area and giving probabilities to various sequences. If you want to do a whole genome, then have at it. A bacterial genome would probably take a couple months to set up, but once set up a computer could make an estimate in a short time. But just one or two elements of the genome would exhaust the resources of the universe since the beginning of time, so estimating the entire genome would be sort of pointless. Just try ATP synthase for a start. And as far as "Once again, no biologist suggests that these genomes came together de novo or randomly."
Well then just how did these systems come together? As I said, just one coding region is so improbable that it exhausts the resources of the universe since the beginning of time. Saying it happened step wise does not solve the problem. The probabilities are essentially the same.jerry
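For what it's worth, the "simple and clear" calculation jerry describes can be written down in a few lines. This is only the naive version (uniform, independent residues; no codon redundancy, no fungible amino acids, no stepwise mechanisms), which is exactly the version the other commenters object to; the 300-residue protein length is an illustrative assumption.

```python
import math

def naive_fcsi_bits(length: int, alphabet_size: int = 20) -> float:
    """Bits of 'functional information' under the naive assumption that
    every sequence of this length is equally probable and only one works."""
    return length * math.log2(alphabet_size)

# A 300-residue protein, roughly an average-sized one:
bits = naive_fcsi_bits(300)
print(round(bits))        # 1297 bits, i.e. odds of 1 in 20**300
print(bits > 500)         # exceeds Dembski's 500-bit universal bound
```

Whether this number means anything is, of course, the dispute running through this thread: the naive model ignores the incremental processes the other commenters keep pointing to.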
January 18, 2010 at 08:57 AM PDT
"Well, I guess that would explain why everyone talks about CSI and FCSI, but I can't find any examples of anyone actually calculating it." Well, I did tell you how to calculate it, and all that is needed is one good-sized gene/protein coding region to get a number that is so improbable that naturalistic evolution becomes unreasonable. Didn't you see that? Calculating it for a whole genome is what would be absurd, and not necessary when one gene would be enough. All would be estimates, but it is possible to put lower bounds on it. I also gave you some references to scientists who are using essentially the same idea to calculate probabilities for functional sequences. "Having lurked for a while before registering here, I have seen a theme of anti-ID people saying ID is only a negative argument against evolution and the ID people saying it is a positive argument. But I can only interpret the rest of your comment as 'evolution can't do this.'" As to what ID is about, I have written four long comments about that and offered it up again on this thread. Go to comment # 110. After reading this, ask any questions you want. ID is both a positive and a negative approach, but because of the nature of intelligent intervention, it is mostly an examination of the laws of nature and where they did not or could not have produced certain effects. But that is not all it is. If you want the time and place and method of the designer in action then you will have to go here. https://uncommondescent.com/darwinism/a-frightening-admission/#comment-342686jerry
January 18, 2010 at 08:33 AM PDT
Mustela:
Could you provide a CSI calculation for just two of the smaller organisms in the list, namely Lenski’s e. coli? I believe the genome changes are well-documented.
This is an intriguing idea, but I still would like to understand how best to rank order the list according to complexity.efren ts
January 18, 2010 at 06:25 AM PDT
Jerry at 155, "I guess then there is no point in me trying to cajole you into calculating FCSI for any of the items on the list in comment 121 above, huh?" "That would be absurd. There are in eukaryote multi-celled organisms several thousand genes and in humans about 20,000-25,000. The improbability of the average one exhausts the physical resources of the universe through random self assembly." Once again, no biologist suggests that these genomes came together de novo or randomly. More importantly, CSI is supposed to be "a reliable, empirically observable sign of intelligence" and there supposedly exists "a probability and information theory based explicitly formal model for quantifying CSI" (both quotes from the glossary). CSI is supposed to be the positive evidence supporting ID, but despite repeated requests and searching through the available literature, I have not yet found an example of how to calculate CSI, as described in No Free Lunch, for a real biological system, taking into account known physics, chemistry, and evolutionary mechanisms. efren ts is correct in 156 that your argument in 155 is primarily an attempt to refute modern evolutionary theory rather than to positively support ID. Modern evolutionary theory is certainly an interesting topic, but this site is supposed to be where people can learn about the positive evidence for design. Could you provide a CSI calculation for just one of the smaller organisms in the list, namely Lenski's E. coli? I believe the genome changes are well-documented.Mustela Nivalis
January 18, 2010 at 04:45 AM PDT
Jerry:
That would be absurd.
Well, I guess that would explain why everyone talks about CSI and FCSI, but I can't find any examples of anyone actually calculating it. Unfortunate, but maybe I don't need anyone to actually calculate anything to dig into the details here.
So the anti ID people have to either show that the particular gene, while improbable, is just the lucky winner of a lottery
Having lurked for a while before registering here, I have seen a theme of anti-ID people saying ID is only a negative argument against evolution and the ID people saying it is a positive argument. But I can only interpret the rest of your comment as "evolution can't do this." So, since you are unwilling to calculate CSI/FCSI (and assuming no one else is going to step up to the task), let me see if there is another way. How would an ID proponent determine the complexity of the things listed in comment 121 if not by CSI/FCSI?efren ts
January 18, 2010 at 03:10 AM PDT
"I guess then there is no point in me trying to cajole you into calculating FCSI for any of the items on the list in comment 121 above, huh?" That would be absurd. There are in eukaryote multi-celled organisms several thousand genes, and in humans about 20,000-25,000. The improbability of the average one exhausts the physical resources of the universe through random self assembly. So the anti-ID people have to either show that the particular gene, while improbable, is just the lucky winner of a lottery of assembling amino acids or nucleotides by accident, and for that to have meaning, there must be untold functional proteins. However, some have estimated that functional proteins are about 1 in 10^80, or in other words very rare. Then there is the attempt to say that there is a strong affinity between certain nucleotide sequences in an RNA polymer and a specific amino acid. That is, once tRNA assembles, it will have a natural affinity for a specific amino acid. This still does not explain what then assembles the appropriate amino acid, and it assumes the RNA world was somehow functionally operative and this chemical affinity then drove amino acids to have a similar effect. Unbelievably speculative, but that is what they are left with at the moment to claim ID is mumbo jumbo. Obviously, once a gene arises, it is not the same issue, but how did these 20,000+ genes arise in the first place when most of them defy the probability resources of the universe, let alone the planet? So to get an estimate of the complexity, take the average gene and calculate the population of genes from which it could have come, and then factor in such things as that there is often more than one codon for each amino acid, and also that some amino acids are fungible for one or two others in certain situations, so that the same function could arise via different genes. The number would still be incredibly improbable given all these considerations.
And then it has to be repeated again and again for the other genes. Now some of the other genes could have arisen from mutations of an already existing gene, and the subsequent protein may be functional in some sense. However, some believe that finding a unique gene this way would again be like starting all over again and be beyond the probabilistic resources of the universe. To see what is meant by estimating these probabilities, here is an article that just appeared in the last month. http://www.tbiomed.com/content/6/1/27 And here is someone else's set of links on calculating the improbability of a gene/functional protein relationship (V. J. Torley from this site): (1) "The Capabilities of Chaos and Complexity" by Dr. David Abel, in International Journal of Molecular Sciences, 2009, 10, pp. 247-291, at http://mdpi.com/1422-0067/10/1/247/pdf (2) "Measuring the functional sequence complexity of proteins" by Kirk Durston, David Chiu, David Abel and Jack Trevors, in Theoretical Biology and Medical Modelling, 2007, 4:47, at http://www.tbiomed.com/content/pdf/1742-4682-4-47.pdf and (3) "Intelligent Design: Required by Biological Life?" by Dr. K. D. Kalinsky at http://www.newscholars.com/papers/ID%20Web%20Article.pdf And don't forget Hazen, who is anti-ID and also working with these concepts. Sorry for the delay, but I had to go to a concert and then watch the Jets' game. How about those Jets.jerry
January 17, 2010
07:51 PM PDT
Timaeus @137,
Whether or not nature is engineered is not a question that engineers need to address in order to do their work.
This is true, and it's refreshing to hear an IDist say such a thing. I'm used to hearing specious claims that engineers do what IDists do or that IDists do what engineers do. I think that's because there's a tactical problem for IDists if they acknowledge that engineering disregards ID, and that engineering does so for practical reasons (not as a matter of atheism). The problem is that it raises questions as to why scientists doing science should be any different. If methodological materialism is good enough for engineering, then why isn't it good enough for science? (Scientists don't expect to discover Ultimate Truth any more than engineers expect to build the Ultimate Airplane or the Ultimate Computer.) This would undermine IDists' claim that their work has been rejected due to ideology rather than due to a lack of useful results.

Engineers build and maintain systems without investing any time investigating whether or not their system has been, or will be, affected by non-material intelligences. But then IDist engineers get on their blogs at night and berate scientists for behaving the same way when doing science.

There are indeed engineers working with biologists today, in the field of Systems Biology. They describe their work as model-building. This is what engineers and scientists do -- build models -- and ID offers no help. Are these engineers personally inspired to believe that some unspecified non-material intelligence must have done some unknown thing at some unknown time for some unknown reason to bring about biological systems? I would guess that some are and some aren't. But I'm very confident that you won't see an intelligent agent in their models of nature, regardless of their religious beliefs.
Thus, the premise that complex integrated systems don't arise by chance is a premise in line with an engineer's instinct, ...
You don't appear to understand what it means when some aspect of nature is modeled as a random variable. It doesn't mean that the modelers claim that that aspect happens accidentally, or that there is some Cosmic Random Event Generator (i.e., that it "arises by chance"). It isn't even a denial that there could possibly be some Cosmic Event Chooser operating beneath the model's level of detail. It means that the modeler can do no better than to model that aspect as a random variable, either due to a lack of information or as a simplifying assumption. I think that the general population of engineers understands this. [But I don't claim, as you do, to be able to see what's in their bones or to hear the whispers of their instincts.]

Furthermore, I expect that engineers, like scientists, would seek, as much as possible, to model nature in terms of regularities (which you don't mention). Leave it to an IDist to describe the world in terms of chance versus design. Comparing ID with an obviously practical field like engineering illuminates ID as an exercise in apologetics. But perhaps that's not an issue for you. Your primary hope for the future is not that we will have some better model of the history of life, but that there will be more widespread belief in design.

Freelurker
January 17, 2010
07:50 PM PDT
As for worked examples of CSI -- may I inquire as to the worked examples of blind, undirected processes? I mean, it's only been 150 years since Darwin's publication, and all you can show us is disease and oddities.

Joseph
January 17, 2010
06:56 PM PDT
Can someone, anyone, tell me the relevance of "calculating the CSI of the listed organisms"? The reason I ask is that that isn't even correct terminology. The calculation was to see how many bits of specified information met the UPB. Now all you have to do is count the bits of specified information in the object under investigation -- if there is any.

Joseph
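A minimal sketch of the "count the bits and compare with the UPB" step described here, under stated assumptions: the 150-residue length is invented for the example, and the roughly 500-bit threshold follows from Dembski's 1-in-10^150 universal probability bound.

```python
import math

# Dembski's universal probability bound of 1 in 10^150 corresponds
# to roughly 500 bits.
UPB_BITS = math.log2(10 ** 150)  # about 498.3 bits

# Bits needed to specify one particular sequence drawn from a given
# alphabet -- a naive count that ignores any compressibility.
def specified_bits(length, alphabet_size):
    return length * math.log2(alphabet_size)

protein_bits = specified_bits(150, 20)  # a 150-residue protein, assumed
print(f"{protein_bits:.1f} bits vs UPB of {UPB_BITS:.1f} bits")
print("exceeds UPB" if protein_bits > UPB_BITS else "within UPB")
```

On this naive count a 150-residue protein comes to about 648 bits, which is why sequences of even modest length are claimed to clear the roughly 500-bit bound.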
January 17, 2010
06:54 PM PDT
Jerry:
Oh, I can make a calculation but I am not an expert.
So, you won't?
I suggest you google, Hazen, Abel, Kalinsky.
Well, there were a total of 4 links in 10 pages of results. I found one link to an actual article, but it didn't have any calculation of CSI (or FCSI) for any real biological entity. It might be easier to point me to the calculation you are thinking of.
If you are trying to figure it out, go by this rule: those who are anti-ID have only one purpose, to try to catch a pro ID person in something they said that is wrong or that they cannot back up.
I guess then there is no point in me trying to cajole you into calculating FCSI for any of the items on the list in comment 121 above, huh?

efren ts
January 17, 2010
01:57 PM PDT
"If you aren't the person who is capable of calculating FCSI, then perhaps you could let us know who is and we can wait for them to start the discussion."

Oh, I can make a calculation, but I am not an expert. I suggest you google Hazen, Abel, Kalinsky.

"I am new to ID and here trying to figure it out."

If you are trying to figure it out, go by this rule: those who are anti-ID have only one purpose, to try to catch a pro ID person in something they said that is wrong or that they cannot back up. And never admit that someone from the other side has a point. The other side keeps answering their questions till they get fed up.

You say you are new but you already know about FCSI. That is a start. FCSI is nothing more than the transcription/translation process.

jerry
January 17, 2010
12:21 PM PDT
Jerry:
We are waiting on you.
Say what? I am new to ID and here trying to figure it out. Since you have stated elsewhere that you have been interested and involved in ID for over 10 years, I figured you were experienced in calculating FCSI for biological things. If you aren't the person who is capable of calculating FCSI, then perhaps you could let us know who is, and we can wait for them to start the discussion.

efren ts
January 17, 2010
11:18 AM PDT
"I've been checking back in on this thread periodically. So far, no one has gotten started on ordering this list by complexity/FCSI. Is there someone in particular we are waiting on?"

We are waiting on you. Why don't you start by saying what you think might be the dimensions of complexity and which would be most appropriate to analyze.

jerry
January 17, 2010
11:00 AM PDT
backwards me:
Can you tell me a single thing about how you think the mechanisms were “guided”
Dr. Spetner discussing transposons:
"The motion of these genetic elements to produce the above mutations has been found to be a complex process, and we probably haven't yet discovered all the complexity. But because no one knows why they occur, many geneticists have assumed they occur only by chance. I find it hard to believe that a process as precise and well controlled as the transposition of genetic elements happens only by chance. Some scientists tend to call a mechanism random before we learn what it really does. If the source of the variation for evolution were point mutations, we could say the variation is random. But if the source of the variation is the complex process of transposition, then there is no justification for saying that evolution is based on random events."

It appears transposons carry the sequence(s) for some of the enzymes required for them to jump around -- cutting and splicing. Think targeted search -- an excellent design mechanism demonstrated to get the results expected.

Joseph
January 17, 2010
08:29 AM PDT
efren ts at 155,

"I've been checking back in on this thread periodically. So far, no one has gotten started on ordering this list by complexity/FCSI. Is there someone in particular we are waiting on?"

I've also been monitoring this thread, since I'm very interested in learning how to calculate CSI for real biological systems. Thus far, I have been unable to find any worked examples. From the list, I think these two:

bacteria – Lenski latest
bacteria – Lenski start

would be particularly interesting to measure. I would also be interested in seeing the calculation for a small virus like AAV. The calculation for a snowflake or The Giant's Causeway would be good future additions to the list. Just a worked example of calculating CSI, as described in No Free Lunch, for a real-world biological system or component, taking into account known evolutionary mechanisms, would be fantastic, though.

Mustela Nivalis
January 17, 2010
08:19 AM PDT