The word “borked” entered our lexicon as a result of the treatment Judge Robert Bork received during his confirmation hearings for the U.S. Supreme Court. To put it mildly, he was not treated very well. When Stephen Jay Gould came out with his theory of punctuated equilibria, he, too, was not treated well by the Darwinian establishment, until such time as he made clear that his theory was firmly a part of Darwinian thought.
Now another geologist, Michael Rampino, has just set himself up for similar treatment. In a PhysOrg entry, Rampino points out what has been so obvious for so long: evolution is NOT gradual! It is episodic. He also reaches further back in time, to Patrick Matthew, who predates Darwin and his notion of natural selection (NS) by thirty years or so. I think Michael has been reading far too much here at UD for his own good health (academic, anyway). It’ll be interesting to see how quickly he is gobbled up by the Darwinian thought police.
Pav: Do you mean to say “To put it mildly, he was NOT treated very well.”?
Unless you are being ironic, in which case most non-Americans will not pick up your true intent.
He’ll get Goulded or Sgoulded.
SCheesman:
I did put it mildly purposely, knowing that a person’s political persuasion will color their view of the treatment Bork received. But, yes, it was atrocious behavior on the part of the Dems.
PaV, Cheesy means you left the word “not” out.
Jerry Coyne’s already calling for a magazine boycott, it seems.
One thing to recognize here is the compelling need for evolution to be slow and gradual for purely professional reasons. If it is slow and gradual, all kinds of averaging assumptions can be made. Then even not-so-smart scientists can make educated guesses about mutation rates and models, and about what species appeared when, such as calculating when two branches shared a common ancestor. This allows for the generation of more papers, because more conclusions can be squeezed out of the little data that is available.
If evolution is chaotic, there really is not much modeling that can be done. The theory would end up unable to make any quantifiable predictions, since the rates of change are unpredictable. Not only would this make it impossible to predict rates, it would make the whole problem far more intractable. The result is fewer scientific papers.
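The “averaging assumptions” described above can be made concrete with a back-of-the-envelope molecular-clock sketch. All the numbers here are hypothetical, chosen only to illustrate the arithmetic gradualism licenses:

```python
# Back-of-the-envelope molecular-clock estimate (illustrative numbers only).
# Under a *constant* substitution rate r (substitutions/site/year), two
# lineages that split t years ago accumulate d = 2*r*t differences per site,
# so the split time can be back-calculated as t = d / (2*r).

def divergence_time(d, rate):
    """Estimated split time in years, assuming a strictly constant rate."""
    return d / (2.0 * rate)

d = 0.02      # hypothetical: 2% observed sequence divergence
rate = 1e-9   # hypothetical: 1e-9 substitutions per site per year

t = divergence_time(d, rate)
print(t)      # 10 million years, under the gradualist assumption
```

The whole calculation collapses if the rate is not (at least approximately) constant, which is exactly the point being made.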
“the LITTLE data that is available.”
what might you be speaking of here? Obviously not the amount of fossils available?
“If evolution is chaotic”
hmmm: “episodic” – “chaotic” – practically the same thing…
Actually, I think that life itself, as to when it was introduced on earth, especially bacterial life, acting in conjunction with gradual and ‘precise’ geological processes, had more of a direct impact on transforming (terra-forming) the overall environment of the earth than any of the temporary catastrophes on earth did. Myself, I look at it as the ‘Designer’ wiping the blackboard clean to get ready for the next stage of introducing life upon the face of the earth:
notes:
Microbial life can easily live without us; we, however, cannot survive without the global catalysis and environmental transformations it provides. – Paul G. Falkowski – Professor Geological Sciences – Rutgers
Since oxygen readily reacts and bonds with many of the solid elements making up the earth itself, and since the slow process of tectonic activity controls the turnover of the earth’s crust, it took photosynthetic bacteria a few billion years before the earth’s crust was saturated with enough oxygen to allow a sufficient level of oxygen to be built up in the atmosphere as to allow higher life:
New Wrinkle In Ancient Ocean Chemistry – Oct. 2009
Excerpt: “Our data point to oxygen-producing photosynthesis long before concentrations of oxygen in the atmosphere were even a tiny fraction of what they are today, suggesting that oxygen-consuming chemical reactions were offsetting much of the production,”
http://www.sciencedaily.com/re.....141217.htm
Increases in Oxygen Prepare Earth for Complex Life
Excerpt: We at RTB argue that any mechanism exhibiting complex, integrated actions that bring about a specified outcome is designed. Studies of Earth’s history reveal highly orchestrated interplay between astronomical, geological, biological, atmospheric, and chemical processes that transform the planet from an uninhabitable wasteland to a place teeming with advanced life. The implications of design are overwhelming.
http://www.reasons.org/increas.....mplex-life
Evidence of Early Plate Tectonics
Excerpt: Plate tectonics plays a critical role in keeping the Earth’s temperature constant during the Sun’s significant brightness changes. Almost four billion years ago, the Sun was 30 percent dimmer than it is today, and it has steadily increased its light output over the intervening period. This steady increase would have boiled Earth’s oceans away without plate tectonics moderating the greenhouse gas content of the atmosphere.
http://www.reasons.org/evidenc.....-tectonics
Rich Ore Deposits Linked to Ancient Atmosphere – Nov. 2009
Excerpt: Much of our planet’s mineral wealth was deposited billions of years ago when Earth’s chemical cycles were different from today’s.
http://www.sciencedaily.com/re.....193640.htm
Interestingly, while the photosynthetic bacteria were reducing greenhouse gases and producing oxygen, metals, and minerals, which would all be of benefit to modern man, ‘sulfate-reducing’ bacteria were also producing their own natural resources which would be very useful to modern man. Sulfate-reducing bacteria helped prepare the earth for advanced life by detoxifying the primeval earth and oceans of poisonous levels of heavy metals, depositing them as relatively inert metal ores; ores which are very useful to modern man, as well as fairly easy to extract today (mercury, cadmium, zinc, cobalt, arsenic, chromate, tellurium and copper, to name a few). To this day, sulfate-reducing bacteria maintain an essential minimal level of these heavy metals in the ecosystem: high enough to be available to the biological systems of the higher life forms that need them, yet low enough not to be poisonous to those very same higher life forms.
Bacterial Heavy Metal Detoxification and Resistance Systems:
Excerpt: Bacterial plasmids contain genetic determinants for resistance systems for Hg2+ (and organomercurials), Cd2+, AsO2, AsO43-, CrO4 2-, TeO3 2-, Cu2+, Ag+, Co2+, Pb2+, and other metals of environmental concern.
The role of bacteria in hydrogeochemistry, metal cycling and ore deposit formation:
Textures of sulfide minerals formed by SRB (sulfate-reducing bacteria) during bioremediation (most notably pyrite and sphalerite) have textures reminiscent of those in certain sediment-hosted ores, supporting the concept that SRB may have been directly involved in forming ore minerals.
http://www.goldschmidt2009.org...../A1161.pdf
Man has only recently caught on to harnessing the ancient detoxification ability of bacteria to cleanup his accidental toxic spills, as well as his toxic waste, from industry:
What is Bioremediation? – video
http://www.youtube.com/watch?v=pSpjRPWYJPg
Metal-mining bacteria are green chemists – Sept. 2010
Excerpt: Microbes could soon be used to convert metallic wastes into high-value catalysts for generating clean energy, say scientists writing in the September issue of Microbiology.
http://www.physorg.com/news202618665.html
The Creation of Minerals:
Excerpt: Thanks to the way life was introduced on Earth, the early 250 mineral species have exploded to the present 4,300 known mineral species. And because of this abundance, humans possessed all the necessary mineral resources to easily launch and sustain global, high-technology civilization.
To put it mildly, this minimization of poisonous elements, and ‘explosion’ of useful minerals, is strong evidence for Intelligently Designed terra-forming of the earth that ‘just so happens’ to be of great benefit to modern man.
Clearly many, if not all, of these metal ores and minerals (laid down by sulfate-reducing bacteria, by the biogeochemistry of more complex life, and by finely-tuned geological conditions throughout the early history of the earth) have many unique properties which are crucial for technologically advanced life, and are thus indispensable to man’s rise from the stone age to the advanced ‘space-age’ technology of modern civilization.
Inventions: Elements and Compounds – video
http://videos.howstuffworks.co.....-video.htm
Bombardment Makes Civilization Possible
What is the common thread among the following items: pacemakers, spark plugs, fountain pens and compass bearings? Give up? All of them currently use (or used in early versions) the two densest elements, osmium and iridium. These two elements play important roles in technological advancements. However, if certain special events hadn’t occurred early in Earth’s history, no osmium or iridium would exist near the planet’s surface.
http://www.reasons.org/Bombard.....onPossible
Engineering and Science Magazine – Caltech – March 2010
Excerpt: “Without these microbes, the planet would run out of biologically available nitrogen in less than a month,” Realizations like this are stimulating a flourishing field of “geobiology” – the study of relationships between life and the earth. One member of the Caltech team commented, “If all bacteria and archaea just stopped functioning, life on Earth would come to an abrupt halt.” Microbes are key players in earth’s nutrient cycles. Dr. Orphan added, “…every fifth breath you take, thank a microbe.”
Planet’s Nitrogen Cycle Overturned – Oct. 2009
Excerpt: “Ammonia is a waste product that can be toxic to animals.,,, archaea can scavenge nitrogen-containing ammonia in the most barren environments of the deep sea, solving a long-running mystery of how the microorganisms can survive in that environment. Archaea therefore not only play a role, but are central to the planetary nitrogen cycles on which all life depends.,,,the organism can survive on a mere whiff of ammonia – 10 nanomolar concentration, equivalent to a teaspoon of ammonia salt in 10 million gallons of water.”
As well, many types of bacteria in earth’s early history lived in what are called cryptogamic colonies on the earth’s primeval continents. These colonies dramatically transformed the primeval land into stable, nutrient-filled soils which were receptive to the advanced vegetation to come.
CRYPTOBIOTIC SOIL –
Excerpt: When moistened, cyanobacteria become active, moving through the soil and leaving a trail of sticky material behind. The sheath material sticks to surfaces such as rock or soil particles, forming an intricate web of fibers throughout the soil. In this way, loose soil particles are joined together, and an otherwise unstable surface becomes very resistant to both wind and water erosion.
Bacterial ‘Ropes’ Tie Down Shifting Southwest
Excerpt: In the desert, the initial stabilization of topsoil by rope-builders promotes colonization by a multitude of other microbes. From their interwoven relationships arise complex communities known as “biological soil crusts,” important ecological components in the fertility and sustainability of arid ecosystems.( Of note: Phylogenetic analyses performed by the researchers have further shown that the evolution of the trait occurred separately in three different genera; an example of “convergent evolution” (read evolutionary miracle story), rather than a tie to a single common rope-building ancestor.)
Not only would this make it impossible for people to predict rates, but it would make the problem much more intractable. The result is fewer scientific papers.
I’d accept this, except it seems to assume that scientists aren’t willing to utterly make blind guesses whenever they need or want to.
Barry:
Thanks. I made the correction. When I posted, I had literally five minutes to write it up and post.
molch:
The only thing I can think when reading your reply is that you meant it with snark. So I will make some assumptions about you when I reply.
1. You are an ardent evolutionist.
2. You think you are smarter than ID supporters.
3. You think ID arguments are at best illogical.
If these assumptions are wrong, I apologize in advance.
Maybe you don’t think these things, but the sarcastic nature of your reply made me think that these were what you meant.
I think you did not understand my reply.
Consider fossils. Yes, there are a ton of fossils. But most fossils (as they obviously should be, whether evolution is true or not) are of animals belonging to what is assumed to be a long-lived, stable species. There is VERY LITTLE fossil evidence that supports transitions between species without inventing a whole list of unseen transitional forms. The story must be pieced together from very different forms. So yes, there is VERY LITTLE evidence for gradual transitions between forms.
Second, if “evolution is episodic” it consists of bursts of evolution, not necessarily periodic in nature. Thus any attempt to define an “average” rate of change is highly dependent upon the time period selected. Now I will admit that the episodic nature of evolution does not by itself show that the mechanism of evolution must be a chaotic process. It may be that evolutionary rates show some dependence upon another variable that has a chaotic behavior. Either way, the episodic nature of evolution points to an underlying dependence on some chaotic process. It is much, much harder to make sense of chaotic processes.
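The window-dependence of an “average rate” under episodic change is easy to demonstrate with a toy series (the numbers are invented purely for illustration): long stasis, one burst, long stasis.

```python
# Illustrative only: a "bursty" (episodic) change series. The measured
# average rate depends entirely on which time window you pick.

# change per time step: 90 steps of stasis, a 10-step burst, 100 more of stasis
changes = [0] * 90 + [10] * 10 + [0] * 100

def avg_rate(series, start, end):
    """Mean change per step over the window [start, end)."""
    window = series[start:end]
    return sum(window) / len(window)

print(avg_rate(changes, 0, 200))    # whole record: 0.5 per step
print(avg_rate(changes, 90, 100))   # inside the burst: 10.0 per step
print(avg_rate(changes, 100, 200))  # after the burst: 0.0 per step
```

Three windows, three “rates” differing by orders of magnitude, from one and the same record.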
I would appreciate some of your thoughts in reply with a little less snark.
molch,
Are you going to answer my question about what you object to in the historical record of Jesus?
@PaV
so, you have lost your interest in bees?
What looks like a sudden episodic event in geological time, can actually be rather gradual in terms of our ordinary conception of time. I would guess that this will be pointed out to Rampino, but I doubt that he will otherwise get a lot of heat. Of course, I could be mistaken.
I’ve mentioned in another thread, that my own view of evolution is based in niche change, and what Rampino is reporting fits right in with that.
I love Dawkins on “gradualism.”
“A key feature of evolution is its gradualness. (This is a matter of principle rather than fact) It may or may not be the case that some episodes of evolution take a sudden turn.” River Out of Eden, page 83.
“Evolution is very possibly not, in actual fact, always gradual. But it must be gradual when it is being used to explain the coming into existence of complicated, apparently designed objects, like eyes. For if it is not gradual in these cases, it ceases to have any explanatory power at all. Without gradualness in these cases, we are back to miracle, which is simply a synonym for the total absence of explanation.” Also River Out of Eden. Sorry, can’t find page # right now.
Translation: gradualism is a matter of dogma, not reason or data. It is what it needs to be, regardless of the facts. Wow. Now there’s some intellectual integrity for you… What a joke these people are. No, really.
tgpeeler the page number is 83
http://www.trueorigin.org/edentj2.asp
Thanks. Will update notes. 🙂
BA77 – thanks. You are correct. Found this howler on p. 84.
“Gradual evolution by small steps, each step being lucky but not too (italics in original) lucky, is the solution to the riddle. But if it is not gradual, it is no solution to the riddle: it is just a restatement of the riddle.
There will be times when it is hard to think of what the gradual intermediates may have been. These will be challenges to our ingenuity, but if our ingenuity fails, so much the worse for our ingenuity. It does not constitute evidence that there were no gradual intermediates.”
How hilarious is this??? THIS is what we’re up against, but we’re the irrational true believers? Right.
tgpeeler,
I like this statement of Dawkins:
‘There will be times when it is hard to think of what the gradual intermediates may have been.”
Actually, I made a very ‘conservative’ video that shows a very small taste of the sort of craziness we should expect to see in the fossil record, and in existing life, if the small-step ‘experimental’ Darwinian method were actually true:
What Would The World Look Like If Darwinism Were True – video
http://www.tangle.com/view_vid.....e70c6fe1ee
Actually, the failed experiments would be far worse than what I illustrated in the video. I feel the point of the video is very important to realize, and it is even one that Darwin himself recognized as a serious argument against his theory:
“Why, if species have descended from other species by fine gradations, do we not everywhere see innumerable transitional forms? Why is not all nature in confusion, instead of the species being, as we see them, well defined? But, as by this theory innumerable transitional forms must have existed, why do we not find them embedded in countless numbers in the crust of the earth? But in the intermediate region, having intermediate conditions of life, why do we not now find closely-linking intermediate varieties?”
Charles Darwin – Origin Of Species
I would really like to see someone with some real talent for making videos make a decent video of the confusion that we should naturally expect to see, in the fossil record and in life, if Darwinism were actually true, so as to clearly drive home just how far off the mark Darwinism is as to what we should expect to see:
I love Dawkins because he says things as they are.
Take this, for instance:
“Gradual evolution by small steps, each step being lucky but not too (italics in original) lucky, is the solution to the riddle. But if it is not gradual, it is no solution to the riddle: it is just a restatement of the riddle.”
That’s absolutely true! It’s exactly the same thing I have often stated: unless darwinists are able to deconstruct complex functions into simple, functional, selectable steps, their theory is no theory at all. So, Dawkins would make a very good IDist.
And this:
“There will be times when it is hard to think of what the gradual intermediates may have been. These will be challenges to our ingenuity, but if our ingenuity fails, so much the worse for our ingenuity. It does not constitute evidence that there were no gradual intermediates”
This is a wonderful act of pure faith. It is beautifully unfalsifiable, so I suppose that for the strict Popperians out there it should obviously not count as science. What better evidence of the strictly religious nature of darwinism?
By the way, there is a problem I have always wondered about, and which is rarely discussed, even here.
It could be defined: “the problem of the billions of molecular missing links”.
Shortly, it goes this way:
Let’s suppose that species “b” derives from species “a” through darwinian mechanisms.
Let’s suppose, for simplicity, that the main difference between the two is the emergence of a new protein, “B”, from a pre-existing protein, “A”.
Let’s say that “B” is very different from “A” (maybe a new protein domain), so that the transition from one to the other qualifies as a dFSCI transition. For instance, the transition could require 200 bits of functional information.
Such a transition cannot happen through mere RV.
Therefore, our darwinian friends have to deconstruct the transition into intermediate functional steps, just to be credible, so that NS can enter the scenario.
Let’s pretend they succeed (they never have, but just for discussion…).
So, the transition from “A” to “B” has been deconstructed into, say, 20 intermediates, each of them functional and selectable. No single intermediate transition is so complex as to configure dFSCI at a 150 bit threshold, so in theory each transition could have happened by chance, and then been selected.
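The bit arithmetic in this deconstruction can be made explicit with a short sketch. The 200-bit transition, the 20 intermediates, and the 150-bit threshold are the figures assumed in the scenario above; the log2(20)-bits-per-residue convention is the usual one for an exactly-specified amino acid:

```python
import math

BITS_PER_AA = math.log2(20)  # ~4.32 bits per exactly-specified residue

# A single 200-bit transition corresponds to roughly this many exact residues:
n_res = 200 / BITS_PER_AA
print(round(n_res, 1))       # ~46.3 residues

# Deconstructed into 20 selectable intermediates, each step carries only:
step_bits = 200 / 20
print(step_bits)             # 10.0 bits, far below the 150-bit threshold
```

So each hypothetical intermediate step is, by construction, small enough to be reachable by chance; the whole argument then turns on whether such functional intermediates ever actually existed.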
Well, my simple question is:
Why, today, in the existing proteome, do we find protein “A” (in the progenitor species) and protein “B” (in the derived species), but no instance of any of the 20 intermediates?
Let’s remember that each of those intermediates is supposed to be “more functional” than its ancestor, and to have been selected and expanded in the process. So why have all of them disappeared?
A simple question. No simple answer, I believe. Maybe no answer at all.
I know, someone will come and say that obviously those intermediates expanded in specific “niches”, and have been completely substituted by their derivatives, while the original protein “A” survived in another “niche”, and that is why we find it today.
Well, I will not comment on this “Russian nesting niches” model; I prefer proper fairy tales.
So, multiply the question for all existing functional proteins, and you will have a good number of “molecular missing links”.
OK, maybe not billions, but anyway…
#20 gpuccio
Let’s think about this. You believe in common descent right? But do you believe
1) The offspring is always very similar to the parent i.e. within the scope of DNA recombination and mutation.
or
2) Just occasionally (but often enough to account for the millions of species that exist and have existed) the offspring is dramatically different from the parent.
If (1) then the issue you raise is a problem whether you believe the variation is guided or unguided. The intermediate stages from A to B must have existed and must have been viable. It leads to interesting questions about what happened to the intermediate forms, but it applies to anyone who holds (1), who might well be teleological.
If (2) then there are all sorts of issues e.g:
(a) In all the millions of reproductions that humanity has witnessed we have no reliable reports of anything other than gradual change.
(b) If an organism is significantly different from its parents, that in itself makes it unviable. Typically organisms thrive in an environment which includes the other members of their species, and complex organisms depend on complex relationships with their close relatives. If a dog gives birth to some new species which is dramatically different, processes such as the placenta, birth, suckling, development, learning of effective behaviour, and relationships with siblings are all going to make its prospects very poor.
I guess in the end it is a question of which you find more plausible: that intermediate organisms died out, or that just occasionally stunningly different organisms are born and prove to be viable.
GP, your Dawkins quotes….are they from River Out of Eden?
Upright BiPed:
I suppose they are, but indeed I took them from tgpeeler’s post #18 here.
Ah….got it.
It is a beautiful statement of non-falsifiability.
mark:
a very interesting post from you.
Let’s see. While I suppose that both 1) and 2) are possible scenarios, and that only facts will help us choose between them, I must confess that at present I would favour something more similar to 2).
“1)” is made difficult by both the complete absence of molecular intermediates and the “punctuated” nature of the fossil record.
“2)” could come in different variants. You must remember that, in my scenario, variations are designed, and they could really be implemented in a “niche” (a natural “lab”?), possibly building up a new population in a “short” time (which, as far as we know, could well be a few million years, or even a few days), and in a protected condition. That’s what design allows you to realize.
It’s true that, as you say:
“In all the millions of reproductions that humanity has witnessed we have no reliable reports of anything other than gradual change.”
But it is equally true that we have no reliable report of any macroevolutionary event, either.
Finally, your considerations, although interesting, are maybe more appropriate to an argument based on fossils. But my argument was molecular. That makes some difference.
I will be more clear. In my (very theoretical) example, only when you reach “B” do you have a new species. The idea is that the 20 intermediates from “A” (A1, A2, etc.) are only different forms of protein “A” in species “a”; they merely give a selectable advantage to the species. So species “a1” will be the same species as “a”, but the descendants with the molecular form “A1” will have expanded to a very large part of the population, so that they can beget the new variation, protein “A2”, and so on. Finally protein “B”, when reached, changes the species (dramatically or not) in a way that is not only molecular, but morphological too.
So, I repeat my question: if in the present world we still have species “a” and species “b”, and if in species “a” we can find only the starting protein “A”, where did all the 20 intermediates which led molecularly from “A” to “B” go? Why can we find no trace of them in the present proteome?
As you can see, reasoning in terms of molecular variation (which is obviously the real stuff) changes things a lot.
We observe completely isolated protein superfamilies in the proteome. Darwinists state that the newer ones came from the older ones through more functional molecular intermediates which were selected and expanded.
So, why cannot we find any trace of those molecular intermediates in the proteome?
I believe that this is a very simple and good question, and I would appreciate if you tried to answer it.
mark:
An important point I forgot to mention.
In a designed transition, the intermediates need not be selectable. They don’t even need to be functional.
For instance, the intermediates could be gradually built on a duplicated gene which is not even translated (a non-functional duplicate), until in time the new functional protein is achieved, and can be integrated in a new context.
If the variations are selected artificially, there is no special need for expansion. That can explain why any intermediates are not preserved.
But with the darwinist scenario, all that is not possible.
Gpuccio #25 and #26
I am not the best person to answer this as I don’t know enough about the underlying biology. But here a couple of thoughts – both should be preceded by “As I understand it” to reflect my limited knowledge.
(1) At the molecular level a very large change may be the result of a single mutation or recombination, e.g. an inversion or the insertion of a piece of viral DNA. If this has limited effect on the phenotype then there will be no intermediate, but two viable variations with very different protein structures.
(2) Suppose mutation takes place in an isolated population, culminating in something that gives a significant advantage in that environment. This will eliminate the intermediate forms through competition in the isolated population. Meanwhile the original population will probably have mutated in a different way, probably improving its fitness for a different environment and eliminating intermediate forms through competition. If the barrier is later removed, so that the original population overlaps the isolated population, and there are niches for both, then there will be two very different molecular forms and no intermediates. I believe this is how speciation is reckoned to happen.
gpuccio, this might interest you. A student is critiquing Dr. Behe’s ‘edge of evolution’ at BioLogos:
A Student’s Review of Behe’s “Two Binding Site Rule”
http://biologos.org/blog/a-stu.....site-rule/
I would be interested in your thoughts on her paper.
Mark:
Indeed, there is no need to know a lot of biology to discuss this subject.
Regarding your points:
(1) A single mutation will never lead from one protein superfamily to another. Protein superfamilies as defined in SCOP are clearly isolated at the primary sequence level (they have completely different primary sequences, with no detectable significant homology). Therefore, a big change is necessary to go from one to another one. A transition which, according to Durston’s data, is in most cases beyond my personal threshold for dFSCI (150 bits). IOWs, a transition which cannot occur randomly.
Let’s turn to the possibility that one single variation event (an inversion, a frameshift mutation, or similar) may change one sequence completely into a different one. That is certainly possible. Frameshift mutations are perhaps the best example.
In a frameshift mutation, a single nucleotide insertion or deletion causes a completely different reading of the existing information. The whole sequence changes.
And so? From a probabilistic point of view, nothing changes. This is a simple point that darwinists have difficulty understanding.
Let’s be more clear. You start from protein A and the final result is protein B. Both are functional, but they belong to different superfamilies, and are therefore unrelated at primary sequence level.
That means that, to get B, most of the aminoacids in A must change. Let’s say that at least 50 aminoacids have to take exact new values, which amounts to a transition of about 216 bits.
As A is unrelated to B, any change can be considered as a random attempt at finding B. IOWs, the transition from A to B is a random walk.
There is no significant difference if you change one AA at a time, or if you just “try” a completely different sequence. The probabilities of success remain practically non existent.
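As a numerical sketch of that claim, using the 50-exact-residue figure from above and the usual log2(20) bits per exactly-specified amino acid:

```python
import math

n = 50                      # residues that must take specific values in "B"
p_per_attempt = 20.0 ** -n  # chance one random sequence-attempt hits all 50
bits = n * math.log2(20)    # the same event, expressed as information

print(round(bits, 1))       # 216.1 bits
print(p_per_attempt)        # ~8.9e-66 per attempt

# Whether the 50 changes accumulate one residue at a time (with no functional
# intermediates to select along the way) or arrive all at once via a
# frameshift, the per-attempt probability of landing on "B" is the same.
```

The single-step versus many-step distinction only matters probabilistically if the intermediate steps are themselves selectable, which is exactly what is being denied here.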
The proof of that is that no frameshift mutation is known to have generated a new functional protein.
As you may know, darwinists have blindly believed for years that such an example existed: Ohno’s theory about the emergence of nylonase. But obviously, serious research has clearly shown that such a theory was wrong.
So, a sudden transition has no possibility to find a new isolated island of function, because it does not allow even the theoretical possibility that functional intermediates may make the transition possible.
Next point in next post.
Mark:
(2): Your scenario would be more or less as follows:
In the beginning, we have species “a” with protein “A”. A first mutation occurs and transforms “A” into “A1”, the first functional intermediate. However, the species is still the same (“a” will survive to our times), but for some reason the clone with the selective advantage (“a1”) becomes isolated, and expands to a fairly big population without mixing with “a”. In this new “semi-big” population (smaller than “a”, I suppose, as it should exist in some special “niche”), a second useful mutation can occur, leading to “a2”. This time, not only does “a2” not become isolated from “a1”, but indeed in time it completely expands, eliminating “a1” (while “a” continues to thrive beyond the providential barriers).
The next 18 transitions behave all like the last, with complete erasing of the previous population and no mixing with the original population.
Finally, when “b” is achieved, the barriers go down, and “a” and “b” can mix: but “b” is now another species, so they can go their separate ways up to the present time.
Is that correct?
Well, have you ever heard of “ad hoc” interpretations?
However, let’s say that such a scenario happened. The little problem is that it did not happen just once. It happened in every single case where a new protein superfamily, IOWs a new basic functional island, appeared in natural history.
That is no longer an “ad hoc interpretation”. That is pure folly!
BA:
I have read the linked “paper”, and I believe it is a sequence of serious errors and misunderstandings.
Unfortunately, I have not at hand my copy of TEOE, so I cannot comment with precision about Behe’s words.
I will anyway point shortly to some obvious and serious errors in the “arguments”.
1) Behe’s point in TEOE is mainly to establish, by empirical arguments, an edge to what random variation can accomplish. His argument about the rarity of two-aminoacid coordinated mutations is derived from observations of data about the malaria parasite, and in no way from “statistical calculation”. Behe just states that the observed data are in good accord with what statistical calculations would predict.
The empirical result is that, in the malaria antibiotic resistance model, single mutations are frequent, double mutations are exceedingly rare.
This is an empirical jugdement. I don’t understand why many darwinists, including the student who wrote that paper, can’t apparently understand this simple fact.
2) The other important error is again a veru common mistake darwinists make. I quote:
Another of Behe’s central claims is that mutations cannot occur one by one over a long period of time, but rather must all take place at once. This idea is founded on an incomplete understanding of how random mutation and natural selection actually function. As long as each genetic mutation confers some type of survival or reproductive advantage, or at least causes no harm to the cell, changes can occur one at a time and gradually produce a significant alteration. The mutations do not have to occur simultaneously. In fact, isolated mutations occur on a relatively regular basis.
Here the author is really confused. Moreover, he is conflating two very different “arguments” (both of them wrong).
The first is the usual confusion about coordinated mutations. Darwinists seem completely unable to get this simple point.
To be more clear, I will give here an explicit definition of what I call “coordinated mutation”:
“A mutation of two or more AAs is coordinated if all of the amino acids have to change to a new specific value for a new function to emerge”.
IOWs, if I say that a new function requires a coordinated mutation of two amino acids, that means that both specific mutations must be present for the new function to emerge.
That in no way means that the two mutations must happen “at the same time”. But they must be present “at the same time” for the function to be there.
So, it makes no difference whether the first mutation occurred one million years ago and the second happens now, or whether the two mutations happen on the same day.
If the mutations are not individually functional (IOWs, if they are at best “neutral”), the probabilities are the same. We have to multiply the individual probabilities of each mutation.
Obviously, if any of the individual mutations is “negative”, the situation is even worse.
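The multiplication rule can be made concrete with a toy calculation (the rates below are purely illustrative assumptions, not Behe’s empirical figures):

```python
# Toy illustration of the multiplication rule for coordinated mutations.
# If neither mutation is individually selectable, the joint probability is
# simply the product of the individual probabilities, whether the two
# events are separated by a day or by a million years.
# NOTE: these rates are invented for illustration only.

p_first = 1e-10   # assumed probability of the first specific mutation
p_second = 1e-10  # assumed probability of the second specific mutation

p_coordinated = p_first * p_second
print(p_coordinated)  # ~1e-20: vastly rarer than either single mutation
```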
And here we come to the second aspect of this point. The author first states that Behe would say that mutations “must all take place at once” (which is wrong). Then makes things worse by adding that the reason for that is that Behe does not understand how natural selection works (which is even more wrong).
But the problem is not that “Behe does not understand”. The problem is that Behe is searching for the edge of what RV and NS can really do. So, the conclusion that functions requiring “two coordinated mutations” emerge exceedingly rarely, as proven by the malaria model, is obviously true for functions where the “intermediates” are at best neutral and cannot be selected (which, by the way, is absolutely the general case).
Let’s take for instance Behe’s example of drug resistance in the malaria parasite. In most cases, a single mutation confers resistance, and is selected. That is well understood by Behe, who indeed discusses those cases in detail.
But for chloroquine resistance, the situation is different: at least two mutations are required, and none of them is individually selectable. That’s why the emergence of that resistance is exceedingly rare, compared with resistance to the other drugs.
The author seems to completely ignore or misunderstand this argument, which is the most important in Behe’s book.
3) Finally, another important and common error. It is certainly true that, if mutations happen in a duplicated gene, which is not functional, we can bypass the effect of negative selection on negative mutations. That is certainly a point.
But, as you cannot have it both ways, not even if you are a darwinist, it is equally true that a duplicated, non-functional gene cannot even benefit from positive selection (at least until the new functional configuration is attained, and the gene is supposed to gloriously reenter the transcribed state, and integrate itself magically into the pre-existing environment).
So, no selection in duplicated non-functional genes: neither negative, nor positive.
I don’t believe that’s good news for darwinists. It simply means that any complex functional variation will never happen, that functional intermediates are not possible, and that the whole scenario is a silly myth.
Thanks gpuccio for bringing clarity. Do you have any comment on her ‘designed’ protein side-chain argument? i.e.
here is the abstract:
Designed Protein-Protein Association
http://www.sciencemag.org/cgi/.....9/5860/206
That is the one study that caught my eye gpuccio. Though it is clearly a designed change, I was wondering exactly how far they were able to push the change before they ran into a road block...
BioLogos also has two other students who are to present their critiques of ‘The Edge’ in the near future. Hopefully Dr. Behe will address them on ENV, or here on his blog on UD, but if not I may be back to ask you your opinion again 🙂
#30 Gpuccio
I am sorry. I was not clear enough.
My model goes like this. The key difference is that there is no mutation required before isolation. Isolation might, for example, be geographic.
Stage (1).
We have a species a with protein A. A subset of that species gets genetically isolated for some reason, i.e. there is no gene flow between this subset and the rest of the species. It does not have to be particularly large – in fact the process is likely to happen quicker in a small founder community. Originally it shares the same gene pool as the main species, but because it is isolated it is almost certain to start diverging – the environment may be different, or it may simply be genetic drift. There is nothing freakish about this process. Subsets of species get genetically isolated all the time and some divergence would almost inevitably happen – we see this in small human communities.
Stage (2).
At some stage the isolated population, after extensive divergence which gives little fitness advantage, stumbles upon a mutation which gives significant fitness advantage. At this point the intermediary versions in the isolated community will quite quickly be eliminated by competition. These two stages are the only essential stages needed to answer your question. There is nothing ad hoc about them.
Stage (3).
Optionally (!) sometime later the barriers may be removed. If the communities can interbreed we have two significantly different races of one species. If they can no longer interbreed we have speciation.
If the barriers are never removed then we can only guess whether they are same species or different races. Perhaps the difference in the phenotype will be so large it is obvious – maybe not.
gpuccio, I’ve just heard that Dr. Behe will be responding to BioLogos in a few days.
Mark:
The problems:
1) A small population has a small probability of finding useful mutations. The smaller the population, the smaller the probability.
2) You say:
At some stage the isolated population after extensive divergence which gives little fitness advantage stumbles upon a mutation which gives significant fitness advantage. At this point the intermediary versions in the isolated community will quite quickly be eliminated by competition.
But for the mechanism to work, the “divergence which gives little fitness advantage” should instead be a series of related intermediaries, each of which expands to cover the whole population (or most of it).
The reason is simple. The population is small in the beginning. So let’s say that the first “favourable mutation”, which is also a step in the direction of the final result, is in itself already difficult to achieve. If that first mutation does not expand to at least almost all of the original “subpopulation”, there will be practically no chance that a second mutation in the same direction can happen in the same subclone where the first mutation happened. Expansion is essential for that.
Remember that we created our 20 intermediaries in the model exactly because the transition from A to B was too big. But each of the intermediate transitions is not completely easy to obtain. I have supposed a deconstruction of a 50 AA transition into 20 steps, just to stay broadly within the limits of “the edge of evolution” (two or three coordinated mutations per step).
Such a result would be extremely difficult to attain even in a large, rapidly reproducing population. In a small isolated subpopulation it is really a challenge. In a single clone which does not expand, it is impossible.
That’s why each intermediate step must be favourable enough to expand and replace the original subpopulation. And that must happen 20 times.
And I suppose that the final “b” population will then expand, and that a subset of it will again be segregated to start the process again towards some new species “c”, through some new 20 or 30 functional intermediates of which no trace will be left.
Can you see how absurd all this is?
Darwinists are forced to stick to meaningless, vague concepts like “extensive divergence which gives little fitness advantage”. That is senseless. A random divergence which is not really functional will produce what we often observe: diversity of expression of the same functions, without any new functional information accruing. That diversity will go on co-existing in the same pool. There will be no expansion of a trait, which alone can be the basis for further accumulation of function.
If the steps don’t expand, the general probabilities remain the same. There is no sense, then, in deconstructing a transition into a series of intermediates. You still have to explain how a probabilistically impossible transition happened thousands of times, without any reasonable path to it, but only by sheer luck.
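A quick toy calculation (with invented numbers) shows why deconstruction without expansion gains nothing. If each of the 20 steps has some small per-clone probability, and no step expands, the step probabilities simply multiply, exactly as if the whole transition had to happen in one jump:

```python
import math

# Assumed per-clone probability of one intermediate step (e.g. a small
# coordinated change). The figure is purely illustrative.
p_step = 1e-16
n_steps = 20

# Without expansion, the joint probability is the plain product of the
# step probabilities; computed in log-space to avoid float underflow.
log10_joint = n_steps * math.log10(p_step)
print(log10_joint)  # about -320, i.e. ~10^-320: no gain from the deconstruction
```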
BA:
I’ll look at that paper later, and answer.
BA:
Unfortunately, the paper is not free, and the abstract really says too little. I cannot comment on that basis.
I have not commented about the arguments on protein binding sites in the biologos piece for two reasons:
a) I don’t remember exactly what Behe says, and I could create confusion.
b) In general, I find the argument irrelevant and boring. Protein binding sites can certainly be more or less complex, but most of them are certainly more complex than two AAs. So, the two AA threshold determined empirically by Behe is still an insurmountable barrier for most of them.
Moreover, you well know that for me the true “impossible problem” for darwinism is the emergence of protein superfamilies and basic domains, which usually require at least 200 – 300 bits of complex information even in the simplest cases. Why should I care about simple protein binding sites, when there is so much more to explain?
gpuccio, I fully agree with you that neo-Darwinism has far more severe problems than quibbling about a few binding sites, but nonetheless I look forward to Dr. Behe’s response. It seems that every time they try to refute the severe limit he has established for evolution, they end up with egg on their face and the limit grows even more severe than he originally laid out in his book.
notes:
“The Edge of Evolution: The Search for the Limits of Darwinism”
http://www.amazon.com/Edge-Evo.....0743296206
The Edge Of Evolution – Michael Behe – Video Lecture
http://www.c-spanvideo.org/program/199326-1
An Atheist Interviews Michael Behe About “The Edge Of Evolution” – video
http://www.in.com/videos/watch.....34623.html
#35 Gpuccio
I am sorry but you are making a classic mistake. The mutations in the population are not directed towards anything. They don’t need to be established widely in the population (although the mathematics of genetic drift does allow for neutral mutations to become widely established). There will be a number of strands of cumulative mutations with little impact on the phenotype going on – most of which lead nowhere. Then one will happen to stumble upon a fitness advantage. There could have been many other potential destinations that provided a fitness advantage – that population just happened to stumble upon that one.
Mark:
I am sorry, but it’s you who are making a mistake, maybe non classic, but a mistake just the same.
I have never said that mutations in the darwinian model are towards something. But you must remember the reasoning we followed to understand my point.
I had suggested an example of a darwinian explanatory model which could use the concepts of RV and NS to make a transition of 50 amino acids possible through 20 functional intermediates which represent steps towards the final transition.
The whole model is based on that assumption. But it is not an “a priori” assumption (mutations did not happen towards the final result), but rather an “a posteriori” assumption (for the explanatory model to explain anything, it must have happened that mutations led to each of the intermediate steps; otherwise we are left again with an impossible 50 amino acid variation to be explained).
If you read carefully all my reasoning in the previous posts, you will see that there is no other assumption than that.
So, I maintain that my reasoning in this example is completely darwinian and orthodox.
You say:
“They don’t need to be established widely in the population (although the mathematics of genetic drift does allow for neutral mutations to become widely established).”
But it is you who miss the real point of darwinism. The expansion of a mutation is exactly what positive NS does. In brief:
a) negative NS eliminates negative mutations. In this way, it “fixes” functional mutations, either existing or new. But the functional mutation, although fixed, remains limited to the descendants of the individual where the mutation happened. In this way, the probability of any further useful mutation in that clone has to be multiplied by the probability of the first useful mutation: IOWs, we are still in the field of pure randomness (except for the “fixing” effect).
To be more clear, if the original population is of 10^15 bacteria, and a first useful mutation (with a probability of 10^-15) happens in one of them, the probability that a second similar mutation may happen in the general population is still 10^-15, while the probability that the same mutation may happen in the single clone with the first mutation is 10^-30. That’s because, if the clone does not expand, its probabilistic resources (number of reproductions) are 10^15 times lower than those of the general population. That’s why the expansion of each selected trait is so important to the darwinian model: it offers a way to “tame” the probabilistic barriers.
b) positive NS is supposed to expand the positive trait. That is the real trick. You cannot do anything in a darwinian model without that expansion. All darwinian examples and reasonings about microevolutionary models (antibiotic resistance), selective pressures, fitness functions, competitions, and so on, IOWs all the mythology of darwinism, are based on the concept of expansion of the fittest at the expense of the less fit.
So, to sum up: in a darwinian model, NS can only eliminate negative mutations and “protect” functional information from degeneration (“fix” it). That is completely insufficient to solve the problem of the probabilistic barriers.
Only positive NS, which in theory can expand a functional mutation in the population at the expense of the old population, can give the probabilistic resources to bring the model at least within range of credibility.
And that expansion has to happen each time a functional step is found, otherwise that functional step would behave exactly like a neutral mutation, and there would be no real gain in the model.
That’s exactly the reason why neutralists have had to use the concept of genetic drift to “mend” the absence of an expansion mechanism in their theory.
But the “cure” is worse than the “disease”. It is true (perhaps) that genetic drift can expand neutral mutations, but it certainly expands them randomly, and confers no advantage in the model as to the probabilities of a certain outcome.
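The bacterial arithmetic at point (a) above can be written out explicitly, using the same round numbers:

```python
# Round numbers from the example above: 1e15 bacteria, and a probability
# of 1e-15 per lineage for each specific useful mutation.
population = 1e15
p_mutation = 1e-15

# Somewhere in the whole population, the FIRST mutation is expected to
# show up: 1e15 trials at probability 1e-15 each.
expected_first = population * p_mutation  # ~1

# If the mutated clone does NOT expand, the SECOND mutation must occur
# within that single lineage, so the probabilities multiply:
p_both_in_one_clone = p_mutation * p_mutation  # ~1e-30

# If positive selection expands the clone to the whole population, the
# second mutation again has the full population's reproductions available:
expected_second_after_expansion = population * p_mutation  # ~1 again

print(expected_first, p_both_in_one_clone, expected_second_after_expansion)
```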
Instead, the darwinian model can work in theory. What a pity that, in order for it to work, a very simple premise would have to be true which is instead false:
All complex functions should be deconstructable into a series of simpler, functional, selectable, fixable and expandable steps
All my reasoning here was hypothetically “accepting” such a premise for a hypothetical “real case”. See my premise at post 20, which is the basis for my whole discussion here:
“Therefore, our darwinian friends have to deconstruct the transition into intermediate functional steps, just to be credible, so that NS can enter the scenario.
Let’s pretend they succeed (they never have, but just for discussion…).”
So, the purpose of my discussion is:
a) If it were true that complex functions are deconstructable into a series of functional intermediates (which would allow the darwinian model to work),
b) then why can’t we find any trace of those molecular functional intermediates in the present proteome, if it is true that each of those intermediates had to be selected, fixed and expanded for the model to work?
So, please, re-read all my arguments here in the light of these clarifications (if you like, and if you have the time 🙂 ).
BA and gp:
I have access to Science magazine. I’ve read the paper. But, I have very little background for assessing it. However, I can make a few points:
(1) They are dealing with “permanent” binding sites; not transitional binding sites which are involved in metabolism. [“In a cell, permanently associated proteins guarantee mechanical integrity, whereas transient associations are indispensable for metabolism and the regulation thereof.”]
(2) We’re basically dealing with structural proteins, proteins involving repetitive structures which the authors call ‘symmetries’.
(3) Because of the symmetric nature of these proteins, the effect of any ONE mutation in the binding site is therefore ‘multiplied’; hence, a ‘single’ or ‘double’ mutation is sufficient. However, when the symmetries are high enough, NO mutations are necessary (which I’m not clear on why this is so).
(4) The introduced mutations bring about binding; however, the binding sites have their problems, since the globular protein is affected (negatively).
(5) The mutations selected were ‘designed’ based on certain rules they had detected in the protein polymers that normally form.
Here are some quotes that come from the end of the paper:
“Symmetry is an important factor in protein association because it enhances the multiplicity of a single point mutation . . . . The highly symmetric Rua octamers with a contact multiplicity of 8 form complexes after merely one or two mutations, and the 16-fold contact of Myp-A required no designed mutations at all. On the other hand, high multiplicity is hazardous for evolving organisms that need to avoid detrimental mutations. [My emphasis]. . . . To what extent did our constructs follow the design? . . . The available high resolution of Uro-A revealed that the novel contact deviated slightly from the design but was so strong that it caused the opening of a large, surprisingly weak interface within the protein partners. . . . Because this Rua-B complex can be explained by the influence of the side-chain mobility, the mobility is probably an important factor in contact design. Our experiments demonstrate that the production of a particular contact is quite feasible, whereas high precision seems difficult to achieve.”[My emphasis]
Very typically, in critiquing Behe, Darwinists want to use these structural type proteins as an argument to defeat his claims. However, when it comes to ‘life’, it is the truly ‘life-giving’ functions that matter; not so much the structural. We know that lipids in water form membrane-like structures (only in the most basic of ways); this is due to strictly chemical interactions. [Notice that the multiplicity which allows a single mutation to have a larger effect also introduces a grave danger.] It seems to me that when it comes to determining the power of NS to bring about positive changes, these structural kinds of interactions are only trivially important. It would seem our student author picked up on this study, just as another student author picked up on the vpu protein in HIV—if you remember that incident.
I wonder if the Darwinists like to use students to carry their water: it eliminates the need to put their reputation on the line in any confrontation with Behe, and, additionally, should the student prevail it would be an instance where some ‘wet behind the ears’ student knew more than one of the key proponents of ID. They can’t resist this. But it strikes me as rather cowardly.
Well PaV on the bright side, hopefully when Dr. Behe is done correcting the students’ papers, they will learn first hand not to trust what their BioLogos teachers are telling them as to the ‘unlimited edge of evolution’:
OT:
I found this gem from Stephen Meyer that I had somehow missed earlier this year:
Higher Level Software Design In Cells – Stephen Meyer
http://www.metacafe.com/w/5495397
PaV you state:
‘just as another student author picked up on the vpu protein in HIV—if you remember that incident.’
Who could ever forget ‘that incident’, or the ‘lady’ behind ‘that incident’?
I believe she could have made a sailor blush 🙂
PaV:
Thank you for the information.
I agree with you that the paper is absolutely irrelevant to the subject which was being discussed.
Gpuccio #40
OK – I have some time now. On rereading, I realise we got sidetracked into answering a different question from your original one – which you repeat in #40.
Then, why can’t we find any trace of those molecular functional intermediates in the present proteome? If it is true that each of those intermediates had to be selected, fixed and expanded if the model is to work?
I believe that in many cases there are functional intermediates between two proteins – right? However, I am sure there are also many cases where there are no such intermediates. I imagine there are several reasons why this might be the case. I offered just two. The first was that a single mutation could actually cause a very large number of changes in a protein, i.e. just because two proteins differ in a lot of amino acids it does not mean there were actually a lot of intermediates. You did not address this. But let us turn to the second reason.
In essence I am saying that if there was a transition from protein A to protein B via X, Y, Z etc and protein B had very significant fitness advantages over A, X, Y, Z … etc then you would expect X, Y, Z etc to be eliminated by natural selection. The interesting question is why A still exists. The answer is that the transition from A to B happened in an isolated population. So the population with A was not exposed to competition from B. If at some stage the populations are no longer isolated the population with protein A will also have moved on – but in a different direction quite possibly adapted to a different niche – so two populations with quite different proteins and no intermediates live side by side.
A second question is what is the probability in an isolated population of getting from A to B? And, closely linked: does this require that each intermediary become fixed in all or most of the population? This is complicated. I believe it is the subject of population genetics and I am not an expert in this area – but it will be fun to make an inexpert stab at it in another comment.
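For what it’s worth, the standard population-genetics approximations relevant to Mark’s fixation question can be sketched as follows (these are textbook diffusion-approximation results, not figures from this thread):

```python
# Classic approximations from population genetics (Kimura):
#  - a NEUTRAL mutation fixes with probability equal to its initial
#    frequency, 1/(2N) for a single new copy in a diploid population of N;
#  - a BENEFICIAL mutation with small selection coefficient s fixes with
#    probability of roughly 2s in a large population.

def p_fix_neutral(n_diploid: int) -> float:
    """Fixation probability of a single new neutral mutation."""
    return 1.0 / (2 * n_diploid)

def p_fix_beneficial(s: float) -> float:
    """Approximate fixation probability of a new beneficial mutation
    (valid for 0 < s << 1 and a large population)."""
    return 2.0 * s

print(p_fix_neutral(10_000))   # 5e-05: almost all neutral mutations are lost
print(p_fix_beneficial(0.01))  # 0.02: even a 1% advantage usually fails to fix
```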
Mark:
I addressed your “single mutation – great variation” argument in my post #29. I paste here the relevant part for your convenience:
“Let’s go to the possibility that one single variation event (inversion, frameshift mutation, or similar) may completely change one sequence into a different one. That is certainly possible. Frameshift mutations are perhaps the best example.
In a frameshift mutation, a single nucleotide insertion or deletion causes a completely different reading of the existing information. The whole sequence changes.
And so? From a probabilistic point of view, nothing changes. This is a simple point that darwinists have difficulty understanding.
Let’s be more clear. You start from protein A and the final result is protein B. Both are functional, but they belong to different superfamilies, and are therefore unrelated at primary sequence level.
That means that, to get B, most of the amino acids in A must change. Let’s say that at least 50 amino acids have to change exactly, which means a transition of 216 bits.
As A is unrelated to B, any change can be considered as a random attempt at finding B. IOWs, the transition from A to B is a random walk.
There is no significant difference if you change one AA at a time, or if you just “try” a completely different sequence. The probabilities of success remain practically non-existent.
The proof of that is that no frameshift mutation is known to have generated a new functional protein.
As you may know, darwinists have blindly believed for years that such an example existed: Ohno’s theory about the emergence of nylonase. But obviously, serious research has clearly shown that such a theory was wrong.
So, a sudden transition has no possibility to find a new isolated island of function, because it does not allow even the theoretical possibility that functional intermediates may make the transition possible.”
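The 216-bit figure in the quoted passage follows directly from treating each of the 50 required positions as one specific choice among the 20 amino acids:

```python
import math

# 50 positions must each hit one specific amino acid out of 20 options
# (the passage's simplifying assumption of an unrelated target sequence).
n_positions = 50
options = 20

bits = n_positions * math.log2(options)   # information content of the target
p_one_attempt = options ** -n_positions   # chance of one random attempt

print(round(bits, 1))   # ~216.1 bits
print(p_one_attempt)    # ~8.9e-66 per attempt
```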
Proteins belonging to different superfamilies have no known intermediates. Proteins belonging to the same superfamily or family do show continuity in natural history: they often keep the same function with differences in primary structure (probably the effect of neutral mutations), or may vary their function through small variations at the active site level (while the general folding and basic biochemical activity remain the same).
Your suggested model is possible, but IMO it is definitely “ad hoc” and not credible. It implies many unlikely assumptions:
a) That each time a new superfamily arises, a subpopulation is conveniently isolated, goes through the necessary functional intermediates, and finds the new protein structure in complete isolation from the original population, without any survival of the intermediate functional molecules.
b) That this happened not once or a few times, out of some contingency, but always, as though it were a law of nature. Indeed, all protein superfamilies are isolated, and no intermediates are known for any of them. That should mean something, shouldn’t it?
c) That those isolated small populations retained sufficient probabilistic resources to effect the transition through all the necessary steps, even with their reduced number of replicating individuals.
d) That the expansion of each intermediate, or of the final intermediate, was complete, erasing any trace of the process. This is particularly unreasonable. It’s not what we observe in nature. Even in humans, a lot of strongly negative mutations do survive: human mendelian diseases are a good example of that. And many of them are even dominant, not recessive. In sexual reproduction, alleles usually survive even if “negative”, and freely circulate in the population.
So, if even negative, and certainly neutral variants of a functional protein do survive, why should all those very functional intermediates of the transition, which were selected and expanded for their intrinsic functionality, be completely erased from the genetic memory? And this every single time?
While, as we know, the original starting protein, the least functional of all, does survive?
IOWs, this is a very, very bad model to explain what we observe in protein superfamilies.
A good model, a credible and parsimonious one, is to affirm that each new superfamily arises independently of the existing ones, or through a rather sudden, designed transition, which does not require the intermediates, or at least does not require their expansion in natural populations. That explains well why we find the individual superfamilies, and no trace of intermediates.
#46
Gpuccio
I am sorry. You are quite right you did attempt to address the “frameshift” argument. Although I think the way you address it fails.
(1) Frameshift is by no means the only way that a protein can undergo a large transformation in one mutation. All of the “chromosomal” mutations fall into this category – insertions, reversals etc.
(2) You seem to be reverting to arguing about the probability of getting from A to B rather than why there are no intermediaries (i.e. the chances that a large-change mutation will result in a fitness benefit). The large change/one mutation argument addresses the lack of intermediaries, does it not? Let us stick to one issue at a time, and talk about the chances of a mutation being useful separately.
Reverting to the second model for no intermediaries.
I stressed
(a) that this is just one way in which intermediaries might be removed – so not “each time”, just sometimes, and not as some kind of “law of nature” – just one thing that happens sometimes.
(b) I believe there are a good number of cases in which some or all of the intermediaries exist! (In fact wouldn’t a superfamily be a case where pretty much all the intermediaries exist?). So it is not the case that “the expansion (sic) of each intermediate, or of the final intermediate, was complete, erasing any trace of the process.” on every occasion. It just happens sometimes.
At the end you write.
A good model, a credible and parsimonious one, is to affirm that each new superfamily arises independently of the existing ones, or through a rather sudden, designed transition,
Well of course – a designed transition can explain anything – unless of course you are prepared to discuss how it is implemented and by whom – in which case it gets a little harder.
Mark:
Maybe we understand a little better now.
Just a few further remarks:
a) I have discussed frameshift mutations because that was the only mechanism of “sudden big transition” for which (as far as I am aware) at least one explicit model (although wrong) has been suggested (nylonase). But any other mechanism (inversion, deletion, translocation) can be treated in the same way.
The problem is: such random events can certainly change a lot of amino acids at a time. In the same way, random neutral mutations can change a lot of amino acids sequentially, without any need of functional intermediaries expanding in the population. In the end, there is no difference. They are random mechanisms anyway.
All these purely random mechanisms can certainly, in principle, determine a final functional transition of, say, 50 amino acids.
But the problem is that, being completely random, they must be evaluated by a probabilistic model.
IOWs, by insisting on a model without functional intermediaries which expand through positive selection, you are practically dropping the darwinian theory in favour of a purely random model, which is a way of renouncing any explanatory power: a random model will not do, as we in ID know very well.
It’s no accident that darwinists constantly appeal to the necessity part of their model, NS, to defend themselves from the attacks of ID.
But the intervention of positive NS is possible only if the deconstruction into functional intermediaries can be achieved (and it can’t); and anyway, it raises the problem of the fate of the intermediaries.
So, it’s not me who want the intermediaries at all costs. It’s the darwinian model which requires them!
b) About the erasure of the intermediaries being “just one thing that happens sometimes”.
My model is about the emergence of superfamilies. I am explicit and detailed about that.
We have 2000 superfamilies, not one.
Each of them emerged at some point of natural history.
For none of them are intermediaries known.
Is that your idea of “sometimes”?
In a superfamily, it’s not so much that “intermediaries exist”. It’s just that the basic function is maintained throughout the whole natural history of the superfamily.
Myoglobins remain myoglobins, both in the mosquito and in humans. What we observe is a continuity in the homology, and differences at the primary structure level, which I have myself brought here as a good argument for common descent. But no new function is “discovered”.
In other cases, like in the recent article about nuclear receptors, we can find in a single superfamily a variety of different sub-functions (in the context of the basic function, which remains the same) which can be explained by slight variations at the active site level. This variety of subfunctions is usually in the range of microevolutionary transitions, so it can potentially be explained by a microevolutionary model. However, I don’t believe that true intermediaries are known even here, but indeed, the transitions being small, they are not really necessary in this context.
For protein families which are strongly different at the primary level, even in the context of the same superfamily, a detailed analysis should be carried out in each case.
But the fundamental question remains: how was the primary level of protein functional information (the 2000 superfamilies) generated?
c) You say:
“unless of course you are prepared to discuss how it is implemented and by whom – in which case it gets a little harder.”
I am certainly prepared to discuss that. So should science be. And hard does not mean impossible. It can be done.
And, however hard it is, we can at least hope to find scientific explanations that way.
But certainly not by sticking to models which don’t explain anything and which are falsified by facts.
gpuccio, do you have a good reference for protein superfamilies? I looked around a bit but seem to be getting numbers all over the place, from ‘several thousand’ to over 60,000.
BA:
I always refer to the data in the SCOP (Structural Classification of Proteins) site, here:
http://scop.mrc-lmb.cam.ac.uk/scop/
The SCOP classification is based on 38221 PDB (Protein Data Bank) entries, for a total of 110800 domains, and is at present updated to June 2009. Practically, it is based on all we know of the protein structures in the existing proteome.
If you click on “Folds, superfamilies, and families statistics here. “, at the top of the home page, you will get a table (the most recent), which gives the total number of folds, superfamilies and families (and the number of them in each of the basic 7 biochemical types).
The “folds” grouping includes at present 1195 entities, the “superfamilies” 1962, the “families” 3902.
To get an idea of how that relates to primary sequence, you can go from the home page to another tool, by clicking in “Access methods” on “SCOP domain sequences and pdb-style coordinate files (ASTRAL)”.
In the ASTRAL page, click on “SCOP 1.75 Sequence Resources”. You will get a tool which allows you to download a list of identifiers of genetic domain sequence subsets (IOWs, of basic domain types), grouped according to criteria you can specify.
If you use the “percent identity” criterion, and set it at the smallest value (less than 10% identity), you get a list of identifiers. If you count them (you can easily do that by copying them and pasting them in Excel), you will see that they are 6258.
If you do the same with the E-value criterion (which is more or less the probability that two sequences are unrelated) and you set it at the highest value of 10 (which means no significant similarity at all), you get 6041 values, which is more or less the same result.
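The paste-into-Excel count can just as well be scripted. A minimal Python sketch of the same count (the three identifiers below are a made-up sample standing in for the real ASTRAL download):

```python
# Count non-empty lines in an ASTRAL identifier list, as a scripted
# alternative to pasting into Excel. The "sample" string is a made-up
# three-identifier stand-in for the real downloaded list.
sample = """d1dlwa_
d1idra_

d2gkma_
"""
identifiers = [line.strip() for line in sample.splitlines() if line.strip()]
print(len(identifiers))  # 3
```

Pointing the same loop at the real downloaded file would reproduce the 6258 / 6041 counts described above.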
These are the broadest groupings you can have. The ASTRAL results are based on primary sequence, while the concepts of fold and superfamily are related to the 3D structure. As you can see, they are even more restrictive (there are proteins which have completely different primary sequence and similar 3D structures).
If you use less restrictive criteria in ASTRAL, you can get higher numbers: for instance, if you set the percent identity to less than 20%, you get 7002 results.
But the concept is very clear. However we group the proteome, we have at present at least 1000 different fundamental folds, 2000 “a little less fundamentally different” folds (the superfamilies), and 6000 totally unrelated groups of primary sequences. I generally use the superfamily number, because I believe it is the most representative.
These are the basic “functional islands”. That does not mean that in the context of a superfamily all is easy to explain for darwinists (not at all!). But there are priorities. The explanation of how the basic functional units came into existence remains, IMO, the main problem, at least at this basic “single protein” level of analysis.
That’s why Axe, for instance, is focused on that aspect too.
Whoa GP. Thanks.
markf:
In earlier posts you seemed to be relying on various functional intermediaries that just happen to be hanging around—and, hence, not requiring time to appear.
Yet, I think of Behe’s work with the malarial parasite. Here is a premier replicator. And, if neutral drift is going to make these handy “intermediaries” available, this would be the replicator to do it. However, in TEoE (The Edge of Evolution) the malarial parasite took an estimated 10^20 replications to come up with the simple 2 amino acid solution that Chloroquine resistance required. My question then is: where were these handy “intermediates”? Remember, Behe dealt with empirical evidence. These are raw facts.
Gpuccio
I lack the expertise to continue this discussion. But it appears that Zachriel, who is one of the many with expertise banned from UD, does. He has made relevant comments on antievolution.org
here
and
here
I can only take a philosopher’s look at it. If there are no intermediaries between two proteins there are two possibilities:
(1) They never existed
(2) They existed but were removed
My explanation for (1) is that there was a large change mutation. You say that the probability of such a mutation being viable is too small. Well that is a long discussion beyond my skill. Remember a translocation moves a viable piece of DNA which may well have been coding for usable proteins elsewhere. Also the resulting mutated DNA does not immediately have to be viable. It may not be expressed and can itself be subject to random variation, recombination etc. The true estimate of the probability of getting a usable result strikes me as a very hard sum even to an order of magnitude.
My explanation for (2) is that the intermediaries were eliminated by natural selection. You consider that to be an ad hoc explanation but of course this is only an extension of the theory of allopatric speciation which has considerable theoretical and empirical support.
Meanwhile what could be more ad hoc than an unknown designer of unknown powers and motives made it that way?
Mark:
thank you for pointing to Zachriel’s comments about my posts. As you know, I really appreciate his contributions, which are always intelligent and respectful. I am really sorry that he can’t post here. Unfortunately, I don’t usually read in the place where he writes, because it is not at all a useful experience (except for a deeper understanding of the worst traits in human nature). Not Zachriel’s fault, obviously.
I have read the two posts you link. They say correct things, but irrelevant ones.
The first post is merely restating what I have stated many times: that RV + effective selection of each step works. I have just posted about that in another thread, restating the concept. My words are almost exactly like Zachriel’s. So, I can see no disagreement here.
In the second post Zachriel, maybe forgetting that he himself has used the wrong example of nylonase to support his claims about frameshift mutations, misses a good occasion to remain silent, and is forced to use a couple of theoretical model papers which really demonstrate nothing. Zachriel, I am waiting for a new “nylonase like” example. Maybe you will be luckier next time. In the meantime, be more cautious about what papers you boldly present as “evidence”. 🙂
His remark about possible translocation of motifs is obviously correct, but irrelevant to the origin of superfamilies (exon shuffling can certainly be a mechanism for multi-domain proteins).
His statement that “random sequences can result in new proteins, including many of the stable folds found as the basis of protein superfamilies” is completely gratuitous and unsubstantiated.
His admission/statement that “there may be more than a single ancestor for proteins families, but nothing suggesting design” is simply funny.
Anyway, thanks Zachriel for the interesting contribution.
Mark, going back to you, I accept your explanations for what (IMO) they are: honest, but completely unconvincing.
#52 Pav
Yet, I think of Behe’s work with the malarial parasite. Here is a premier replicator. And, if neutral drift is going to make these handy “intermediaries” available, this would be the replicator to do it. However, in TEoE (The Edge of Evolution) the malarial parasite took an estimated 10^20 replications to come up with the simple 2 amino acid solution that Chloroquine resistance required. My question then is: where were these handy “intermediates”?
I am no expert on the mutations that lead to chloroquine resistance but a brief skim of the literature suggests that
(1) Different populations of the parasite gained their resistance through different mutations
(2) We don’t know precisely what mutations were necessary in any of these cases
(3) No one has investigated to see whether intermediaries do exist.
#54 Gpuccio
Of course Zachriel has no opportunity to respond to your comment here.
I am in a rush right now but will try to create an opportunity for discussion on my own blog later today.
On nylonase. It is interesting that evolutionary theory allows for specific hypotheses about how a gene came about – in this case disproved. Compare that to ID! There does appear to be a lot of literature about frameshift mutations and their ability to create novel proteins. I am not expert enough to evaluate it, but isn’t the sdic gene another example? At least the paper is a detailed hypothesis about how a novel gene was created.
As far as I know, in the course of its 20 years the ID community has not made a single proposal as to how a gene was created.
gpuccio, thanks very much for the link. If you don’t mind I will also use your explanation of how the proteins are grouped, since it makes very clear, from your generous groupings, exactly how much leeway is granted to Darwinists to give ANY plausible explanation, much less an adequate one, for the origination of the many fundamentally different proteins that operate in tightly integrated fashion.
gpuccio, I think you may find this link interesting:
A New Guide to Exploring the Protein Universe
“It is estimated, based on the total number of known life forms on Earth, that there are some 50 billion different types of proteins in existence today, and it is possible that the protein universe could hold many trillions more.”
Lynn Yarris – 2005
http://www.lbl.gov/Science-Art.....verse.html
gpuccio,
I would find a conversation between you and Zachriel to be very interesting. While I suspect you would get a warmer welcome than you think at The Panda’s Thumb, if that’s not your cup of tea, how about inviting Zachriel to a mutually agreeable venue. Perhaps Dr. Hunter would be willing to open a thread on his blog for the two of you.
MathGrrl:
I already had an interesting and rich discussion with Zachriel in the past on Mark’s blog. That can certainly happen again. I am absolutely available, and I would really appreciate a direct confrontation with him, at least as much as is allowed by time and resources.
And no, I will not post at PT or similar. First of all I really don’t appreciate some attitudes very common there, especially the scientistic arrogance.
But anyway, there is another simple reason why I have to post mainly here. I am one person, and I can try to detail my positions here because I don’t usually have to face (though sometimes it happens here too) the various objections of tens of people, some of them sincerely interested in a constructive discussion, some not.
It is a fact that ID is a minority position, and that respect for it as a scientific theory is not common in the other field. This is the reality we have to deal with, and I can deal with it in a better way here.
But sometimes I appreciate a confrontation on neutral enough territory. Recently I debated for some time on Niel’s blog, but even there, with only three interlocutors, the huge variety of the objections and the scarce “cooperation” of at least one of the interlocutors forced me to leave at some point.
Mark:
There does appear to be a lot of literature about frameshift mutations and their ability to create novel proteins. I am not expert enough to evaluate it, but isn’t the sdic gene another example?
No. It is a completely different situation, mainly a chimera of two existing genes by duplication and retrotransposition and some minor mutations.
No frameshift involved, as far as I can see.
Mark:
the same techniques used by darwinists (comparisons of the genomes and proteomes) can be used to try to understand when and how design inputs have happened.
It will come. We just need a scientific community which, at least in part, takes seriously the design hypothesis.
markf:
Certainly (1) is true. Certainly (2) is true. But doesn’t (2) simply imply that we, like NS, don’t know what the ‘target’ is when it comes to the development of resistance? Which then would mean, doesn’t it, that (3) is impossible to determine? IOW, how can you know what an intermediate looks like if you don’t know what the ultimate goal looks like? Isn’t this a real limitation of Darwinian theory? And doesn’t it suggest that if we talk about intermediates we roam in a very speculative area?
But, for our purposes here, I would point out these three things: (1) Behe and Snoke set up a theoretical model that included pseudogenes and gene duplication, with the result that within reasonable numbers of replications only TWO a.a. substitutions were available to Darwinian search; (2) Sir Fred Hoyle, via his own mathematical model (one that at various points reproduces the standard values of population genetics), arrived at the conclusion that NS can only view mutations up to TWO evolutionary steps in either direction; and (3) Behe, in EoE, reaches the same conclusion, but based on empirical evidence: that it is extremely difficult for NS to take two steps involving a two a.a. substitution for each step.
It seems to me that from three different angles, we get the same result: NS is limited to two a.a. steps in any direction. I would presume then that within the genome we would have a plethora of ‘intermediates’ that are ‘one’ a.a. shifted, and probably not so many that are ‘two’ a.a. shifted. The implication is that if a protein gets shifted by more than two a.a.’s, then NS eliminates this life form. Thus, taking a random walk for more than a few a.a.’s seems highly improbable. This is the dilemma here, is it not? Why? Exactly because there don’t seem to be that many available intermediates, and positive selection (=directional selection) will take way too much time to arrive at any major change all by itself.
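The arithmetic behind a figure like Behe’s 10^20 can be sketched in a few lines. A hedged back-of-envelope sketch, using an assumed round-number mutation rate for illustration (not a figure taken from TEoE):

```python
# Back-of-envelope sketch with an ASSUMED round-number rate: if a specific
# point mutation arises at roughly 1e-10 per replication, and two specific
# mutations must be present together before selection can see any benefit,
# the joint per-replication probability is the product of the two rates.
p_single = 1e-10
p_double = p_single ** 2              # both mutations needed in one lineage
expected_replications = 1 / p_double  # expected waiting time in replications
print(f"{expected_replications:.0e}")  # 1e+20
```

The point of the sketch is only that requiring two coordinated steps squares the waiting time, which is why a two-step change demands astronomically more replications than a one-step change.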
PaV, could you reference the Hoyle calculation for me please.
OK – I created an entry on my blog that I hope will be a neutral and friendly venue for this discussion based on mutual respect.
Gpuccio – the paper on the sdic gene I linked to says:
It is a chimeric gene formed by duplication of two other genes followed by multiple deletions and other sequence rearrangements
Are deletions not examples of frameshifts?
Mark:
Thank you for your entry, I will go there soon.
Regarding sdic, the situation is complex. If you look at figure 3 in the paper, you will see that the main sequence in all 4 sdic genes is the same as cdic. It is true, however, that sdic1, the one which seems to be expressed, is truncated in the final part, because it is different in that part from cdic, in part as a consequence of frameshift mutations (but not only).
The problem is that the protein coded by sdic is not really known, nor is its structure and function known.
So, any hypothesis that a frameshift mutation has created a new functional sequence here is completely out of order. Apparently, the only effect of the mutation seems to be a contribution to the loss of part of the functional molecule.
The nylonase model was completely different. Ohno, while wrong, had the courage to make a strong hypothesis: he started from an existing, functional protein, nylonase, and hypothesized in molecular detail that it had originated from an existing protein-coding gene through a specific mutation which had created a new (frameshifted) ORF.
Such a detailed model has a great advantage: it can easily be falsified. And yet, we needed decades to falsify it.
But it was not difficult: I have personally extracted the sequence of the supposed precursor protein from Ohno’s paper, and blasted it. The result: no homology with any existing protein.
IOWs, the supposed precursor protein has never existed. Nylonase, instead, has very strong homology with penicillinase, from which it derives.
The truth, as always, is simple. We must just look for it in the right place.
#62 Gpuccio
There is a danger of conducting the same debate in two places but I want to pick up one point here.
You write:
the same techniques used by darwinists (comparisons of the genomes and proteomes) can be used to try to understand when and how design inputs have happened.
It will come. We just need a scientific community which, at least in part, takes seriously the design hypothesis.
Can you give me an example of how this would work? And why has no one even attempted it? Surely that is the way for a scientific community to take the hypothesis seriously?
BA:
Hoyle, “The Mathematics of Evolution”. He takes a ‘path-integral’ approach to the Darwinian model, and, based on the distribution of genomic variants in the population—this is my way of categorizing his maths—at most one can expect evolution to move either two steps forward or backward from the current distribution. It’s been well over a year since I was beefing up on all of this, so I would have to go back and pull out the math and the quotes that go with it. From memory, I would say that he reaches these conclusions in Chapters 3 and 4.
Thanks PaV, I would like to get all this two-step stuff in one place; as well, remember that Seelke has done work on the two-step limit:
Response from Ralph Seelke to David Hillis Regarding Testimony on Bacterial Evolution Before Texas State Board of Education, January 21, 2009
Excerpt: He has done excellent work showing the capabilities of evolution when it can take one step at a time. I have used a different approach to show the difficulties that evolution encounters when it must take two steps at a time. So while similar, our work has important differences, and Dr. Bull’s research has not contradicted or refuted my own.
http://www.discovery.org/a/9951
Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness – Ann K. Gauger, Stephanie Ebnet, Pamela F. Fahey, and Ralph Seelke – 2010
Excerpt: In experimental evolution, the best way to permit various evolutionary alternatives, and assess their relative likelihood, is to avoid conditions that rule them out. Our experiments, like others (e.g. [40]), used populations of cells growing slowly under limiting nutrient conditions, thereby allowing a number of paths to be taken to higher fitness. We engineered the cells to have a two-step adaptive path to high fitness, but they were not limited to that option. Cells could reduce expression of the non-functional trpAE49V,D60N allele in a variety of ways, or they could acquire a weakly functional tryptophan synthase subunit by a single site reversion to trpAD60N, bringing them within one step of full reversion (Figure 6). When all of these possibilities are left open by the experimental design, the populations consistently take paths that reduce expression of trpAE49V,D60N, making the path to new (restored) function virtually inaccessible. This demonstrates that the cost of expressing genes that provide weak new functions is a significant constraint on the emergence of new functions. In particular, populations with multiple adaptive paths open to them may be much less likely to take an adaptive path to high fitness if that path requires over-expression.
http://bio-complexity.org/ojs/.....O-C.2010.2
Markf you ask about the correct hypothesis:
The foundational rule for the diversity of all life on earth, of Genetic Entropy, which can draw its foundation in science from the twin pillars of the Second Law of Thermodynamics and from the Law of Conservation of Information (Dembski, Marks, Abel), can be stated something like this:
“All beneficial adaptations away from a parent species for a sub-species, which increase fitness to a particular environment, will always come at a loss of the optimal functional information that was originally created in the parent species genome.”
Markf the fossil record also shows loss of information:
In fact, the loss of morphological traits over time, for all organisms found in the fossil record, was/is so consistent that it was made into a ‘scientific law’:
Dollo’s law and the death and resurrection of genes:
Excerpt: “As the history of animal life was traced in the fossil record during the 19th century, it was observed that once an anatomical feature was lost in the course of evolution it never staged a return. This observation became canonized as Dollo’s law, after its propounder, and is taken as a general statement that evolution is irreversible.”
http://www.pnas.org/content/91.....l.pdf+html
A general rule of thumb for the ‘Deterioration/Genetic Entropy’ of Dollo’s Law as it applies to the fossil record is found here:
Dollo’s law and the death and resurrection of genes
ABSTRACT: Dollo’s law, the concept that evolution is not substantively reversible, implies that the degradation of genetic information is sufficiently fast that genes or developmental pathways released from selective pressure will rapidly become nonfunctional. Using empirical data to assess the rate of loss of coding information in genes for proteins with varying degrees of tolerance to mutational change, we show that, in fact, there is a significant probability over evolutionary time scales of 0.5-6 million years for successful reactivation of silenced genes or “lost” developmental programs. Conversely, the reactivation of long (>10 million years)-unexpressed genes and dormant developmental pathways is not possible unless function is maintained by other selective constraints;
http://www.pnas.org/content/91.....l.pdf+html
Dollo’s Law was further verified to the molecular level here:
Dollo’s law, the symmetry of time, and the edge of evolution – Michael Behe
Excerpt: We predict that future investigations, like ours, will support a molecular version of Dollo’s law:,,, Dr. Behe comments on the finding of the study, “The old, organismal, time-asymmetric Dollo’s law supposedly blocked off just the past to Darwinian processes, for arbitrary reasons. A Dollo’s law in the molecular sense of Bridgham et al (2009), however, is time-symmetric. A time-symmetric law will substantially block both the past and the future.
http://www.evolutionnews.org/2.....f_tim.html
This following tidbit of Genetic Entropy evidence came to me from Rude on the Uncommon Descent blog:
At one of the few petrified forests that sports ginkgo wood, I was told by the naturalist that ginkgos are old in the fossil record—they date from the Permian back when trees were first “invented”. She said that there are many species of fossilized Ginkgoaceae, but Ginkgo biloba, is the only living species left. – Rude – Uncommon Descent
The following site points out that there is a fairly constant, and unexplained, ‘background extinction rate’. My expectation for extinctions, at least for the majority of extinctions not brought about by catastrophes, is for the fairly constant rate of ‘background extinctions’ to be attributable directly to Genetic Entropy:
The Current Mass Extinction
Excerpt: The background level of extinction known from the fossil record is about one species per million species per year, or between 10 and 100 species per year (counting all organisms such as insects, bacteria, and fungi, not just the large vertebrates we are most familiar with). In contrast, estimates based on the rate at which the area of tropical forests is being reduced, and their large numbers of specialized species, are that we may now be losing 27,000 species per year to extinction from those habitats alone. The typical rate of extinction differs for different groups of organisms. Mammals, for instance, have an average species “lifespan” from origination to extinction of about 1 million years, although some species persist for as long as 10 million years.
http://www.pbs.org/wgbh/evolut.....32_04.html
Psalm 104: 29-30
You hide Your face, they are dismayed; You take away their spirit, they expire And return to their dust. You send forth Your Spirit, they are created; And You renew the face of the ground.
One persistent misrepresentation that evolutionists continually make of the fossil record is that +99.9% of all species that have ever existed on earth are now extinct because of ‘necessary evolutionary transitions’. Yet the fact is that 40 to 80% of all currently living species on earth are represented fairly deeply in the fossil record. In fact, some estimates put the number of species living today at around 230,000, whereas we have about a quarter of a million different species collected in our museums. Moreover, Darwin predicts we should have millions of transitional fossil forms. The following videos, quotes, and articles clearly point this fact out:
The Fossil Record – The Myth Of +99.9% Extinct Species – Dr. Arthur Jones – video
http://www.metacafe.com/watch/4028115
“Stasis in the Fossil Record: 40-80% of living forms today are represented in the fossil record, despite being told in many text books that only about 0.1% are in this category. The rocks testify that no macro-evolutionary change has ever occurred. With the Cambrian Explosion complex fish, trilobites and other creatures appear suddenly without any precursors. Evidence of any transitional forms in the fossil record is highly contentious.”
Paul James-Griffiths via Dr. Arthur Jones
http://edinburghcreationgroup......paper1.php
The following studies show that the number of species that are currently alive is well below the ‘millions of species’ that are commonly believed to be alive:
Marine Species Census – Nov. 2009
Excerpt: The researchers have found about 5,600 new species on top of the 230,000 known. They hope to add several thousand more by October 2010, when the census will be done.
http://news.yahoo.com/s/ap/200.....ine_census
Scientists finish first sea census – October 2010
Excerpt: The raw numbers behind the $650 million Census of Marine Life are impressive enough: Almost 30 million observations by 2,700 scientists from more than 80 nations spent 9,000 days at sea, producing 2,600 academic papers and documenting 120,000 species for a freely available online database.
http://cosmiclog.msnbc.msn.com.....sea-census
Census of Marine Life Publishes Historic Roll Call of Species in 25 Key World Areas – August 2010
Excerpt: In October, the Census will release its latest estimate of all marine species known to science, including those still to be added to WoRMS and OBIS. This is likely to exceed 230,000. (Please note how far off the 230,000 estimated number for species to be found is from the actual 120,000 number for species that were actually found in the census)
http://www.sciencedaily.com/re.....173704.htm
etc… etc…
I dunno markf, the fossil evidence shows, and has always shown, sudden appearance and rapid diversity, with long-term stability following, as well as loss of morphological variability over the long term. And yet, despite the fact that evolutionists have never shown a gain of functional information above what was already present in the parent species, they refuse to report honestly on the evidence, and continue to try to intimidate anyone who questions the neo-Darwinian paradigm. Mark, you act as if the evidence is just reported on honestly and that ‘scientists’ will all of a sudden start to be fair if the correct model for classification is in place. If you truly do believe that, you are very naive in this matter, for this ‘discrepancy of evidence’, which evolutionists have falsely subjected the public to, turns out to be very much an atheistic/materialistic religious dogma, maintained as true by the reigning priesthood of Darwinism no matter how crushing the evidence is against neo-Darwinism.
#70 BA
“All beneficial adaptations away from a parent species for a sub-species, which increase fitness to a particular environment, will always come at a loss of the optimal functional information that was originally created in the parent species genome.”
Functional information is measured as a probability – agreed?
The probability of any outcome is relative to two things:
1) How that outcome is specified. For example, the probability of a dice coming down as an even number, a six, or a six angled in a certain way.
2) What evidence you have available. Do you know anything about the manufacture of the dice or the way it is to be thrown? Any past history of the dice or of dice in general? etc.
Therefore the information in an outcome is dependent on
(a) how that outcome is specified
(b) what evidence you have available
So it is a nonsense to talk about the “information created in the parent genome”. Information relative to what specification and what evidence?
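The point can be put numerically: the surprisal of the very same physical throw (in bits, computed as minus the base-2 log of its probability) changes with the specification. A minimal sketch, with the 1-in-24 figure assumed purely for illustration of the “angled a certain way” case:

```python
import math

# Surprisal in bits: -log2(probability). The same physical throw of a
# dice carries different "information" depending on how it is specified.
def surprisal_bits(probability):
    return -math.log2(probability)

print(surprisal_bits(1 / 2))   # specified as "an even number": 1.0 bit
print(surprisal_bits(1 / 6))   # specified as "a six": ~2.58 bits
print(surprisal_bits(1 / 24))  # "a six angled one of four ways" (assumed): ~4.58 bits
```

One outcome, three different information values, because the specification differs each time.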
markf, the principle of Genetic Entropy lines up with ALL available evidence.
The real question you should be asking is what evidence, whatsoever, do you have that information has been generated, by purely material processes, above and beyond what was in the parent species in the first place? i.e. Have Darwinists passed the fitness test?
Is Antibiotic Resistance evidence for evolution? – ‘The Fitness Test’ – video
http://www.metacafe.com/watch/3995248
Have Darwinists falsified ID?
Michael Behe on Falsifying Intelligent Design – video
http://www.youtube.com/watch?v=N8jXXJN4o_A
Have Darwinists falsified Abel’s null hypothesis for the generation of functional prescriptive information by purely material processes?
The Capabilities of Chaos and Complexity: David L. Abel – Null Hypothesis For Information Generation – 2009
To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: “Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.” A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis.
http://www.mdpi.com/1422-0067/10/1/247/pdf
Can We Falsify Any Of The Following Null Hypothesis (For Information Generation)
1) Mathematical Logic
2) Algorithmic Optimization
3) Cybernetic Programming
4) Computational Halting
5) Integrated Circuits
6) Organization (e.g. homeostatic optimization far from equilibrium)
7) Material Symbol Systems (e.g. genetics)
8) Any Goal Oriented bona fide system
9) Language
10) Formal function of any kind
11) Utilitarian work
http://mdpi.com/1422-0067/10/1/247/ag
Have Darwinists ever shown that prescriptive information increases with any mutation?
The GS (genetic selection) Principle – David L. Abel – 2009
Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.”
http://bioscience.bio-mirror.c.....6/3426.pdf
Have Darwinists ever shown that a sub-speciation was wrought by an increase in information and not by a loss in information?
EXPELLED – Natural Selection And Genetic Mutations – video
http://www.metacafe.com/watch/4036840
“…but Natural Selection reduces genetic information and we know this from all the Genetic Population studies that we have…”
Maciej Marian Giertych – Population Geneticist – member of the European Parliament – EXPELLED
Have Darwinists even justified using Natural Selection as the explanation for the diversity of all the life on earth?
The following study is very interesting, for the researcher surveyed 130 DNA-based evolutionary trees to see if the results matched what ‘natural selection’ predicted for speciation, and found:
Accidental origins: Where species come from – March 2010
Excerpt: If speciation results from natural selection via many small changes, you would expect the branch lengths to fit a bell-shaped curve.,,, Instead, Pagel’s team found that in 78 per cent of the trees, the best fit for the branch length distribution was another familiar curve, known as the exponential distribution. Like the bell curve, the exponential has a straightforward explanation – but it is a disquieting one for evolutionary biologists. The exponential is the pattern you get when you are waiting for some single, infrequent event to happen.,,,To Pagel, the implications for speciation are clear: “It isn’t the accumulation of events that causes a speciation, it’s single, rare events falling out of the sky, so to speak.”
http://www.newscientist.com/ar.....tml?page=2
etc.. etc.. etc..
The point, markf, is that you have no scientific basis in the first place to dictate to me what you think should or should not be a proper measure for ascertaining Genetic Entropy, i.e. information, since you actually have no firm hypothesis to work from in the first place with which to counter me!!! All you can hope to do to protect your delusions of scientific integrity within neo-Darwinism is to obfuscate, with the smoke and mirrors of rhetoric, just how hopelessly pathetic the case for neo-Darwinism is!!!
BA77
“The point being markf is that you have no scientific basis in the first place as to dictate to me what you think should be a proper measure or not for ascertaining Genetic Entropy i.e. information”
I am using the measure of information that the ID community provides! Look at the glossary or any of Dembski’s publications if you don’t believe me. They all define the measure of information as probability. Even the papers you refer to in your comments such as Dembski’s on the law of conservation of information measure it as a probability.
All I am asking you is when you talk of the information in a genome
(a) what specification are you using?
(b) what is the knowledge on which the probability is based?
I am not dictating anything. I am just asking you a question.
markf, pass any of the tests I listed and then we’ll talk.
Mark:
Can you give me an example of how this would work? And why has no one even attempted it? Surely that is the way to get the scientific community to take the hypothesis seriously?
There are many possible lines of investigation.
I believe that most data will come from a “higher resolution” understanding of molecular natural history, as we go on sequencing genomes and analyzing proteomes.
We are just at the beginning. And the data we already have are interpreted one way only, starting from the false assumptions of the darwinian model.
The way we can make design hypotheses could be the following:
We must look for the first emergence of new functional domains in the proteome, and try to restrict as much as possible the chronological windows for their emergence.
We must refine our “tree of life” and our “molecular clocks”. It is fundamental that we reconstruct natural history as precisely and objectively as possible.
Each time we witness the emergence of new complex function in a short window of time, for instance the emergence of a new species with new proteins unrelated to the ancestor species, we have to postulate a design input.
The analysis of the design input must be based on the basic biochemical level, and can then proceed to more general levels (regulation, complex systems).
If a new species gets new proteins and new networks and new regulations, we have to try to connect those features to understand the higher functional purpose of all of them. That can lead us to a better understanding of the design strategy we are observing.
Another important level of enquiry is that exemplified in the recent work about mutations in ribosomal proteins. We must have definite experimental data on how mutations work in a random system: how many of them are negative, neutral or positive as regards fitness in an objective model. To study design, it is essential to be able to define correctly the role of RV and of NS, if any.
You ask: “Why has no one even attempted it?”
First of all, it’s not true. The few biological researchers we have in our field (Behe, Axe, Durston) have made great contributions. If it were not for them, many important issues would still be completely obscure.
But many non-ID researchers are contributing greatly to these lines of enquiry. They are gathering facts. Some of them are well aware of the problem of functional complexity, and are trying to elucidate it better (in the hope that a darwinist-compatible explanation may be found, probably, but it’s fine just the same).
Whatever darwinist propaganda may say, the problem of the origin of functional information is still completely unresolved, and it is crucial to our understanding of the living world. While scientistic philosophers try to deny that functional information exists, or that DNA is a code, or that proteins are highly isolated and unlikely islands of functionality, or that a random system must obey the laws of probability, or that consciousness exists, or whatever other undeniable fact or concept they feel like denying from time to time, serious researchers are well aware of the problem of functional information, and try to solve it. They may be prejudiced, they often are, but if they are honest enough in their pursuit of truth (not necessarily the general condition) they are helping.
From those who study the real nature of mutations, to those who explore rugged landscapes, to those who falsify wrong comfortable theories like the frameshift emergence of nylonase.
They are helping. They are working for scientific truth. For me, they are working for ID.
#74 BA77
“markf, pass any of the tests I listed and then we’ll talk”
Neat way of changing the subject and avoiding answering the question! I kind of guessed you would not attempt an answer.
I guess the “tests” are these questions in #72?
1) what evidence, whatsoever, do you have that information has been generated, by purely material processes, above and beyond what was in the parent species in the first place? i.e. Have Darwinists passed the fitness test?
2) Have Darwinists falsified Abel’s null hypothesis for the generation of functional prescriptive information by purely material processes?
3) have Darwinists ever shown that prescriptive information increases with any mutation?
4) Have Darwinists ever shown that a sub-speciation was wrought by an increase in information and not by a loss in information?
5) Have Darwinists even justified using Natural Selection as the explanation for the diversity of all the life on earth?
I am sure that whatever answer I give to any of these you will not rate it as a pass. So I guess you have successfully evaded answering my questions.
As it happens 1 to 4 all refer to the information in an object. My question challenges the very idea that something has an amount of information – it requires further definition – so I can’t understand the questions much less pass the test.
The answer to 5 is that almost every textbook on evolution makes the case that natural selection explains the diversity of life on earth. Of course you will disagree, but that is a very long debate – meanwhile I have asked you a couple of really quite simple questions (which I strongly suspect you have no idea how to answer).
markf, it is no ‘long debate’; you have no evidence for an increase of information leading to a speciation event, period. If you disagree, SHOW THE EVIDENCE!!! As for what I do know and don’t know about properly measuring information content, do you even acknowledge that information is shown to be its own unique, independent entity, separate from matter and energy, by quantum teleportation and by the refutation of the hidden variable argument? If you don’t agree, why not? and please tell me in which parallel universe you do happen to agree with the findings. 🙂
#76
Gpuccio
I can see how investigation can reveal unexplained jumps in evolution at the molecular level. This might be interpreted as answering the question of “when” design took place (although it might simply be interpreted as “unexplained jumps”). I can see nothing in your examples about “how”. Suppose we find an incident where a gene mutates 30 base pairs simultaneously to create a totally new function. What has that told us about how it happened?
#78
markf, it is no ‘long debate’; you have no evidence for an increase of information leading to a speciation event, period. If you disagree, SHOW THE EVIDENCE!!!
I thought we were talking about natural selection leading to the diversity of life – you seem to have changed the subject again! I am sorry – I just can’t keep up.
“As for what I do know and don’t know about properly measuring information content, do you even acknowledge that information is shown to be its own unique independent entity, separate from matter and energy by quantum teleportation, and with the refutation of the hidden variable argument? If you don’t agree, why not? and please tell me in which parallel universe that you do happen to agree with the findings.”
I freely admit I have no idea what on earth you are talking about. Now will you answer my questions?
markf, information is shown to be its own unique entity, completely separate from, and dominant over, time-space matter-energy, in quantum teleportation experiments, and it is further solidified as a unique and independent entity by the refutation of Einstein’s hidden variable argument. ,,, To measure transcendent information on the quantum scale is fairly routine nowadays. Further work is needed to get a proper ‘physical’ measure of the transcendent information encoded in life.
As to answering your questions? I was clear that you need to provide just one REAL example of information increasing over and above what is already present in a parent species by passing the fitness test or by falsifying ID as laid out by Behe. i.e. markf of what concern is it to me to talk about what hypothetically could be a validation of Darwinism if you in fact have no real examples to offer as proof. i.e. Why should I talk of something that I know to be impossible?
#81
BA77
I am not asking you to talk of something you know to be impossible. I am simply asking two questions about how you measure the information in an object – given the clear statement from the ID community that information is measured through a probability. Are you saying it is impossible to measure the information in an object?
and markf, all I am asking is that you provide JUST ONE example of material processes generating information above and beyond what was already present in the parent species. I care not to argue hypotheticals until you present ANYTHING that we may analyze.
#83
BA77 I will gladly admit I don’t know of any examples of a material process generating information above and beyond what was already present in the parent species because
(a) I wouldn’t know how to measure the information (if you would answer my questions I might have a hope)
(b) I am not a biologist or biochemist
Now I have answered your question as fully and honestly as I can. There is nothing hypothetical about my question and nothing needs analyzing. All I am asking is
(a) what specification do you use when measuring the information in an organism
(b) what prior knowledge do you take into account when calculating the information
If you don’t know the answer that is fine as well – just say so.
markf:
(a) what specification do you use when measuring the information in an organism
The ‘incomplete’ measure which is currently used, which I’m not well versed in, is:
Functional information and the emergence of bio-complexity:
Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak:
Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define ‘functional information,’ I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex)= -log2 [F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of functions.
http://genetics.mgh.harvard.ed.....S_2007.pdf
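For concreteness, Hazen’s I(Ex) = -log2[F(Ex)] is simple to compute once you can count configurations. Here is a minimal Python sketch in the spirit of the paper’s letter-sequence illustration; the “dictionary” of functional strings is our own hypothetical example, not from the paper:

```python
import math

def functional_information(n_functional, n_total):
    """Hazen et al.'s I(Ex) = -log2(F(Ex)), where F(Ex) is the
    fraction of all possible configurations whose degree of
    function meets or exceeds Ex."""
    return -math.log2(n_functional / n_total)

# Toy letter-sequence illustration (our own, not from the paper):
# call a 4-letter string over a 26-letter alphabet "functional"
# if it appears in a tiny hypothetical dictionary.
dictionary = {"gene", "cell", "code"}   # hypothetical function set
total = 26 ** 4                         # 456,976 possible strings
bits = functional_information(len(dictionary), total)
print(f"I(Ex) = {bits:.1f} bits")       # I(Ex) = 17.2 bits
```

The measure is just a probability restated in bits: the rarer a function is among all configurations, the higher its functional information.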
Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video
http://www.metacafe.com/watch/3995236
Entire video:
http://vimeo.com/1775160
and this paper:
Measuring the functional sequence complexity of proteins – Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors – 2007
Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,,
http://www.tbiomed.com/content/4/1/47
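The shape of the Durston Fit calculation can likewise be sketched: take the ground-state uncertainty log2(20) for an amino acid site and subtract the observed site-wise Shannon uncertainty across an aligned protein family. The three-sequence “family” below is invented purely for illustration; the real method works on large alignments (Durston et al. used 35 protein families) with further refinements:

```python
import math
from collections import Counter

def site_entropy(column):
    """Shannon uncertainty H = -sum(p * log2 p) for one alignment column."""
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in Counter(column).values())

def functional_bits(alignment, alphabet_size=20):
    """Rough Durston-style Fits: ground-state uncertainty log2(20)
    minus the observed uncertainty, summed over aligned sites."""
    ground = math.log2(alphabet_size)
    columns = zip(*alignment)  # one tuple of residues per site
    return sum(ground - site_entropy(col) for col in columns)

# Invented three-residue "protein family" for illustration only.
family = ["MKV", "MKV", "MKI", "MKV"]
print(f"{functional_bits(family):.2f} Fits")   # 12.15 Fits
```

Fully conserved sites contribute the full log2(20) ≈ 4.32 bits each; variable sites contribute less, so the total reflects how tightly function constrains the sequence.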
The reason I say ‘incomplete’ is that it is not a true, precise measure of the transcendent information present within a lifeform, such as would be gained by, say, measuring a bacterium’s complete molecular offset from thermodynamic equilibrium and calculating the functional information bits present, since,,,
“Gain in entropy always means loss of information, and nothing more.”
Gilbert Newton Lewis
as well another precise measure may be possible in that:
Notes on Landauer’s principle, reversible computation, and Maxwell’s Demon – Charles H. Bennett
Excerpt: Of course, in practice, almost all data processing is done on macroscopic apparatus, dissipating macroscopic amounts of energy far in excess of what would be required by Landauer’s principle. Nevertheless, some stages of biomolecular information processing, such as transcription of DNA to RNA, appear to be accomplished by chemical reactions that are reversible not only in principle but in practice.,,,,
http://www.hep.princeton.edu/~.....501_03.pdf
and,,
Landauer’s principle
Of Note: “any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase ,,, Specifically, each bit of lost information will lead to the release of an (specific) amount (at least kT ln 2) of heat.,,
http://en.wikipedia.org/wiki/L....._principle
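Landauer’s kT ln 2 bound is easy to put numbers on. A small Python sketch; the temperature and bit counts are illustrative choices of ours, not figures from the sources above:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_heat(bits, temperature=310.0):
    """Minimum heat in joules released by erasing `bits` of information
    at `temperature` kelvin: bits * kT * ln(2) (Landauer's bound)."""
    return bits * K_B * temperature * math.log(2)

# Illustrative numbers: one bit erased at roughly body temperature
# (310 K), then a 10^12-bit load erased all at once.
print(landauer_heat(1))      # ~2.97e-21 J
print(landauer_heat(1e12))   # ~2.97e-9 J
```

Even 10^12 bits erased at once releases only nanojoules, which is why the next comment notes that measuring such fluctuations in a cell would demand extraordinarily sensitive instruments.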
yet there are problems in extending the thermodynamic measure of Landauer to biology in regards to getting a precise measure of exactly when information is lost in a cell,,,
Landauer’s Principle and Divergenceless Dynamical Systems – 2009
Excerpt: The profound links between Landauer’s principle and the second law of thermodynamics [21] suggest that the present results may help to explore analogues of the second law in non-standard contexts, like the biological ones discussed in [26, 27].
The lack of sub-additivity exhibited by some important non-logarithmic information or entropic functionals seems to be a serious difficulty for deriving generalizations of Landauer’s principle in terms of the non-standard maxent formalisms that are nowadays popular for the study of non-equilibrium, meta-stable states. On the other hand, the Beck-Cohen approach allows for the extension of Landauer’s principle to some of those scenarios. This important issue, however, needs further exploration. In this regard, any new developments towards a valid formulation of Landauer-like principles directly based upon generalized, non-standard entropic measures are very welcome.
http://www.up.ac.za/dspace/bit.....auer(2009).pdf
Myself, I feel fairly confident that if these problems can be worked out, and a precise enough measurement device could be built to measure such tiny fluctuations of temperature, if it is not already built, say some precise laser instrument,,, then a true measure of the ‘physicality’ of information in life could be carried out. Until then, for us to kick around functional information bits (Fits), which are derived from an incomplete knowledge of probability, is to miss the mark for a true measure of the transcendent information present in life.
you then ask:
(b) what prior knowledge do you take into account when calculating the information,,,
The prior knowledge (presupposition) I take into account is that I hold that the transcendent information in life is physically dominant over matter and energy; therefore I should never expect that which is lesser in the quality of its existence to produce that which is greater in its existence. etc… etc.. etc..
#85 BF
Thanks – I think we have taken this subject as far as it is going to go.
BA:
I’ve pulled out Hoyle’s book. On pages 99-101, he deals with a straightforward calculation of probabilities concerning selectable mutations. He writes:
“The chance of setting a particular base pair in a particular gene in G generations is ~10^-9 G, and the chance that two base pairs are set right in the same gene is ~(10^-9 G)^2. For a total of 2N genes in a population of N individuals the probability of one emerging in a repaired condition after G generations is therefore ~2N(10^-9 G)^2, which to be of order unity requires G =~ 10^9/(2N)^1/2. A mammalian population with 2N=10^6 would require G =~ 10^6 generations, which is so long that further errors would accumulate in every individual before the two base pairs were corrected in any individual. . . . From this example we can say that for any discarded gene properly to be recovered in a practical situation, it is necessary that the genes in question shall not differ from a working condition by more than one or two base-pair errors. Once genes drift by more than this from a working condition they can be considered to have gone permanently dead, thereby explaining an otherwise mysterious conclusion of classical biology, that once species become highly specialized they tend to become extinct.”
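Hoyle’s order-of-magnitude algebra checks out directly: setting 2N(10^-9 G)^2 ~ 1 and solving for G gives G = 10^9/(2N)^1/2. A quick Python sanity check, assuming his 10^-9 per-base-pair, per-generation rate:

```python
import math

MUTATION_RATE = 1e-9  # per base pair per generation (Hoyle's figure)

def generations_to_fix_two_sites(two_N):
    """Solve 2N * (1e-9 * G)^2 ~ 1 for G, giving G = 1e9 / sqrt(2N)."""
    return 1 / (MUTATION_RATE * math.sqrt(two_N))

# Hoyle's mammalian example: 2N = 10^6 gene copies.
print(f"{generations_to_fix_two_sites(1e6):.0e} generations")
```

With 2N = 10^6 this reproduces his ~10^6-generation figure, the basis for his conclusion about one- or two-error recovery limits.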
Hope this helps your collection! 😉
Thanks, it does, and properly collected and filed. 🙂
Mark:
Suppose we find an incident where a gene mutates 30 base pairs simultaneously to create a totally new function. What has that told us about how it happened?
I suppose that a single occurrence would not say anything definitive, but if that were the constant observation in thousands of cases, an explanation would certainly be necessary.
ID would remain obviously loyal to the design explanation, and would try to get more details from facts on possible models of implementation (guided mutations, artificial selection, or others).
Non IDists would be free to try some other explanation.
markf, if you care, I tracked down the entropic/information measurement,,, measuring from a thermodynamic perspective (a more accurate measure for ascertaining a ‘true’ total information content), the information content of a ‘simple’ bacterium is found to be 10^12 bits, comparable to about 100 million pages of the Encyclopaedia Britannica:
Molecular Biophysics – Information theory. Relation between information and entropy.
http://www.astroscu.unam.mx/~a.....ecular.htm
Carl Sagan, Cornell, “The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopaedia Britannica.”, Life,
http://www.bible.ca/tracks/dp-lawsScience.htm
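As a quick sanity check on Sagan’s comparison (our own arithmetic, not from the source): 10^12 bits spread over 10^8 pages implies about 10^4 bits, or roughly 1,250 characters, per page, at least the right order of magnitude for a printed encyclopedia page:

```python
# Sagan's figures: 10^12 bits in a simple cell, compared to
# "a hundred million pages" of the Encyclopaedia Britannica.
bits_in_cell = 1e12
pages = 1e8

bits_per_page = bits_in_cell / pages   # implied density per page
chars_per_page = bits_per_page / 8     # at 8 bits per character
print(bits_per_page, chars_per_page)   # 10000.0 1250.0
```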
proof of principle:
Information can overcome the local limits of the second law on the molecular level:
Maxwell’s ID Demon Converts Info to Energy
Excerpt: “Sano said the experiment does demonstrate that information can be used as a medium for transferring energy.”
http://www.creationsafaris.com.....#20101116a