Professor Andy McIntosh, an ID proponent in the UK, has a peer-reviewed paper on the thermodynamic barriers to Darwinian evolution:
A. C. McIntosh, “Information and Entropy—Top-Down or Bottom-Up Development in Living Systems?” International Journal of Design & Nature and Ecodynamics 4(4) (2009): 351-385
The Editor appends the following note:
Editor’s Note: This paper presents a different paradigm than the traditional view. It is, in the view of the Journal, an exploratory paper that does not give a complete justification for the alternative view. The reader should not assume that the Journal or the reviewers agree with the conclusions of the paper. It is a valuable contribution that challenges the conventional vision that systems can design and organise themselves. The Journal hopes that the paper will promote the exchange of ideas in this important topic. Comments are invited in the form of ‘Letters to the Editor’.
Here is the abstract:
Abstract: This paper deals with the fundamental and challenging question of the ultimate origin of genetic information from a thermodynamic perspective. The theory of evolution postulates that random mutations and natural selection can increase genetic information over successive generations. It is often argued from an evolutionary perspective that this does not violate the second law of thermodynamics because it is proposed that the entropy of a non-isolated system could reduce due to energy input from an outside source, especially the sun when considering the earth as a biotic system. By this it is proposed that a particular system can become organised at the expense of an increase in entropy elsewhere. However, whilst this argument works for structures such as snowflakes that are formed by natural forces, it does not work for genetic information because the information system is composed of machinery which requires precise and non-spontaneous raised free energy levels – and crystals like snowflakes have zero free energy as the phase transition occurs. The functional machinery of biological systems such as DNA, RNA and proteins requires that precise, non-spontaneous raised free energies be formed in the molecular bonds which are maintained in a far from equilibrium state. Furthermore, biological structures contain coded instructions which, as is shown in this paper, are not defined by the matter and energy of the molecules carrying this information. Thus, the specified complexity cannot be created by natural forces even in conditions far from equilibrium. The genetic information needed to code for complex structures like proteins actually requires information which organises the natural forces surrounding it and not the other way around – the information is crucially not defined by the material on which it sits. The information system locally requires the free energies of the molecular machinery to be raised in order for the information to be stored. Consequently, the fundamental laws of thermodynamics show that entropy reduction which can occur naturally in non-isolated systems is not a sufficient argument to explain the origin of either biological machinery or genetic information that is inextricably intertwined with it. This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
Of interest to this concluding statement of McIntosh:
It has now been shown in quantum teleportation and entanglement experiments, especially with the refutation of the “hidden variable” argument, that “transcendent information” is its own unique and independent entity, completely separate from matter and energy. An entity which clearly exercises dominion over matter and energy at this most basic level of reality. As well, “transcendent information” is also shown to occupy the primary framework of reality (highest dimension), as far as space and time are concerned, that can be ascertained.
Can’t resist the opportunity to plug my book and video “Can ANYTHING Happen in an Open System?” , where I have come to very similar conclusions.
Granville, you were the first person I thought of when I saw the article.
bornagain77 @ 1,
If your description of a non-material cause of information does not convince people that ID has no reliance on, or relationship to, the supernatural, then we have to conclude that ID opponents probably have nothing but the false dilemma of the supernatural to present.
I so wish this debate were over so that we could start doing science without wasting intellectual and monetary resources on anti-scientific ideologies like Darwinism. In the meantime we should honor the editor of this publication for promoting constructive dialogue.
I’m always confused about how information is quantified. For example, if you ran Shakespeare’s works through a spell checker and regularized the spelling, would the amount of information change?
What if a typographical error in preparing a manuscript changed the spelling of a word to another variant used by the same author?
Along those lines, under what circumstances would a copy error in biological reproduction change the quantity of information?
Petrushka-
Adding a meaningful word would increase information would it not? Of course, that’s not enough. The new word would have to make sense within the sentence and within context.
Petrushka – would you be straining at gnats and swallowing camels if I misspelled a word or two here, so that you would apply your free will to ignore any part of, or all of, this information? And to what degree would or could it affect your entire life? (Hope I made sense.)
mullerpr – please, for me, elaborate on “false dilemma of the supernatural”. Thanks.
I don’t think it’s straining at gnats to ask how information is measured, assuming its quantity is important.
How else could you make claims about entropy?
Petrushka,
Information theory can only help to compare the patterns of the initial state with those of the changed state. Information theory is actually the science of patterns; “pattern theory” would have been a far less confusing name. However, it is very helpful in communications and information technology.
Personally I think “pattern theory” should also be a starting point for quantifying semantic information, like Shakespeare’s works. Something else that might be useful is the techniques that are used to decipher “scrambled” messages, by intelligence agencies. I don’t know what these techniques are, but actively studying messages to extract meaning is not an unknown effort of the human mind.
Ultimately, if it remains impossible to quantify meaning, it should not be a surprise, because it could just as well mean that information, or the product of intelligence, does not have a mechanistic origin.
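As a crude illustration of what such a pattern comparison can and cannot do, here is a small sketch; the two strings are made-up examples, and character-level Shannon entropy is only one of many possible pattern measures:

```python
# Crude illustration of comparing the character-level "pattern" of two strings
# with Shannon entropy. This quantifies symbol statistics only, not meaning,
# which is exactly the limitation discussed above. The texts are made-up examples.
from collections import Counter
import math

def shannon_entropy_bits(text):
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

original  = "to be or not to be that is the question"
respelled = "to bee or not to bee that is the question"   # hypothetical spelling change

print(f"original : {shannon_entropy_bits(original):.3f} bits/char")
print(f"respelled: {shannon_entropy_bits(respelled):.3f} bits/char")
# The per-character entropy barely moves, even though a reader would say no
# meaning was gained or lost; Shannon measures pattern, not semantics.
```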
Hi alan,
The “false dilemma of the supernatural” alludes to the fact that so many anti-ID proponents claim that ID presupposes the supernatural, which is a false dilemma.
A non-material origin of information is not a presupposition for ID, as far as I can see. The only presupposition of ID, that I know of, is that ID does not a priori disregard causes not yet defined by science as being physical. At this stage all evidence points to the fact that life is/was caused by such a non-material cause.
Mullepr @ 10
you say “The only presupposition of ID, that I know of, is that ID does not a priori disregard causes not yet defined by science as being physical.”
Of course ID won’t “disregard” causes not defined by science as being physical. Why is that? Because ID has no personally motivated, philosophically constrained boundary called “materialism” that prevents it from going where the evidence best points.
And your use of the phrase “not yet defined by science as being physical” seems a bit loaded, front loaded. Why is it necessary for anything that science defines, to be physical? There may be a cause that is not physical, that is just as scientifically valid as any materialistic presupposition. Should not science be just as interested in an accurate representation of any possible state of affairs, not just physical ones?
Your statement seems to reflect some forlorn hope that science will define all things as physical. When considering the preponderance of evidence in recent years that strongly supports a design paradigm, I think the implications for science are very exciting.
Just think….Someday in the future, long after we and our grandchildren have come and gone, the scientific paradigm may be so “non-materialist” based, that people will look back on 19th, 20th and 21st century “science” as being as archaic as many now consider Darwinism to be. In this future non-materialist paradigm, strict materialism will be considered by many to be a religious faith.
In fact, strict materialism seems to exist today not on the basis of any positive, good science but because of negative, emotionally charged, anti-religious arguments against non-materialist causation. In our non-materialist-based future world of science, we may look back on the rants of Dawkins and Hitchens & Co, with sympathy.
Entropy is a generalization based on many decades of observation and measurement. It becomes a law not because it satisfies some axiom or principle, but because countless measurements confirm the math.
It seems somehow wrong to evoke the term entropy to denote something that cannot be quantified. And it seems doubly wrong to do so before there are any broadly accepted units of measurement and a large base of published measurements.
It seems somehow wrong to evoke the term evolution (substituting it for “entropy”) to denote something that cannot be quantified. And it seems doubly wrong to do so before there are any broadly accepted units of measurement and a large base of published measurements.
Professional evolutionary biologists are hard-pressed to cite even one clear-cut example of evolution through a beneficial mutation to the DNA of humans which would violate the principle of genetic entropy. Although a materialist may try to claim the lactase persistence mutation as a lonely example of a “truly” beneficial mutation in humans, lactase persistence is actually the loss of an instruction in the genome to turn the lactase enzyme off, so the mutation clearly does not violate Genetic Entropy. Yet at the same time, the evidence for the detrimental nature of mutations in humans is overwhelming, for scientists have already cited over 100,000 mutational disorders.
Inside the Human Genome: A Case for Non-Intelligent Design – Pg. 57 By John C. Avise
Excerpt: “Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens.”
I went to the mutation database website cited by John Avise and found:
HGMD®: Now celebrating our 100,000 mutation milestone!
http://www.biobase-internation.....mddatabase
I really question their use of the word “celebrating”.
(Of Note: The number for Mendelian Genetic Disorders is quoted to be over 6000 by geneticist John Sanford in 2010)
“No human investigation can be called true science without passing through mathematical tests.”
Leonardo Da Vinci
Evolution vs. Genetic Entropy – video
http://www.metacafe.com/watch/4028086
Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load:
Excerpt: We apply a biologically realistic forward-time population genetics program to study human mutation accumulation under a wide-range of circumstances. Using realistic estimates for the relevant biological parameters, we investigate the rate of mutation accumulation, the distribution of the fitness effects of the accumulating mutations, and the overall effect on mean genotypic fitness. Our numerical simulations consistently show that deleterious mutations accumulate linearly across a large portion of the relevant parameter space.
http://bioinformatics.cau.edu......aproof.pdf
MENDEL’S ACCOUNTANT: J. SANFORD†, J. BAUMGARDNER‡, W. BREWER§, P. GIBSON¶, AND W. REMINE
http://mendelsaccount.sourceforge.net
http://www.scpe.org/vols/vol08/no2/SCPE_8_2_02.pdf
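For readers who want a feel for what a forward-time mutation-accumulation model does, here is a toy sketch. It is emphatically not Mendel's Accountant; the population size, mutation rate, and fitness-effect size below are illustrative assumptions only.

```python
# Toy forward-time simulation of deleterious mutation accumulation.
# NOT Mendel's Accountant; all parameters are illustrative assumptions.
import random

POP_SIZE = 500          # assumed population size
GENERATIONS = 200       # assumed number of generations to run
MUTS_PER_GEN = 1.0      # assumed mean new deleterious mutations per offspring
MEAN_EFFECT = 0.001     # assumed fitness cost per mutation

def fitness(n_mutations):
    # Multiplicative fitness: each mutation costs roughly MEAN_EFFECT.
    return (1.0 - MEAN_EFFECT) ** n_mutations

def next_generation(pop):
    # Fitness-proportional choice of parents (soft selection), then new mutations.
    weights = [fitness(n) for n in pop]
    parents = random.choices(pop, weights=weights, k=POP_SIZE)
    offspring = []
    for p in parents:
        # Crude stand-in for a Poisson draw of new mutations (mean MUTS_PER_GEN).
        new_muts = sum(1 for _ in range(4) if random.random() < MUTS_PER_GEN / 4)
        offspring.append(p + new_muts)
    return offspring

pop = [0] * POP_SIZE  # every individual starts with zero accumulated mutations
for gen in range(GENERATIONS + 1):
    if gen % 50 == 0:
        mean_muts = sum(pop) / POP_SIZE
        mean_fit = sum(fitness(n) for n in pop) / POP_SIZE
        print(f"gen {gen:4d}: mean mutations {mean_muts:6.1f}, mean fitness {mean_fit:.4f}")
    pop = next_generation(pop)
```

Whether the mean mutation count climbs or levels off depends entirely on the assumed parameters, which is precisely what the simulation papers cited above argue over.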
Mutations happen all the time. You probably carry several.
And yet populations don’t seem to decline due to infertility. Hundreds of species are known to have gone extinct, but I can’t think of one that did so due to a population wide decline in fertility.
Species, of course, don’t reproduce at the same rates. If genomes drifted inevitably toward decay, it would seem that those organisms that reproduce the fastest would be most affected. It’s something of a wonder that there are any bacteria at all.
I think if you are going to publish a quasi-mathematical concept like genetic entropy, you need to have some generally accepted measure of entropy, or at least a measure of fitness.
If a putative change in fitness for a population doesn’t result in population decline, I fail to see how it can have any objective validity.
And if it doesn’t affect the fastest reproducers and the populations that accumulate mutations the fastest, I wonder how it can be considered to be a valuable idea.
“It’s not denial. I’m just selective about the reality I accept.” – Bill Watterson, Creator of Calvin and Hobbes.
The Matrix: Neo Meets Morpheus
http://www.youtube.com/watch?v=3VFDIKgm_QI
Petrushka:
I expect the significant number of people suffering from genetic-based illnesses might disagree with your dismissal of their objective reality. There’s more to existence than procreation.
Petrushka:
Think of a city with an increasing population. As the traffic increases, the percentage of people who make it to work still seems to stay around 100% of those attempting to do so. But the cost and difficulty increases, and people spend more and more time sitting in their cars.
The reason we don’t see population declines is that the drive to reproduce is powerful, and still overcomes the “resistance”. It doesn’t mean, however, that the cost of mutations is not increasing, and it is in determining the cost that the objective reality of genetic entropy might be calculated. How many are sick? How many die young? How many are infertile? Are these percentages stable or increasing? It may be difficult to determine these numbers, but that doesn’t mean it cannot be done.
Feel free to publish a metric. The number of people sick from genetic disorders reflects our compassion and our developing medical technology, which enables very sick people to survive.
If genetic entropy caused a general decline in population fitness, it would be reflected in declining population numbers throughout all living species.
By mathematical logic, it should most severely affect populations that reproduce rapidly and have the most mutations. And that would be microbes.
Insects should also be severely affected.
If the cost of mutations is increasing, make some measurements in the most vulnerable populations.
I hate to repeat, but the concept of entropy is not based on axiomatic reasoning or first principles. It is based on careful measurements.
Petrushka:
No, the length of survival of people may be extended by technology, but the number of people born ill is independent of it, unless you happen to blame technology for causing many such mutational illnesses in the first place.
The sieve of survival works much more brutally and effectively on bacteria than people. Bacteria don’t protect their ill and build hospitals. That might explain the fact they are still around despite the short generation time (and it is the generation time, not the population that is most significant). The eugenics movement would return us to the law of bacterial survival.
And who is to say that genetic entropy does not severely affect bacterial populations? Do you know the percent of bacteria that fail to develop due to mutational damage? For now, at least the fecundity of such populations makes loss due to mutational damage irrelevant.
And evolution is a gross violation of entropy, yet you accept evolution as true without so much as a ripple of doubt; interesting how selective your vision is.
the slow accumulation of “slightly detrimental mutations” in humans, which are far below the power of natural selection to remove from our genomes, is revealed by this following fact:
“When first cousins marry, their children have a reduction of life expectancy of nearly 10 years. Why is this? It is because inbreeding exposes the genetic mistakes within the genome (slightly detrimental recessive mutations) that have not yet had time to “come to the surface”. Inbreeding is like a sneak preview, or foreshadowing, of where we are going to be genetically as a whole as a species in the future. The reduced life expectancy of inbred children reflects the overall aging of the genome that has accumulated thus far, and reveals the hidden reservoir of genetic damage that have been accumulating in our genomes.” – Sanford; Genetic Entropy; page 147
The same problem of inbreeding in animal husbandry produces the same decline of fitness; I would dig up the numbers, but it is late and I am fairly sure you wouldn’t listen anyway. Bacteria have a much slower rate of decline, for several intertwined reasons, but the decline is detectable by loss of fitness over millions of years.
Can you name a population — any species — for which this is not true?
In physics, entropy started as an observation, was tested for decades, and became a “Law” only after it was impossible to find an exception.
Genetic entropy seems to have originated as an axiom and it seems to survive despite being contradicted by evidence.
The claim is that harmful mutations accumulate indefinitely. It is backed by a computer model.
If the model reflects reality, it should be rather easy to point to populations that are declining due to infertility. Infertility caused by the accumulation of harmful mutations.
Inbreeding and low population numbers are a definite problem for a species.
Microbes, however, are the ultimate inbreeders. They clone themselves.
And yet they do not go extinct due to genetic entropy. In fact, a single microbe can give rise to a diverse population, even produce variations capable of exploiting a new source of food.
I won’t be able to post on Saturday. I suppose you won’t miss me.
Petrushka, you state:
“Genetic entropy seems to have originated as an axiom and it seems to survive despite being contradicted by evidence.”
But you provide no evidence, in fact you never cite anything.
you then state:
“If the model reflects reality, it should be rather easy to point to populations that are declining due to infertility. Infertility caused by the accumulation of harmful mutations.”
When we look at the fossil record over long periods of time, so as to get a clear view of what is happening “in reality” we see:
The following article is important in that it shows the principle of Genetic Entropy being obeyed in the fossil record by Trilobites, over the 270 million year history of their life on earth (Note: Trilobites are one of the most prolific “kinds” found in the fossil record with an extensive worldwide distribution. They appeared abruptly at the base of the Cambrian explosion with no evidence of transmutation from the “simple” creatures that preceded them, nor is there any evidence they ever produced anything else besides other trilobites during the entire time they were in the fossil record).
The Cambrian’s Many Forms
Excerpt: “It appears that organisms displayed “rampant” within-species variation “in the ‘warm afterglow’ of the Cambrian explosion,” Hughes said, but not later. “No one has shown this convincingly before, and that’s why this is so important.” “From an evolutionary perspective, the more variable a species is, the more raw material natural selection has to operate on,”…. (Yet surprisingly) …. “There’s hardly any variation in the post-Cambrian,” he said. “Even the presence or absence or the kind of ornamentation on the head shield varies within these Cambrian trilobites and doesn’t vary in the post-Cambrian trilobites.” University of Chicago paleontologist Mark Webster; article on the “surprising and unexplained” loss of variation and diversity for trilobites over the 270 million year time span that trilobites were found in the fossil record, prior to their gradual and total extinction from the fossil record about 250 million years ago.
http://www.terradaily.com/repo.....s_999.html
In fact, the loss of morphological traits over time, for all organisms found in the fossil record, was so consistent that it was made into a scientific “law”:
Dollo’s law and the death and resurrection of genes:
Excerpt: “As the history of animal life was traced in the fossil record during the 19th century, it was observed that once an anatomical feature was lost in the course of evolution it never staged a return. This observation became canonized as Dollo’s law, after its propounder, and is taken as a general statement that evolution is irreversible.” http://www.pnas.org/content/91.....l.pdf+html
A general rule of thumb for the “Deterioration/Genetic Entropy” of Dollo’s Law as it applies to the fossil record is found here:
Dollo’s law and the death and resurrection of genes
ABSTRACT: Dollo’s law, the concept that evolution is not substantively reversible, implies that the degradation of genetic information is sufficiently fast that genes or developmental pathways released from selective pressure will rapidly become nonfunctional. Using empirical data to assess the rate of loss of coding information in genes for proteins with varying degrees of tolerance to mutational change, we show that, in fact, there is a significant probability over evolutionary time scales of 0.5-6 million years for successful reactivation of silenced genes or “lost” developmental programs. Conversely, the reactivation of long (>10 million years)-unexpressed genes and dormant developmental pathways is not possible unless function is maintained by other selective constraints;
http://www.pnas.org/content/91.....l.pdf+html
Dollo’s Law was further verified to the molecular level here:
Dollo’s law, the symmetry of time, and the edge of evolution – Michael Behe
Excerpt: We predict that future investigations, like ours, will support a molecular version of Dollo’s law: ,,, Dr. Behe comments on the finding of the study, “The old, organismal, time-asymmetric Dollo’s law supposedly blocked off just the past to Darwinian processes, for arbitrary reasons. A Dollo’s law in the molecular sense of Bridgham et al (2009), however, is time-symmetric. A time-symmetric law will substantially block both the past and the future,”. http://www.evolutionnews.org/2.....f_tim.html
Petrushka, you then state:
“And yet they (bacteria) do not go extinct due to genetic entropy. In fact, a single microbe can give rise to a diverse population, even produce variations capable of exploiting a new source of food.”
Actually, there are ancient bacteria that were present millions of years ago and can no longer be found on the earth; thus they, as far as we can tell, are extinct:
World’s Oldest Known DNA Discovered (419 million years old) – Dec. 2009
Excerpt: But the DNA (of the 250 million Year Old bacteria) was so similar to that of modern microbes that many scientists believed the samples had been contaminated. Not so this time around. A team of researchers led by Jong Soo Park of Dalhousie University in Halifax, Canada, found six segments of identical DNA that have never been seen before by science. “We went back and collected DNA sequences from all known halophilic bacteria and compared them to what we had,” Russell Vreeland of West Chester University in Pennsylvania said. “These six pieces were unique,,,
http://news.discovery.com/eart.....vered.html
Vreeland was referencing this earlier work of his on ancient bacteria as to the accusations of contamination:
The Paradox of the “Ancient” Bacterium Which Contains “Modern” Protein-Coding Genes:
“Almost without exception, bacteria isolated from ancient material have proven to closely resemble modern bacteria at both morphological and molecular levels.” Heather Maughan*, C. William Birky Jr., Wayne L. Nicholson, William D. Rosenzweig§ and Russell H. Vreeland ;
http://mbe.oxfordjournals.org/...../19/9/1637
Yet when I asked Vreeland about a fitness test on these ancient bacteria, he said “only a creationist would ask that question” and then lectured me a little without ever giving me a straight answer to my question. I thought the question was a fairly important and straightforward one that was “scientifically neutral”, i.e. did the bacteria gain or lose functional complexity? Seems important to me. Anyway, no luck with Vreeland, so I then asked Dr. Cano, who works with ancient bacteria that are amber sealed, about a fitness test on the ancient bacteria, and he graciously replied to me:
In reply to a personal e-mail from myself, Dr. Cano commented on the “Fitness Test” I had asked him about:
Dr. Cano stated: “We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative “ancient” B. sphaericus isolate was capable of utilizing a broader scope of substrates. Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate.”:
Fitness test which compared the 30 million year old ancient bacteria to its modern day descendants, RJ Cano and MK Borucki
Thus, the most solid evidence available for the most ancient DNA scientists are able to find does not support evolution happening on the molecular level of bacteria. In fact, according to the fitness test of Dr. Cano, the change witnessed in bacteria conforms to the exact opposite, Genetic Entropy: a loss of functional information/complexity, since fewer substrates and fatty acids are utilized by the modern strains. Considering the intricate level of protein machinery it takes to utilize individual molecules within a substrate, we are talking about an impressive loss of protein complexity, and thus loss of functional information, from the ancient amber-sealed bacteria.
According to prevailing evolutionary dogma, there “HAS” to be “significant genetic/mutational drift” to the DNA of bacteria within 250 million years, even though the morphology (shape) of the bacteria can be expected to remain the same. In spite of their preconceived materialistic bias, scientists find there is no significant genetic drift from the ancient DNA. I find it interesting that the materialistic theory of evolution expects there to be a significant amount of mutational drift from the DNA of ancient bacteria to its modern descendants, while the morphology can be allowed to remain exactly the same with its descendants. Alas for the materialist once again, the hard evidence of ancient DNA has fallen in line with the anthropic hypothesis.
Petrushka, you then allude to bacteria utilizing a “new food source”. Since you, as usual, cited no source for your claim, I will guess you are talking of Nylonase. Yet:
Nylon Degradation – Analysis of Genetic Entropy
Excerpt: At the phenotypic level, the appearance of nylon degrading bacteria would seem to involve “evolution” of new enzymes and transport systems. However, further molecular analysis of the bacterial transformation reveals mutations resulting in degeneration of pre-existing systems.
http://www.answersingenesis.or.....n-bacteria
As well:
The non-randomness and “clockwork” repeatability of the nylon adaptation clearly indicates a designed mechanism that fits perfectly within the limited “variation within kind” model of Theism, and stays well within the principle of Genetic Entropy since the parent strain is still more fit for survival once the nylon is consumed from the environment. (Answers In Genesis) i.e. Evolutionists need to show a gain in functional complexity over and above what is already present in the parent strain, not just a variation from the parent kind that does not exceed functional complexity:
Is Antibiotic Resistance evidence for evolution? – “Fitness Test” – video
http://www.metacafe.com/watch/3995248
As well, Petrushka, Lenski’s work with “coddled” E. coli, when looked at closely, minus the evolutionary spin, clearly reveals genetic entropy:
The following articles refute Lenski’s supposed “evolution” of the citrate ability in E. coli after 20,000 generations:
Multiple Mutations Needed for E. Coli – Michael Behe
Excerpt: As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions.” (1) Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn’t tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. coli to grow on citrate in the presence of oxygen. It seems likely that Lenski’s mutant will turn out to be either this gene or another of the bacterium’s citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn’t yet tracked down the mutation.),,, If Lenski’s results are about the best we’ve seen evolution do, then there’s no reason to believe evolution could produce many of the complex biological features we see in the cell.
http://www.amazon.com/gp/blog/.....96N278Z93O
Lenski’s e-coli – Analysis of Genetic Entropy
Excerpt: Mutants of E. coli obtained after 20,000 generations at 37°C were less “fit” than the wild-type strain when cultivated at either 20°C or 42°C. Other E. coli mutants obtained after 20,000 generations in medium where glucose was their sole catabolite tended to lose the ability to catabolize other carbohydrates. Such a reduction can be beneficially selected only as long as the organism remains in that constant environment. Ultimately, the genetic effect of these mutations is a loss of a function useful for one type of environment as a trade-off for adaptation to a different environment.
http://www.answersingenesis.or.....n-bacteria
Lenski’s Citrate E-Coli – Disproof of “Convergent” Evolution – Fazale Rana – video
http://www.metacafe.com/watch/4564682
Upon closer inspection, it seems Lenski’s “coddled” E. coli are actually headed for “genetic meltdown” instead of evolving into something better.
New Work by Richard Lenski:
Excerpt: Interestingly, in this paper they report that the E. coli strain became a “mutator.” That means it lost at least some of its ability to repair its DNA, so mutations are accumulating now at a rate about seventy times faster than normal.
http://www.evolutionnews.org/2.....enski.html
further note:
The Sheer Lack Of Evidence For Macro Evolution – William Lane Craig – video
http://www.metacafe.com/watch/4023134
SCheesman:
Petrushka:
Well if that is true of all species, then trying to measure the effect on populations of mutational damage is impossible.
Impossible, that is, until the resiliency of the population to reproduce can no longer overcome the losses due to mutational entropy, and then you’d get population decline and extinction in rapid order. That, too, is simple mathematics.
This doesn’t mean that mutational entropy is not real, it just means you need a different metric to measure it than population growth/decline. Other factors are far more important in determining populations, except in the critical case.
SCheesman, thanks for pointing me to this line of reasoning of using other metrics. And the second link I clicked on, for declining birth rates, revealed:
Reproductive health in the United States is headed in the wrong direction on a host of indicators. Fertility problems, miscarriages, preterm births, and birth defects are all up. These trends are not simply the result of women postponing motherhood. In fact, women under 25 and women between 25 and 34 reported an increasing number of fertility problems over the last several decades. Nor are reproductive health problems limited to women. Average sperm count appears to be steadily declining, and there are rising rates of male genital birth defects such as hypospadias, a condition in which the urethra does not develop properly.
http://www.americanprogress.or.....lette.html
They blame the cause on chemicals, which of course implicates detrimental mutations, which of course implicates genetic entropy.
BA77: They blame the cause on chemicals, which of course implicates detrimental mutations, which of course implicates genetic entropy.
Acispencer you state: “To think that chemical exposure automatically implicates genetic damage is naive at best.”
So, acispencer, what do we know for sure? We know for a fact that reproductive health is steadily declining; whether this is ALL due to chemicals I really question, since it could just as well reflect the “natural” mutational load that has been accumulating. That is, if chemicals are playing any significant role in such a nationwide catastrophe, I would say the most reasonable explanation is that the environmental chemicals, whatever they are, of national scale (they don’t say for sure), are merely exacerbating (and reflecting) an already existent problem of a steady decline in birth rates. In fact, you have no nationwide chemical agent to blame for such a widespread increase in birth defects, that I know of, whereas I do have an effect that touches every single person in this nation and can explain the effect in question more satisfactorily. As well, though you allude to teratogenesis (which I believe is defects brought about during development due to exposure to chemicals), you neglect the mutagenic effect chemicals have on DNA. That is, are the reproductive DNA molecules somehow immune from the detrimental effects of chemicals that cause gross deformities? Please tell me of this unknown barrier that gives the sperm and egg such added protection that is not visited on the embryo itself. The overall pattern of evidence is clearly in favor of the Genetic Entropy model:
The evidence for the detrimental nature of mutations in humans is overwhelming, for scientists have already cited over 100,000 mutational disorders.
Inside the Human Genome: A Case for Non-Intelligent Design – Pg. 57 By John C. Avise
Excerpt: “Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens.”
I went to the mutation database website cited by John Avise and found:
HGMD®: Now celebrating our 100,000 mutation milestone!
http://www.biobase-internation.....mddatabase
I really question their use of the word “celebrating”.
(Of Note: The number for Mendelian Genetic Disorders is quoted to be over 6000 by geneticist John Sanford in 2010)
“Mutations” by Dr. Gary Parker
Excerpt: human beings are now subject to over 3500 mutational disorders. (this 3500 figure is cited from the late 1980’s)
http://www.answersingenesis.or.....ations.asp
Human Evolution or Human Genetic Entropy? – Dr. John Sanford – video
http://www.metacafe.com/w/4585582
The following study confirmed the “detrimental” mutation rate for humans, of 100 to 300 per generation, estimated by John Sanford in his book “Genetic Entropy” in 2005:
Human mutation rate revealed: August 2009
Every time human DNA is passed from one generation to the next it accumulates 100–200 new mutations, according to a DNA-sequencing analysis of the Y chromosome. (Of note: this number is derived after “compensatory mutations”)
http://www.nature.com/news/200.....9.864.html
This mutation rate of 100 to 200 is far greater than even what evolutionists agree is an acceptable mutation rate for an organism:
Beyond A ‘Speed Limit’ On Mutations, Species Risk Extinction
Excerpt: Shakhnovich’s group found that for most organisms, including viruses and bacteria, an organism’s rate of genome mutation must stay below 6 mutations per genome per generation to prevent the accumulation of too many potentially lethal changes in genetic material.
http://www.sciencedaily.com/re.....172753.htm
Petrushka (#5):
I’m always confused about how information is quantified. For example, if you ran Shakespeare’s works through a spell checker and regularized the spelling, would the amount of information change?
What if a typographical error in preparing a manuscript changed the spelling of a word to another variant used by the same author?
Along those lines, under what circumstances would a copy error in biological reproduction change the quantity of information?
I think you are repeating here, more clearly, a question you already asked in the Ayala thread, and I would like to answer it here.
To do that, I have to cite here again my definition of dFSCI and of its measure (excuse me for continuously quoting myself; it’s just to avoid repeating anew each time things already clarified).
So, here is my definition:
For all these reasons, I have chosen to debate only a very specific subset of CSI, where all these difficulties are easily overcome. That subset is dFSCI. A few comments about this particular type of CSI:
1) The specification has to be functional. In other words, the information is specified because it conveys the instructions for a specific function, one which can be recognized and defined and objectively measured as present or absent, if necessary using a quantitative threshold. It is interesting to observe that the concept of functional specification is earlier than Dembski’s work.
2) The information must be digital. That avoids all the problems with analog information, and allows an easy quantification of the search space and of the complexity.
3) The information must not be significantly compressible: in other words, it cannot be the output of an algorithm based on the laws of necessity.
4) If we want to be even more restrictive, I would say that the information must be symbolic. In other words, it has to be interpreted through a conventional code to convey its meaning.
And here is the definition of the measure:
3) CSI in the sense I have given is certainly an objective measure. The measure only requires:
a) an objective definition of a function, and an objective way to ascertain it. For an enzyme, that will be a clear definition of the enzymatic activity in standard conditions, and a threshold for that activity. The specification value will be binary (1 if present, 0 if not).
b) A computation of the minimal search space (for a protein of length n, that would be at least 20^n).
c) A computation, or at least a reasonable approximation, of the number of specific functional sequences: in other words, the number of different protein sequences of maximum length n which exhibit the function under the above definitions.
The negative logarithm of (c/b), multiplied by a, will be the measure of the specified complexity. It should be higher than a conventional threshold (a universal threshold of 10^150 is fine, but a biological threshold can certainly be much lower).
For a real, published computation of CSI in proteins in the above sense with a very reasonable method, please see:
Measuring the functional sequence complexity of proteins.
by Durston KK, Chiu DK, Abel DL, Trevors JT
Theor Biol Med Model. 2007 Dec 6;4:47.
freely available online at:
http://www.ncbi.nlm.nih.gov/pm…..ool=pubmed”
Finally, for those who ask about units, it should be obvious that the complexity is measured in a way which is similar to the way we measure Shannon’s entropy, in bits, with the difference that specification must be present (must have value 1), otherwise there is no functional complexity.
So, let’s start from that and try to apply it to Shakespeare’s work. To make the discussion easier, let’s speak just of Hamlet.
First of all, let’s start with point b), which is the easiest to calculate. What is the minimal search space of the text of Hamlet?
I have pasted an electronic version of the text in Word, and counted the characters, including spaces: they are 172,309. Considering, for simplicity, the alphabet (including punctuation) at 30 characters (they are probably a little bit more), the whole search space would be 30^172309, which is about 2^844314. So the complexity of the minimal search space is about 844314 bits.
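Just as a check on the arithmetic, the search-space figure can be reproduced in a couple of lines; the character count and the 30-symbol alphabet are the assumptions stated above:

```python
# Rough check of the search-space complexity for the Hamlet example above.
# Assumes the ~172,309-character count and a 30-symbol alphabet as stated.
import math

text_length = 172_309
alphabet_size = 30

search_space_bits = text_length * math.log2(alphabet_size)
print(f"minimal search space: about 2^{search_space_bits:,.0f}")
# Prints roughly 845,500 bits; the ~844,314 figure above corresponds to
# rounding log2(30) down to 4.9, so the order of magnitude is the same.
```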
Now, let’s go to point a). Is Hamlet specified? Certainly yes, and so we can give a value of 1 to the specification coefficient. But, unfortunately, there are many different ways to define the function, which are all valid, but each of them will have different consequences on our computation of point c). For simplicity, I will use here a “weak” definition, which is maybe the simplest, coupled to a “strong” functional test:
– Any string which conveys to a reader all the story of the drama, without losing any detail.
We could have added that no emotional connotation, or artistic efficacy, or linguistic detail and connotation be lost, but that would be more difficult to define objectively. So, let’s stick to our first definition. That implies also a possible (strong) test to verify function: in other words, if any reader with a modified version of the text is not able to answer correctly any explicit question about what happens in the drama, even about details, we will assume some loss of function and shift the value of the specification coefficient to 0.
That is obviously rather strong. We could define a lower threshold, specifying that the reader must be able to answer correctly all questions about the characters’ actions, but not necessarily about their exact words, or give any suitable definition we want. The possibility to give different definitions of the functional specification is implicit in the fact that the function is recognized and defined by a conscious observer. The explicit definition can vary, and indeed in the case of language, as we have seen, there are really many possibilities.
That is not a problem, because the measure of dFSCI is in any case pertinent to a single explicit definition of function. So, the measure will vary according to the definition it is based upon, but will be objectively defined for each explicit definition. Any explicit definition of function can be used, provided it is consistent with the model one is deriving from it.
This point is important, so let’s see how it would apply in the case of proteins. Here the definition of function is easier: it is usually the recognized function of the protein, such as its enzymatic activity. It is usually specific and well known. But, to measure dFSCI, we still have to give a quantitative threshold to ascertain the function and give a value of 1 or 0 to the specification coefficient. That is somewhat arbitrary, but if our purpose is to measure function which would be selectable in vivo, then we can put the threshold at a value which is reasonably consistent with that point. In other words, we can use any functional threshold we want, but the important point is that the conclusions we will derive from the measurement in our model are consistent with the value we used.
So, let’s go back to Hamlet. We have our point b), the search space, and we have our point a), an operational definition and a quantitative test.
According to a) the text of Hamlet, in its native form, is definitely specified (value = 1).
How big is c), the size of the functional space, the target space? This is always the most difficult part. I would like anyway to mention here that our definitions give us an explicit way of measuring it: to test all the possible strings of that length (or less) and to define as functional all those which will satisfy our functional test. And just count them.
Unfortunately, that computation, while perfectly possible in principle, is empirically impossible because of the huge size of the search space. That’s why the computation of the target space must usually be approximated by some indirect method.
Now, I have no idea of how to do that for Hamlet (while I have ideas for the proteins, and I refer again to Durston’s paper or to Axe’s recent paper for examples). So, just to go on with our example, let’s pretend that we have approximated our number of functional strings to 2^200000 (which is a big number indeed, and IMO should more than cover your examples of different spelling and similar variations).
That said, with the definitions we gave and the values obtained, the dFSCI of Hamlet is:
target space / search space:
2^200000 / 2^844314
result: 2^ -644314
negative logarithm:
644314 bits
multiplied by the specification factor:
644314 * 1 = 644314 fits
This measure does not change with any variation which keeps the text inside the functional island we defined. It drops to zero as soon as the modified text leaves the target space.
Obviously, it is perfectly possible to define the specification coefficient so that it is not binary. In that case, we could define it as a percentage of some reference function, and then multiply the bits complexity by that coefficient to get the fits value of dFSCI.
The point is, we can measure dFSCI, and if we are consistent with our definitions, the measure will be objective and useful.
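To lay the whole computation out in one place, here is a minimal sketch of the measure as defined above. The Hamlet figures are the ones used in this comment, the 2^200000 target space is the assumed (not computed) figure already flagged as a guess, and the protein length and target space in the second example are purely hypothetical illustrations:

```python
# Sketch of the dFSCI measure defined above:
# dFSCI = specification coefficient * -log2(target space / search space).
import math

def dfsci_fits(search_space_log2, target_space_log2, specification=1.0):
    # Spaces are passed as log2 values to avoid handling astronomically large integers.
    return specification * (search_space_log2 - target_space_log2)

# Hamlet: 30^172,309 search space, assumed 2^200,000 target space, specification = 1.
hamlet = dfsci_fits(172_309 * math.log2(30), 200_000)
print(f"Hamlet: about {hamlet:,.0f} fits")  # on the order of the 644,314 fits above

# Protein: minimal search space 20^n; n = 150 and a 2^100 target space are
# purely hypothetical numbers used only to show the shape of the calculation.
protein = dfsci_fits(150 * math.log2(20), 100)
print(f"protein (n = 150, assumed target 2^100): about {protein:,.0f} fits")
```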
BA77: you neglect the mutagenic effect chemicals have on DNA.
I’ve not neglected it at all, but I did point out that it is a relatively minor player in causing harm. There are many levels at which a chemical may act to cause harm, from overt cell death to protein adducts to enzyme agonists and antagonists. To support your assertions you will need to demonstrate that chemical damage is equivalent to DNA damage. Reams of toxicological data do not support your assertions in the least.
You also haven’t made your case for a decline in reproductive health….for any species. Nor have you made any case for a “widespread increase in birth defects”. Good luck with that.
acispencer you state:
“You also haven’t made your case for a decline in reproductive health….for any species. Nor have you made any case for a “widespread increase in birth defects”.
typical darwinian disconnect with reality!
That is the one thing that is certain. The question is: can you establish your “cause” of purely chemical, and “other” mitigating, factors as well as I have established the case for the accumulating mutational load being the cause? You are not even scientifically in the ballpark for making a case for the steady increase of birth defects with your cause. And to top it all off, in “disconnect” you are only arguing for “limited” collateral damage from chemicals! This is ludicrous; exactly how in the world is this limited-damage scenario going to mitigate the buildup of slightly detrimental mutations I clearly established? Even you neo-Darwinists argue that natural selection (death) is the ultimate culler of these detrimental mutations; thus even you must agree, according to the premises of your own philosophy, that birth defects must increase so as to “appear” so that natural selection can act on them and eliminate them. Or do you deny the studies I cited? If so, you have left the bounds of science and there is no point in reasoning with you, since you in fact are not reasoning but clinging to blind faith.
BA77: You are not even scientifically in the ballpark for making a case for the steady increase of birth defects with your cause. And to top it all off, in “disconnect” you are only arguing for “limited” collateral damage from chemicals!
You’ve presented no evidence for a steady increase in birth defects. You’ve provided no baseline measures with which to compare rates of defects to determine if they are increasing, decreasing, or at steady-state. If I state defects are a million a year that sounds like a big number but without a denominator it is a meaningless assertion. Much like what you have presented.
I also haven’t made any case for “limited collateral” damage from chemicals. I have pointed out that there are numerous other cellular targets that chemicals may interact with, and the likelihood that chemicals will interact with these targets before the chemical has a chance to transit a cell’s cytoplasm and enter the nucleus. That chemicals can interact with DNA is not in question; what is in question is your erroneous assertion that the majority of intracellular chemical interactions occur with DNA. Try a bit of research and see what you come up with on the formation of protein adducts.
have fun with the handwaving!
acispencer:
look at this post:
http://www.americanprogress.or.....lette.html
Go down to where it says Declining Reproductive Health part 1, read it slowly, read it again if need be, then please justify this statement of yours:
“You’ve presented no evidence for a steady increase in birth defects.”
then go to this post:
http://www.uncommondescent.com.....tors-note/
Read where it says 100,000 detrimental mutations and 6,000 genetic disorders (as per John Sanford, PhD in genetics), pay attention to the last two peer-reviewed papers in which I cite a detrimental mutation rate in humans above what even evolutionists agree is acceptable, and then please explain to me how I have not made my case. You know, on second thought, don’t even bother; I’m tired of the incoherence of your reasoning, and when corrected you will not listen anyway.
You know, acispencer, after thinking it over, I’ve realized you do have a somewhat reasonable objection, in that the one study does not establish a solid enough basis to draw a firm conclusion about declining reproductive health over long periods of time. Thus I’m sorry for saying that you were not reasonable on that point. But as to the other point, I still hold that evolutionists have a huge elephant in the living room that they are refusing to deal with: the slightly detrimental mutation studies I cited.
Would reproductive success qualify as an objective measure of functionality?
now this may be interesting for you acispencer:
Excerpt: “While the issue is controversial, there are groups of paleontologists who have found evidence suggesting some mass extinctions were gradual, lasting for hundreds of thousands of years,” Kortenkamp said.
http://www.sciencedaily.com/re.....075850.htm
Abrupt and Gradual Extinction Among Late Permian Land Vertebrates in the Karoo Basin, South Africa
Excerpt: the vertebrate fossil data show a gradual extinction in the Upper Permian punctuated by an enhanced extinction pulse at the Permian-Triassic boundary interval,
http://www.sciencemag.org/cgi/.....ct/1107068
“We see a gradual extinction leading to a sharp increase at the P/T boundary. This is followed by a continued extinction after the boundary,” Ward says. The team writes that the pattern “is consistent with a long-term deterioration of the terrestrial ecosystem,”
http://www.geotimes.org/apr05/NN_PTextinction.html
I have more studies, trilobites and such, and quotes from leading paleontologists that all consistently show sudden appearance, fairly rapid diversity, then gradual loss of morphological variability, and then finally gradual extinction over long periods of time (save for catastrophic extinctions). Even the recent whale study that came out this last week fits this pattern of rapid diversity and long-term stability.
Acispencer, This pattern that is being found is consistent with what the Genetic Entropy model predicts.
Petrushka:
Would reproductive success qualify as an objective measure of functionality?
If we are evaluating a single protein, the function must be defined for that protein: for example, the specific enzymatic activity of that protein. We must then define a threshold to measure if the function is considered present or not: that could be a specific value of activity in standard lab conditions. Finally, if we are using the evaluation of dFSCI in the context of a model where the function must be selectable, and therefore visible to NS, we have to show that the new functional protein can confer an increase of reproductive success. But that is an indirect effect of the function. The function must be specific for the protein, and must depend on the specific informational sequence of the protein.
Aleta, Cassandra, Petrushka and others interested in quantification of dFSCI:
I suggest that my post #32 here, together with the final part of the discussion on the Ayala thread, could serve as a starting point for an explicit discussion about this fundamental issue, if you are really available for a pragmatic, and not ideological, confrontation on the subject.
At least, let’s not hear any more that nobody in ID wants to discuss these quantitative aspects: it’s exactly the contrary.
I have no doubt that you wish to discuss the quantification of information or entropy or whatever.
I’m wondering if you can provide a specific example where the unit has been objectively defined and measured.
I also wonder what could possibly be a better candidate than reproductive success as a measure of functional information, or entropy.
I would value clear definitions of all the words used in this context.
Does “information”, for example, refer to teleo-semantic or Shannon information or Kolmogorov complexity or some other entity defined especially for the occasion? They are not all the same thing. Teleo-semantic information clearly requires intelligent agents as both sender and receiver but we can also acquire information from weather, rocks and tree-rings where we have no reason to think there is any intelligent agency involved – apart from the observer.
What is meant by “complexity” and how does it differ from information, given that Kolmogorov complexity is treated as part of information theory?
As for “specified”, does that refer to prior constraints on the range of behavior of which a given system might be capable, and can these constraints only come from an intelligent agent or might they originate in natural properties?
My understanding of “digital” is that it derives from computing, where the machines are founded on devices which can occupy one of two discrete states, ‘on’ or ‘off’, represented as ‘0’ or ‘1’. Is it being argued that the component or functional structures of the genome are also strictly binary and that they either work or don’t work?
I think there is a misleading tendency, which is not peculiar to ID proponents but is also prevalent in the evolutionist camp, to regard information as a property or constituent of the genome, where I would see it rather as a property of our model of said genome. It is confusing the map for the territory.
Petrushka:
First of all, I am not discussing entropy here. It’s not a subject I understand well enough to be able to discuss it.
Second, it seems strange that you have not read my posts in the Ayala thread, where “the unit has been objectively defined and measured”, and not only by me. No problem, anyway. I am going to paste them here (see next post). But please, read them.
By the way, the unit is functional bits (or, according to Durston, fits).
Seversky:
As you seem new to this specific discussion, I am pasting here the essential from my posts in the Ayala thread, for your convenience. That is just to set the background. Then, in the next post, I will address specifically your questions, which are certainly pertinent.
So, let’s start:
1) For my definition of dFSCI and of its measure, please refer to my post #32 here.
2) Here is an important discussion about the evolution of protein domains (which was an answer to Petrushka):
Petrushka:
“I am not terribly surprised that most genes, most metabolic mechanisms and all body plans, seem to have been invented or discovered by microbes or very simple organisms, presumably having large numbers and short spans between generations.
It is not surprising that few fundamental inventions have been made by larger and slower reproducing creatures.”
My answer:
“Now, I will just give some data, to show that your affirmations are vague and inaccurate. I will refer to the paper “The Evolutionary History of Protein Domains Viewed by Species Phylogeny”, by Song Yang and Philip E. Bourne, freely available on the internet.
This paper analyzes the distribution of single protein domains as derived from SCOP (the database of protein families) in the evolutionary tree, and even the distribution of their unique combinations.
The total number of independent domains in the whole proteome is 3464 and the total number of combinations is 116,400.
The first important point is that about half of the domain information was already present at OOL (or at least, at the level of LUCA, if you believe in a pre-LUCA life):
Protein domains: 1984; combinations: 4631.
The mean domain content per protein, at this level, is 2.33.
So, the first point you have to explain is how 1984 protein domains were already working at the time of our common ancestor (supposedly 3.5 – 3.8 billion years ago), while not one of them can be found by a random search.
The remaining half of the domains was discovered in the course of evolution, with the following pattern of new domains and domain combinations:
Archaea: 31 protein domains; 323 combinations.
Bacteria: 467 protein domains; 4537 combinations.
Eukaryota: 520 protein domains; 7192 combinations.
Fungi: 56 protein domains; 3089 combinations.
Metazoa: 209 protein domains; 12304 combinations.
So, the next point is: about 1313 new domains arose after LUCA, and of them only 31 and 467 arose in archaea and bacteria respectively, while the rest was “discovered” by more complex organisms. So, again, how do you explain the 209 new domains in metazoa?
Another point: in the final parts of evolution, the search for new domains seems to be almost completed: for example, only about ten new domains appear in mammalia. On the contrary, the number of new combinations and the average complexity and length of the single protein definitely increase.
So, to sum up:
1) More than half of the information for the proteome is already present at the stage of LUCA (or, if you want, at OOL, unless you can explain how such a complex LUCA originated in a relatively short time from inorganic matter)
2) The remaining new information was “discovered” during evolution, but certainly not only at the time of bacteria: at least half of the new domains appear in organisms more complex than bacteria, and about one quarter appear in metazoa.
So, if it is true that the search for new domains “slows down” after OOL, and almost stops in the last stages, the successful search for new domains in the ocean of the search space definitely goes on for the whole evolutionary span.
3) While the search for new domains slows down, the search for new combinations in more complex proteins increases along the evolutionary tree. In other words, as the repertoire of elementary folds is almost completed (demonstrating that targets not only exist, but can be fully achieved), the search for function is moved to a higher logical level.
4) Finally, we must remember that this analysis is only accomplished at the level of protein genes (1.5% of the genome in humans). The non coding part of the genome constantly increases during evolution, and most of us (and, today, I would say most of the biologists) are convinced that non coding DNA is one of the keys to understanding genome regulation, and therefore body plans and many other things.
Another, more complex level of abstraction and regulatory function which darwinists will have to explain after they have at least tried to explain the previous, simpler levels.”
3) This was an answer to a specific question from BA about the paper quoted previously:
This is the method used in the paper I quoted:
“Mapping of domains and domain combinations to species trees is too time-consuming to do manually. Our approach (see methods), similar to the approach introduced by Snel et al. [30], aims to predict the presence or absence of protein domains in ancestor organisms based on their distribution in present day organisms. Four evolutionary processes govern the presence or absence of a domain at each node in the tree: vertical inheritance, domain loss, horizontal gene transfer (HGT) and domain genesis. (Domain duplication and recombination do not affect domain presence.) Each process is assigned an empirical score according to their estimated relative probability of occurring during evolution, and the minimum overall score depicts the most parsimonious evolutionary processes of each domain or combination (see methods)”.
As you can see, it is based on empirical evidence, and not on functional reasoning.
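Just to make the logic of the quoted method concrete, here is a toy weighted-parsimony sketch of my own (it is not the authors’ actual scoring scheme: it ignores HGT and uses invented event costs, purely as an illustration of how presence or absence at ancestral nodes can be inferred by minimizing total event cost):

# Toy weighted-parsimony (Sankoff-style) reconstruction of domain presence/absence.
# My own simplification: only vertical inheritance, loss and genesis are modelled.
GENESIS, LOSS = 3.0, 1.0                 # invented event costs (genesis penalised more than loss)

def branch_cost(parent_state, child_state):
    if parent_state == child_state:
        return 0.0
    return GENESIS if child_state == 1 else LOSS

def min_cost(node, state, tree, leaves):
    # minimal total event cost of the subtree under `node`, given that `node` has `state`
    if node in leaves:
        return 0.0 if leaves[node] == state else float("inf")
    total = 0.0
    for child in tree[node]:
        total += min(branch_cost(state, s) + min_cost(child, s, tree, leaves) for s in (0, 1))
    return total

tree = {"root": ["leafA", "anc1"], "anc1": ["leafC", "leafD"]}   # made-up little species tree
leaves = {"leafA": 1, "leafC": 1, "leafD": 0}                    # domain observed in A and C, absent in D

print({s: min_cost("root", s, tree, leaves) for s in (0, 1)})    # {0: 6.0, 1: 1.0}: presence at the root is the more parsimonious reconstruction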
Obviously, it is only an estimate, and different approaches could give different numbers. The authors are well aware of that:
“Table 1 lists the predicted number of domains and domain combinations originated in the major lineages of the tree of life. 1984 domains (at the family level) are predicted to be in the root of the tree (with the ratio Rhgt = 12), accounting for more than half of the total domains (3464 families in SCOP 1.73). This prediction is significantly higher than what is generally believed [5,31,32]. There are several reasons to account for the discrepancy. First, previous attempts focused on universal and ubiquitous proteins (or domains) in LUCA [5], so one protein has to exist in the majority of species in each of the three superkingdoms (usually 70%–90%) to be considered as LUCA protein [32]. Second, the root of the tree is still not solved. Thus any domains that are shared by two superkingdoms are counted as originating in the LUCA. Endosymbiosis of mitochondria and chloroplasts and horizontal gene transfer across superkingdoms can result in the same effect, which is moving the origin of protein domains towards the root. Third is our limited knowledge of protein domains. On average nearly 40% of predicted ORFs in the genomes under study cannot be assigned to any known domain. When assigned in the future they may turn out to be species or lineage specific domains that emerged relatively late on the tree of life. There are also a significant number of domains which emerge at the root of bacteria and eukaryotes. Likewise, this can be explained by the unresolved early evolution at the origin of bacteria and eukaryotes.”
So, we are not taking these numbers as absolute, but it is perfectly reasonable that the general scenario will be something like that, even if the numbers can change.
The conclusions of the authors appear reasonable:
“Notwithstanding, these data suggest that a large proportion of protein domains were invented in the root or after the separation of the three major superkingdoms but before the further differentiation of each lineage. When tracing outward along the tree from the root, the number of novel domains invented at each node decreases (Figure 4A). Many branches, and hence species, apparently do not invent any domains. As previously discussed, this might be a result of the incomplete knowledge of lineage specific domains.”
A functional approach is certainly possible too. That implies having a model of the simplest living cell, and trying to estimate the number of necessary proteins and of necessary domains. The approach, anyway, is more conceptual, and not necessarily connected to evidence. Moreover, the definition of simplest living cell can vary, and a strictly reductionist approach, a la Venter, is certainly cutting down many of the naturally occurring functions. Therefore, I think that the empirical approach based on the current distribution of domains and sequences is preferable, more scientific, and, I would say, perfectly “darwinian” (so that, for once, we could agree with our adversaries at least about one methodology).
Two facts cannot be questioned:
1) A lot of the protein domains were “discovered” at the root of the evolutionary tree. So darwinists must not only find a vaguely credible theory for OOL, but also one which is extremely efficient in respect to time, much more efficient than all later darwinian evolution, in order to explain how approximately half of the basic protein information was available after, say, 200 – 300 My from the start (whatever the “start” was).
2) Basic protein domain information is only the start, and definitely not the biggest part of the functional information to be explained. Then you have:
a) The space of different protein functions in the context of the same domain (let’s remember that the same fold can have many different functions, and different active sites).
b) The space of multidomain complex proteins, which implies a search in the combinatorial space of all domains.
c) The fundamental space of protein regulation, maybe the biggest of all, which certainly implies at least gene sequence, non coding DNA and epigenetic mechanisms.
d) The space of multicellular integration.
e) The space of body plans, system plans, organ plans, tissue plans, and so on.
f) The space of complex integration to environment and higher cognitive functions (immune system, nervous system).
That’s only a brief and gross summary. Each of these levels poses insurmountable problems for the model of darwinian evolution. Unfortunately, most of these levels cannot yet be treated quantitatively for two reasons:
1) They are too complex
2) We know too little about them
So, for the moment, let’s wait for answers about the first level, protein domain information, which is much easier to analyze.
But I am not holding my breath.
I hope the citation tags work as they should (it’s a problem, in pasting previous posts). Anyway, the rest in next post.
Seversky:
This is my main post from the Ayala thread. I paste it here, even if the final part was already pasted in #32, because it includes an important premise which is especially relevant to your questions. I apologize for the partial repetition:
Now, IMO what BA is saying here is: “you have no algorithmic mechanism to generate functionally specified complex information”, because FSCI can be generated only by a conscious agent.
That’s exactly the point of ID. So, please let’s go back to the classical concept of FSCI, or if you prefer (I definitely do) to its subset of digital functionally specified complex information (from now on: dFSCI).
I have recently posted about that in another thread, debating also its measure (so please, those who say that we never go quantitative about that, please read more carefully).
I paste here some pertinent comments I posted elsewhere:
“2) Consciousness and ID.
I did not realize for a long time the importance given in ID to “consciousness”. It’s hard to fathom how you believe that some process has to experience its environment the way people do (what else could “consciousness” mean) in order for it to create complex specified output. Even bodily organs do incredibly complex things, without having to sense or understand the world the way that you or I do.
Of course consciousness is central in ID theory. ID is about detecting design in things. Design is a process which originates in conscious intelligent beings (the designers). ID affirms that designed objects are recognizable with certainty as such if they exhibit a specific property, CSI. CSI is the main idea in ID. It is objectively recognizable, and in the known world it is always the product of design by an intelligent conscious being (leaving aside biological information, which is the object of the discussion).
A special subset of CSI, digital functionally specified complex information (or, if you want, dFSCI), is specially useful for the discussion. It is easily definable as any string of digital information with the following properties: complexity higher than 10^150 (that is, about 500 bits); no significant compressibility (it cannot be generated through laws of necessity from a simpler string); and a recognizable, objectively definable function.
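As a quick back-of-the-envelope check of that 500-bit equivalence (my own arithmetic, nothing in the definition depends on it):

import math

print(150 * math.log2(10))   # ~498.3: a probability bound of 10^150 corresponds to roughly 500 bits
print(150 * math.log2(20))   # ~648: the search space of a 150-aa protein, 20^150, already exceeds it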
That definition is very strong and useful. According to that definition, dFSCI includes language, software and practically all relevant biological information (in particular, the sequences of protein coding genes and the primary sequences of proteins).
It is easy to show that no example is known of dFSCI (apart from biological information, which is the object of the debate) which does not originate from a conscious intelligent being (humans). And our common experience is that consciousness and intelligence are exactly the faculties used by humans in producing dFSCI.
Biological information is dFSCI (any functional protein is). That’s why ID, with very sound inference based on analogy, assumes that some conscious and intelligent designer is the origin of biological information.
That is, very quickly, the main idea in ID. Neo-darwinism cannot explain the emergence of dFSCI in living beings. The work of a designer can.
I would like to mention that dFSCI originates from conscious intelligent beings directly, or indirectly, through some non conscious machine which has received from an intelligent conscious being the pertinent dFSCI. In other words, Hamlet is dFSCI. Hamlet can be outputted by a PC, but only if someone has inputted it in the software. No computing machine can create Hamlet (or anything equivalent).
Specification, function and purpose are definable only in relation to consciousness. Only consciousness recognizes them actively. So, consciousness is central to ID. Without consciousness, no function can be recognized. With consciousness, function can be defined, recognized and measured. And function is the only relevant form of specification in biological information.
To go to your examples, bodily organs do not output dFSCI, even if they do complex things. A machine can do complex things according to the CSI which has been inputted in the machine, but it cannot generate new dFSCI. The human body as a whole can generate new dFSCI (speaking, writing, programming) only because it is an interface for a conscious intelligent being.
3) Types of digital information.
But complex meaningful sequences will not be found in monotonic strings, only in the amount of variation provided by randomness.
We have three types of digital information:
a) highly compressible strings, like monotonic strings. These are not dFSCI.
b) truly random strings (high complexity, no functional specification). These are not dFSCI.
c) pseudo-random strings, where a recognizable meaning is superimposed on the random structure by an intelligent designer (Hamlet, any software, any long discourse). And, obviously, any functional protein. These are dFSCI.
About that, I would suggest that you read the following paper:
Three subsets of sequence complexity and their relevance to biopolymeric information
by David L Abel and Jack T Trevors
available at the following URL:
http://www.ncbi.nlm.nih.gov/pm…..MC1208958/”
And, about its quantitative measure, another post of mine from another thread:
“As this is a fundamental issue, I will try to be more clear.
There is a general concept of CSI, which refers to any information which is complex enough (in the usual sense) and specified.
Now, while I think that we can all agree on the concept of complexity, some problems appear as soon as we try to define specification.
There is no doubt that specification can come in many forms: you can have compressibility, pre-specification, functional specification, and probably others. And, in a sense, any true specification, coupled to high complexity, is a mark of design, as Dembski’s work correctly affirms. But the problem is, some kinds of specifications are more difficult to define universally, and in some of them the complexity is more difficult to evaluate.
Let’s take compressibility, for instance. In a sense, true compressibility which cannot be explained in any other way is a mark of design. Take a string of 1000 letters, all of which are “a”. You can explain it in two different ways:
1) It is produced by a system which can only output the letter “a”: in other words, it is the product of necessity. No CSI here.
2) It is the output of a truly random system which can output any letter with the same probability, but the intervention of a conscious agent has “forced” an output which would be extremely rare and which is readily recognizable to consciousness. The string is designed to be highly compressible.
In any case, you can see that using the concept of compressibility as a sign of specification is not without meaning, but creates many interpretational problems.
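As a rough illustration of the two situations (a sketch of my own, using Python’s zlib as a crude stand-in for an ideal compressor; real compressibility only gives an upper bound on Kolmogorov complexity):

import random, zlib

monotonic = b"a" * 1000
random_str = bytes(random.choice(b"abcdefghijklmnopqrstuvwxyz") for _ in range(1000))

print(len(zlib.compress(monotonic)))    # a handful of bytes: highly compressible
print(len(zlib.compress(random_str)))   # several hundred bytes: scarcely compressible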
Or, take the example of analog specified information, like the classic Mount Rushmore example. The specification is very intuitive, but you have two problems:
1) The boundary between true form and vague resemblance is difficult to quantify in analog realities.
2) It is difficult to quantitatively evaluate the complexity of analog information.
For all these reasons, I have chosen to debate only a very specific subset of CSI, where all these difficulties are easily overcome. That subset is dFSCI. A few comments about this particular type of CSI:
1) The specification has to be functional. In other words, the information is specified because it conveys the instructions for a specific function, one which can be recognized and defined and objectively measured as present or absent, if necessary using a quantitative threshold. It is interesting to observe that the concept of functional specification is earlier than Dembski’s work.
2) The information must be digital. That avoids all the problems with analog information, and allows an easy quantification of the search space and of the complexity.
3) The information must not be significantly compressible: in other words, it cannot be the output of an algorithm based on the laws of necessity.
4) If we want to be even more restrictive, I would say that the information must be symbolic. In other words, it has to be interpreted through a conventional code to convey its meaning.
Now, in defining such a restricted subset of CSI, I am not doing anything arbitrary. I am only willfully restricting the discussion to a subset of objects which can be more easily analyzed. The discussion will be about these objects only, and any conclusion will be about these objects only. So, if we establish that objects exhibiting dFSCI are designed, I will not try to generalize that conclusion to any other type of CSI. Objects exhibiting analog specified information or compressible information can certainly be equally designed, but that’s not my problem, and others can discuss that.
And do you know why it’s not my problem? Because my definition of that specific subset of CSI includes anything which interests me (and, I believe, all those who come to this blog). It includes all biological information in the genomes, and all linguistic information, and all software.
That’s more than enough, for me, to go on in the discussion about ID.
So, to answer explicitly your questions:
1) The presence of CSI is a mark of design certainly under the definition I have given here (dFSCI), and possibly under different definitions. I am not trying here to diminish in any way the importance of other definitions, indeed I do believe them to be perfectly valid, but here I will take care only of mine.
2) I have no doubt that, under my definition, there is no example known of CSI which is not either designed by humans or biological information. Nobody has ever been able to provide a single example which can falsify that statement. And yet even one example would do.
3) CSI in the sense I have given is certainly an objective measure. The measure only requires:
a) an objective definition of a function, and an objective way to ascertain it. For an enzyme, that will be a clear definition of the enzymatic activity in standard conditions, and a threshold for that activity. The specification value will be binary (1 if present, 0 if not).
b) A computation of the minimal search space (for a protein of length n, that would be at least 20^n).
c) A computation, or at least a reasonable approximation, of the number of specific functional sequences: in other words, the number of different protein sequences of maximum length n which exhibit the function under the above definitions.
The measure of the specified complexity is then a × [−log2(c/b)]: the negative base-2 logarithm of the ratio of the target space to the search space, multiplied by the specification value. It should be higher than a conventional threshold (the universal threshold of 10^150, about 500 bits, is fine, but a biological threshold can certainly be much lower).
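Here is a minimal sketch of that measure in Python (my own, with a made-up functional fraction; estimating the target space is of course the hard part in practice, which is what the Durston paper cited next addresses):

import math

def dfsci_bits(functional_fraction, spec=1):
    # spec: the specification value a (1 if the defined function is present, 0 if not)
    # functional_fraction: the ratio c/b of target space to search space
    if spec == 0:
        return 0.0
    return -math.log2(functional_fraction)

# hypothetical example: suppose roughly 1 in 10^108 sequences of the relevant length is functional
print(dfsci_bits(1e-108))          # ~359 bits
print(dfsci_bits(1e-108) > 500)    # False: below the universal 500-bit threshold in this example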
For a real, published computation of CSI in proteins in the above sense with a very reasonable method, please see:
Measuring the functional sequence complexity of proteins.
by Durston KK, Chiu DK, Abel DL, Trevors JT
Theor Biol Med Model. 2007 Dec 6;4:47.
freely available online at:
http://www.ncbi.nlm.nih.gov/pm…..ool=pubmed”
Finally, for those who ask about units, it should be obvious that the complexity is measured in a way which is similar to the way we measure Shannon’s entropy, in bits, with the difference that specification must be present (must have value 1), otherwise there is no functional complexity.
Seversky (#43):
Now, let’s go to your specific questions:
I would value clear definitions of all the words used in this context.
Does “information”, for example, refer to teleo-semantic or Shannon information or Kolmogorov complexity or some other entity defined especially for the occasion? They are not all the same thing. Teleo-semantic information clearly requires intelligent agents as both sender and receiver but we can also acquire information from weather, rocks and tree-rings where we have no reason to think there is any intelligent agency involved – apart from the observer.
As should be clear from my previous posts, the information debated in ID is CSI, and the subset of CSI defined and debated by me in all the previous contexts is dFSCI. You can find an explicit definition above. We can debate any point which is not clear to you about that.
It is in a sense a classical measure of complexity, in the same sense as Shannon information, and it is expressed in the same unit (bits). Shannon information is used more explicitly in the procedure elucidated by Durston in his fundamental paper, and the variation in Shannon information is used there to indirectly measure the functional complexity in sets of natural proteins with the same function. I refer you to that paper for that. Again, we can discuss these points in detail if you want.
The requirement that dFSCI be “scarcely compressible” is necessary to exclude strings which can be the output of an algorithm based on necessity, as debated in my previous posts. Protein sequences definitely satisfy that requirement, as you can see from the following non ID paper:
http://www.bioinf.unisi.it/mat.....erzel1.pdf
Therefore, if strings which exhibit dFSCI have to be scarcely compressible by definition, their general complexity is approximately the same as their Kolmogorov complexity.
(just a note here: I am not a mathematician, so if I make some formal errors in specific issues, I will be happy to be corrected).
Regarding the information in simple data, like weather data, I agree that it is information, but it is not functional information: simple data in themselves do not convey any explicit function coded in their sequence which can be recognized by a conscious observer. I have debated this point in some detail with KF on another thread. If you are interested, I will paste the link.
What is meant by “complexity” and how does it differ from information, given that Kolmogorov complexity is treated as part of information theory?
Complexity is a classical measure of information in bits. Functional complexity is the measure of information which derives from the negative base-2 logarithm of the ratio of the target space to the search space, in scarcely compressible strings, multiplied by a “specification coefficient” which can be binary (function present or absent), or can be a number from 0 to 1, expressing a quantitative assessment of the function.
The definition of the function and its measure (whether categorical or quantitative) depend on a conscious observer, but are made explicitly.
I have already answered about Kolmogorov complexity.
As for “specified”, does that refer to prior constraints on the range of behavior of which a given system might be capable, and can these constraints only come from an intelligent agent or might they originate in natural properties?
Specified refers to the satisfaction by the system of an explicit function defined by a conscious observer, and measured according to an explicit procedure. It is, certainly, a “constraint on the range of behavior of which a given system might be capable”. It is not necessarily “prior”: the function is observed and recognized in the functional system (for instance, the protein), and explicitly defined. Then, it becomes a prior constraint for the evaluation of the functional (target) space.
And yes, the function can “only come from an intelligent agent”, in the sense that only an intelligent agent can define a function, because the concept of function implies purpose, and purpose is a conscious representation. Obviously, natural properties (in the sense of non designed systems) can exhibit “apparent” function: an intelligent agent could recognize a function in a system which was not designed by an intelligent agent (that would be a “false positive” specification). But those false positives are never complex beyond some definite threshold of complexity (let’s say, for the moment, Dembski’s UPB). All information which is apparently specified and complex is designed by an intelligent agent (leaving out biological information, which is the object of our discussion, and whose status is what has to be decided).
My understanding of “digital” is that it derives from computing, where the machines are founded on devices which can occupy one of two discrete states, ‘on’ or ‘off’, represented as ‘0’ or ‘1’. Is it being argued that the component or functional structures of the genome are also strictly binary, and that they either work or don’t work?
No, “digital” refers to the fact that the information in the string is written according to a numeric code, and is read according to it. Probably, it could be defined “digital and symbolic”. I am not here to fight about words. The concept is simple. I am referring in particular to the information in protein coding genes. It is digital, because it is a sequence of 4 “letters” (the nucleotides), read in “words” of 3 nucleotides, according to a specific redundant code which is contained in a specific set of enzymes (Aminoacyl tRNA synthetases). The information about protein sequence is symbolically coded in the gene sequence. It could certainly be expressed in classical binary code, through a simple conversion. The fact that the DNA information uses a quaternary code does not change anything in itself.
I am using “digital” to exclude from the discussion analog information, which requires an analog-to-digital conversion to be expressed in digital form. I am doing that not because analog information cannot be CSI (it definitely can), but because digital information is easier to discuss, and because all information in protein genes is digital.
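To make the point concrete, here is a tiny sketch of my own, using only a hand-picked fragment of the standard genetic code (the codon table below is deliberately incomplete and purely illustrative):

# 2-bit re-coding of the quaternary nucleotide alphabet (any fixed convention would do)
NUC_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

# a tiny, incomplete fragment of the standard genetic code, for illustration only
CODON_TABLE = {"ATG": "Met", "AAA": "Lys", "GGT": "Gly", "TGG": "Trp", "TAA": "STOP"}

gene = "ATGAAAGGTTGGTAA"
codons = [gene[i:i + 3] for i in range(0, len(gene), 3)]

print("".join(NUC_TO_BITS[n] for n in gene))       # the same string re-expressed in binary
print([CODON_TABLE.get(c, "?") for c in codons])   # ['Met', 'Lys', 'Gly', 'Trp', 'STOP']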
I think there is a misleading tendency, which is not peculiar to ID proponents but is also prevalent in the evolutionist camp, to regard information as a property or constituent of the genome, where I would see it rather as a property of our model of said genome. It is mistaking the map for the territory.
Separating the map from the territory is one of my favourite principles (I am a partial fan of NLP). So, I really don’t want to confuse them. And I don’t think I have.
You can see that I have always explicitly separated the points where the intervention of a conscious intelligent agent is required, from the points which can be defined in an entirely “objective” way. One of my strong points in the above discussion is that no function can be defined in a purely “objective” way. Function is always defined by intelligent agents.
But the other important point is that function can be implemented by intelligent agents in objective systems, and can be recognized by intelligent agents in objective systems. In that case, there is the inference of purpose made by an intelligent observer about an intelligent agent who designed the system.
Such an inference can be right or wrong. If it is wrong, we have recognized a “pseudo-function”: the system was not designed, no purpose was implemented in it by any intelligent designer, and we have a false positive. If it is right, we have correctly recognized a conscious process through its intelligently designed output.
Now, the whole point of ID is: complexity allows us to distinguish between true function and pseudo function: no pseudo-function is ever complex, not beyond some appropriate conventional threshold. Complexity eliminates the problem of false positives. (False negatives cannot be eliminated: simple systems can be designed, but not recognizable as such with certainty).
So, functional information is always “a property of our models”. That’s the point. Our model is a way to recognize the model used by another intelligent agent in designing the objective system. Complexity excludes the possibility of error due to random occurrence of pseudo-function.
In that sense, design recognition is a successful communication of meaning between two intelligent agents. The map is not the territory, but maps can definitely be shared.
Lots of words, but no example of a measurement. I disagree that an objective unit of measurement has been established.
You haven’t even mentioned the most obvious complicating fact: that the value or fitness of any allele can change over time. It can change because the selecting environment changes, or it can change because of interactions with other changing alleles.
Any theory of biological information must take this into account. There is no fixed target. The “correct” spelling of words shifts. The correct spelling is determined by whether an individual reproduces.
I respectfully disagree with the assertion that “digital” implies binary. We opted for binary coding in computers because it was easy to implement in relays, vacuum tubes and transistors.
The problem continues to be that ID proponents base fundamental conclusions on data which do not exist.
Until you can point to a specific genome and say it contains x bits of information, and that the genome of another individual contains x+y or x-y bits of information, the claims regarding creation of information are empty.
I can’t think of any instance in science where conclusions about quantities preceded the measurement of those quantities.
Petrushka (#48):
You are really beyond any hope. Is that your idea of a discussion?
Why do you say that a unit of measure has not been established? You go on saying that, and I go on saying that the unit is functional bits, or fits. Bits, exactly as in the measurement of Shannon’s H. Do you understand that? Functional, because you measure the information in functional strings. Is that clear? If not, please say why. And please, read the Durston paper, where the fits are rigorously defined.
And what has the value of fitness of alleles to do with the measurement of the FSCI of a protein? I have already answered that, but I believe it’s useless. The function of a protein is a biochemical property, it has nothing to do with fitness, or with fitness landscapes. The enzymatic activity of a protein can be easily measured in a lab, in absolute units. It requires no assumption about fitness landscapes, allele changes or anything else. Just the measurement of an objective biochemical property. And you can find that property listed in all protein databases for many known proteins, in the field “function”. Is that clear? If not, please state why.
I have already answered your “no fixed target” argument in great detail on another thread, without any comment from you. I will not do it again here.
I have never stated that “digital” means “binary”. That was Seversky’s statement, if I am not wrong. I have only said that “digital” means “coded in numbers”, and that the quaternary alphabet of DNA can be easily translated into binary, if for any reason we want to do that.
Petrushka (#49):
Again, read Durston’s paper. He has measured the functional information in about twenty protein families. But please, read it!
And, just a bit of information for you: we do know the sequences of a lot of proteins in the proteome. So, a lot of calculations can be made, if darwinists will ever care to offer some real scenario, even if only hypothetical, of molecular evolution.
Now, please, add some other short post which has nothing to do with the tons of detailed arguments which I have offered.
I wish you the best.
gpuccio @ 47,
Applying a complexity measurement to a pattern from simply a mathematical point of view, is almost impossible without knowing a lot about the intended use of that pattern.
That’s why Petrushka says @ 48,
In other words, the FS component of your dFSCI, has changed.
As an example of trying to compute your dFSCI without an environment, which of the following has more dFSCI?
1) 010011010010
OR
2) 111111111111
You linked me to this:
So according to the paper you linked and asked me to read, functional proteins have, on average, one percent less information than non-functional proteins. Furthermore, “little deviation from randomness in the sequence is needed for a protein to recognize a specific receptor surface.”
So I am going to ask again, of what use is a theory of biological information or biological entropy that does not take into account the effects of change on viability and reproductive success?
It seems to me that the study you linked pretty much obliterates the notion that every bit of a protein producing gene needs to be specified.
Maybe one percent, or maybe less, since the article says non-randomness is not required for functionality.
I suspect this has some consequences for any computation of probability.
It might seem paradoxical that a functional sequence would be less complex than a random one, but the information theory you endorse seems to assign a maximum quantity to random strings.
I’m not sure the Durston paper means what you think it means. It seems to be a proposal rather than a finished product.
Furthermore, it’s focus in on the evolution of proteins. It doesn’t suggest that such evolution is impossible:
http://www.tbiomed.com/content/4/1/47
From the references you provided, and which, I assume you endorse as accurate, I gather the following:
1. Protein sequences appear to be lightly edited random sequences.
2. Protein function can change gradually from non-functional through varying degrees of functionality. (It can also be dramatically affected, for better or for worse, by a single point mutation.)
3. Measures of information must include fitness. There are many possible ways of defining fitness, but the only one relevant to evolution is reproductive fitness. Changes in proteins that affect things like hair color are relevant only if they increase or decrease reproductive success.
gpuccio, neither of the links in message 48 work. They both have ellipses inserted into them.
Petrushka (#52 – 56):
That’s better. Now you have made specific statements, and we can discuss.
I will try to be as clear as possible, but the subject requires some patient attention.
1) Random sequences (what Durston calls “the null state”).
A truly random sequence of AAs, of length n, has a maximum value of Shannon’s H of 4.32 bits per site. This is easily calculated as follows:
Let’s pretend that n = 100
The total complexity of the string will be 20^100, that is about 10^130, that is about 2^432. So, the total complexity of the string is 432 bits, and the complexity per site is 4.32 bits.
That value, as you certainly know, is often called information, but is in reality a measure of uncertainty, and has nothing to do with meaning or function. Shannon’s theory is not a theory of meaning.
It is perfectly normal that the maximum value of H is obtained in purely random strings, where the uncertainty is maximum. Another way to say that is that in a purely random string compressibility is extremely low and Kolmogorov complexity is similar to total complexity. But purely random sequences convey no meaning or function.
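For those who want to check the arithmetic (my own few lines, assuming nothing beyond the 20-letter amino acid alphabet):

import math

print(math.log2(20))          # 4.3219...: bits per site for a 20-letter alphabet
print(100 * math.log2(20))    # ~432 bits for a random 100-aa sequence
print(20**100 > 10**130)      # True: 20^100 is indeed a little above 10^130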
2) The paper by Weiss et al.
In this paper, the authors try in various ways to evaluate the uncertainty (H) in a specific set of proteins, and also to evaluate the compressibility of the same set. They find that the value of H per site in their set is 4.19 bits per site, only slightly lower than the maximum value, and that compressibility in the same set is very low.
But the key point to understand the meaning of their findings is to see what is the set of proteins which they evaluate. We can find that in the paper:
“As data sets, we use a set of protein sequences with one protein of each superfamily. This superfamily set was introduced by White (1994).”
In other words, they are taking one protein from each superfamily, and they are evaluating H in the whole set. At the same time, they are evaluating if the whole set is compressible.
So, what do their findings mean? As their data set is composed of proteins from different superfamilies, it can be considered as a sample of the whole genome. The low compressibility and the minimal reduction in uncertainty mean two important things:
a) The protein sequences are pseudo-random: there is no intrinsic feature in them which allows them to be distinguished from truly random sequences, except for the function of the protein itself. In other words, functional protein sequences cannot be generated algorithmically, and are not significantly compressible.
b) Proteins from different superfamilies are totally unrelated one from the other: there is no recurrent similarity between them which can significantly reduce the uncertainty. The H per site in a cross-sectional sample of the proteome is almost as large as the maximum H of truly random sequences. That is very important, because it is evidence that protein superfamilies are isolated islands of functionality in the ocean of possible sequences.
3) The paper by Durston et al.
What is the difference then in the procedure used by Durston? It’s simple. Durston applies the calculation of uncertainty (H) to sets of homologue proteins in different species. That’s a completely different scenario. Here each set is formed by sequences in different species which have the same folding and the same function, but may differ in primary sequence in some measure.
Let’s take, for instance, from table 1 of the paper, the case of ribosomal S12, a rather small protein of 121 AAs. The authors have evaluated 603 sequences of that protein from different species. What they do is the following:
a) They calculate the complexity of the null state (a purely random sequence of 121 AAs), which will be: 20^121, that is 2^522. Total complexity: 522 bits; H per site 4.32.
b) They calculate (and this is the truly smart idea) H for the whole set of 603 sequences, assigning values for each site which depend critically on how much the site is conserved in the set. The two extreme situations are: a site ultraconserved in all sequences corresponds to H = 0 bits (no uncertainty); a site which varies in a completely random way in the set corresponds to the maximum H (4.32 bits). Obviously, each aminoacid site can have any intermediate value.
c) For each site, they calculate a-b. This is the reduction of uncertainty for that site due to the fact that sequences are part of a functionally constrained subset. This value, expressed in Fits (functional bits), is a measure of how much that site is “constrained” for that functional sequence: a site ultraconserved will have the maximum Fit value (4.32 – 0 = 4.32 Fits), and therefore the maximum quantity of specified information. A site where aminoacids can vary randomly will have the minimum Fit value (4.32 – 4.32 = 0), and will contribute nothing to the total specified information.
d) Finally, the Fit values of each site are summed, and that gives the total Fit value for that family of proteins. The average Fit value per site is also calculated.
For instance, for the protein family ribosomal S12, while the total complexity of the random state was 522 bits, the Fit value is 359 bits (522 – 163), and the average Fit value per site is 3.0 bits. Another way of expressing that is that the size of the search space is 2^522 (10^157); the size of the target (functional) space is 2^163 (10^49); and the ratio of the two (the specified complexity) is 2^359 (10^108). In other words, the probability of finding a functional sequence of this group by a single random event is of the order of 10^-108.
All that can be found in the paper, but what does that mean? It means that the island of functionality for this specific function is one part in 10^108 of the search space. Quite a remarkable result.
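To show the mechanics of the calculation, here is a toy version of my own, run on a made-up three-site “alignment” (the real method works on hundreds of aligned sequences per family and includes corrections for small samples and gaps which I am ignoring here):

import math
from collections import Counter

H_NULL = math.log2(20)   # ~4.32 bits per site: the null (fully random) state

def fits(aligned_seqs):
    # sum over sites of (H_null - H_observed): a toy version of the fit count
    total = 0.0
    for site in zip(*aligned_seqs):           # one alignment column at a time
        n = len(site)
        h_site = -sum((c / n) * math.log2(c / n) for c in Counter(site).values())
        total += H_NULL - h_site              # reduction of uncertainty at this site
    return total

# made-up mini-alignment: first site fully conserved, the other two partly variable
print(fits(["MKV", "MRV", "MKI", "MKV"]))     # ~11.3 fits over 3 sites (tiny sample, no corrections)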
Before you object again that proteins do not form altogether randomly, I will refer again to the result in 2): each protein superfamily is a separated island of functionality. To pass from one to another by random variation, one has to traverse the ocean of non functional sequences.
And anyway, if you read carefully the Durston paper, you will see that the method can be applied also to the calculation of the change in functional specified complexity with sequence variations, and can therefore be applied quantitatively to specified transition scenarios, for instance from one function to another, if and when proposed by those who believe in them (the darwinists).
So, I think you have completely missed the point of all this reasoning. I hope these further notes may help.
And by the way, what do you mean with that phrase about the Durston paper? “It seems to be a proposal rather than a finished product.” What does that mean? In empirical science, everything is a proposal, and nothing is a finished product, whatever that may mean. Unless you are one of those people who believe that theories can become facts…
warehuff:
I suppose you refer to what is at present post 46. The link for the Abel and Trevors paper is the following:
http://www.ncbi.nlm.nih.gov/pm.....MC1208958/
and here is the link to the Durston paper:
http://www.tbiomed.com/content/4/1/47
I apologize, the problem is that I had pasted the post from another thread. I hope now the links work.
It means simply that there is no consensus to use this methodology. In contrast to the consensus for measurements of Ohms, Amps and such in electrical engineering, or measurements of entropy in physics.
As for who has missed the authors’ intended meaning, I will only note that these papers do not mention seas of non functionality, nor do they suggest any problems for traditional stepwise evolution.
If you find anything like that in the papers, feel free to quote it.
Toronto (#52):
Applying a complexity measurement to a pattern from simply a mathematical point of view, is almost impossible without knowing a lot about the intended use of that pattern.
What’s your problem? Perhaps you have not read my definition (point 1):
1) The specification has to be functional. In other words, the information is specified because it conveys the instructions for a specific function, one which can be recognized and defined and objectively measured as present or absent, if necessary using a quantitative threshold.
That’s a lot of context, and not “simply a mathematical point of view”.
What do you mean about the “intended use of that pattern”? The first requirement to speak of dFSCI is that a conscious observer recognize a function and explicitly define it. For proteins, it is necessary that the protein is recognized as having a function in a context (which, as I have already said, can easily be found in protein databases at the field “function”). An enzyme, for instance, is a biochemical catalyst of a specific, measurable biochemical reaction in the context of the cell. Again, what is your problem?
That’s why Petrushka says @ 48,
[snip..]that the value or fitness of any allele can change over time.[..snip]
[..snip]Any theory of biological information must take this into account. there is no fixed target.[..snip]
In other words, the FS component of your dFSCI, has changed.
Again, what has that to do with the biochemical function of a protein? That does not change.
Moreover, my definition of dFSCI is given for an explicit function, not for “any possible function”. I will not go into this discussion now, it’s too late, but I paste here a brief response I gave to Petrushka on this subject on the Ayala thread:
Petrushka:
“If one has a specific target, such as the works of Shakespeare, odds are that random variation and selection will not produce it”
My response:
“Nor will they produce the proteome. Again, the argument of “evolution can take any direction” is a false argument. Once you have a basic structure for life (DNA, proteins, cells, etc.), directions are extremely narrow. You need proteins which fold and have active sites, and those active sites must do something useful in the context where they are supposed to arise, and must interact with what already exists, and the new proteins must be regulated in the correct way, and so on. Targets, targets everywhere!”
Finally, your two sequences. These are old, useless tricks. I have stated explicitly that one can discuss the presence of dFSCI only if a function is recognized. A function can be in a string, but the observer may not recognize it. That will be a false negative. We have always stated explicitly that the search for dFSCI has potentially a lot of false negatives. There are two main causes: the specification can be present, but the complexity can be low (“simple designs”); or the specification is there but is not recognized (“unrecognized specification”).
And so? If you give me an AA sequence of say 150 AAs, and ask me if it is functionally specified, I have no idea how to answer you. But if I know that the sequence corresponds to the primary structure of a known functional protein, then I can recognize the specification, define it explicitly (a protein which in the lab, in the right context, can do such and such) and even fix a quantitative test to assess the function.
So, again, what is your problem? I thought I had already said all that in my previous posts. So, why do you come here with your tricks?
But, to answer you just the same, what I see in your two sequences is only the following:
a) two short binary sequences (total complexity 12 bits for each). So, no discussion about possible dFSCI here.
b) the second sequence is obviously potentially compressible, so it would never qualify as dFSCI according to my definition, even if it were longer.
c) I recognize nothing in the first sequence, but that could just be my ignorance of mathematics. Now you could say: but it’s the first digits of pi, or something else… And so? I just don’t know. Again, and so?
What has all that to do with “patterns” and “intended use”? A protein sequence has no special pattern: it is pseudo-random. This is one of the points I have made ad nauseam in my previous posts. That’s why I could never recognize it just by looking at it. And there is no problem of “intended use”: just what the protein can do, and indeed does, in the cellular context.
Petrushka (#61):
Well, you are back to “no arguments”.
It means simply that there is no consensus to use this methodology. In contrast to the consensus for measurements of Ohms, Amps and such in electrical engineering, or measurements of entropy in physics.
This is really funny. First of all, why do you compare a methodology regarding biology to measurements in the hard sciences? There are big differences, as you should know.
But the really funny thing is that you are asking for a “consensus” about a methodology proposed in an ID friendly paper! Funny indeed.
As for who has missed the authors intended meaning, I will only note that these papers do not Mention seas of non functionality, nor do they suggest any problems for traditional stepwise evolution.
If you find anything like that in the papers, feel free to quote it.
First of all, what interests me in a paper are its facts and conclusions, not the political opinion of the authors. I am free to quote the facts and methodology of a paper even if the authors get to different conclusions from mine.
That said, fortunately that’s not the case here. Durston has come here at UD in the past exactly to explain the relevance to ID of his methodology (and had to go away before completing his work for “unknown reasons”). There was also a video posted here where he explained what his work meant.
And if you want to read words like “seas of non functionality” or some equivalent concept, please go to the site of the new peer reviewed journal “bio-complexity” and read the review by Axe about the problem of the emergence of protein domains. You will find there practically all that I have said, and more. The title? “The Case Against a Darwinian Origin of Protein Folds”.
And guess who posted a comment about the Axe paper? David L. Abel, in person. And the comment? “Excellent paper.”
Regarding superfamilies and difficult transitions: has any biologist proposed that such transitions have taken place?
It seems to me that the papers argue that the transition from random sequence to functional sequence can take place seamlessly. Perhaps even the random sequence has functionality.
Almost any string of alphabetical letters will contain functional substrings — letters, letter pairs, even triplets found in words. The papers you linked argue that something like that is true of proteins.
Apparently in the ID version of biology you can compute probabilities using hypothetical data.
I know that sounds snide, but based on the papers you asked me to read, ID proponents have for some time been unjustifiably calculating probabilities on the assumption that every bit in the genome specifying a protein was critical.
I will refrain from returning the favor. I don’t know what the probabilities are. I’ve seen nothing, however, that conflicts with the usefulness of a protein being a gradient rather than a step function.
Petruska you state:
“I’ve seen nothing, however, that conflicts with the usefulness of a protein being a gradient rather than a step function.”
And the evidence says,,,
The Case Against a Darwinian Origin of Protein Folds – Douglas Axe, Jay Richards – audio
http://intelligentdesign.podom.....9_03-07_00
Minimal Complexity Relegates Life Origin Models To Fanciful Speculation – Nov. 2009
Excerpt: Based on the structural requirements of enzyme activity Axe emphatically argued against a global-ascent model of the function landscape in which incremental improvements of an arbitrary starting sequence “lead to a globally optimal final sequence with reasonably high probability”. For a protein made from scratch in a prebiotic soup, the odds of finding such globally optimal solutions are infinitesimally small- somewhere between 1 in 10exp140 and 1 in 10exp164 for a 150 amino acid long sequence if we factor in the probabilities of forming peptide bonds and of incorporating only left handed amino acids.
http://www.arn.org/blogs/index.....ife_origin
The Case Against a Darwinian Origin of Protein Folds – Douglas Axe – 2010
Excerpt Pg. 11: “Based on analysis of the genomes of 447 bacterial species, the projected number of different domain structures per species averages 991. Comparing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli, provides a rough figure of three or four new domain folds being needed, on average, for every new metabolic pathway. In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10^159 to one in 10^308 possibilities, something the neo-Darwinian model falls short of by a very wide margin.”
http://bio-complexity.org/ojs/.....O-C.2010.1
Evolution vs. Functional Proteins (Mount Improbable) – Doug Axe – Video
http://www.metacafe.com/watch/4018222
Dollo’s law, the symmetry of time, and the edge of evolution – Michael Behe – Oct 2009
Excerpt: Nature has recently published an interesting paper which places severe limits on Darwinian evolution.,,,
A time-symmetric Dollo’s law turns the notion of “pre-adaptation” on its head. The law instead predicts something like “pre-sequestration”, where proteins that are currently being used for one complex purpose are very unlikely to be available for either reversion to past functions or future alternative uses.
http://www.evolutionnews.org/2.....f_tim.html
Severe Limits to Darwinian Evolution: – Michael Behe – Oct. 2009
Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on.,,,Thanks to Thornton’s impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed.
http://www.evolutionnews.org/2......html#more
Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video
http://www.metacafe.com/watch/3995236
“a very rough but conservative result is that if all the sequences that define a particular (protein) structure or fold-set where gathered into an area 1 square meter in area, the next island would be tens of millions of light years away.”
Kirk Durston
Stephen Meyer – Functional Proteins And Information For Body Plans – video
http://www.metacafe.com/watch/4050681
The best evidence evolutionists have for gradual ascent of proteins?
A Man-Made ATP-Binding Protein Evolved Independent of Nature Causes Abnormal Growth in Bacterial Cells
Excerpt: “Recent advances in de novo protein evolution have made it possible to create synthetic proteins from unbiased libraries that fold into stable tertiary structures with predefined functions. However, it is not known whether such proteins will be functional when expressed inside living cells or how a host organism would respond to an encounter with a non-biological protein. Here, we examine the physiology and morphology of Escherichia coli cells engineered to express a synthetic ATP-binding protein evolved entirely from non-biological origins. We show that this man-made protein disrupts the normal energetic balance of the cell by altering the levels of intracellular ATP. This disruption cascades into a series of events that ultimately limit reproductive competency by inhibiting cell division.”
http://www.plosone.org/article.....ne.0007385
Thus evolutionists have not shown the “ascent” of even one functional protein.
Petrushka:
#64:
Regarding superfamilies and difficult transitions: has any biologist proposed that such transitions have taken place?
How do you suppose that new superfamilies and new folds emerged? By special creation? 🙂
According to the paper about the evolution of protein domains which I have many times quoted, about half of protein domains must be present in LUCA. You will say: but that is OOL, we are not debating OOL at this moment. That does not solve the problem, but it’s OK for me: one thing at a time.
But the other half, more than 1000 superfamilies, emerged later, hundreds of them in metazoa. Do you want to explain them, or do we just take them for granted? Are biologists still scientists, interested in explaining what we observe?
So, if transitions did not happen, we are back to special creation. If that is your favourite hypothesis, just state it. It isn’t mine.
It seems to me that the papers argue that the transition from random sequence to functional sequence can take place seamlessly.
Absolutely not. The Weiss paper just confirms what we already knew: that superfamilies are distant islands of functionality. But to be sure of that, no paper is necessary. It is enough to take casual pairs of proteins from different superfamilies, input them into BLAST, and verify the percent of homology. I have done that many times. You can do that yourself. No significant homology is found. That’s exactly why superfamilies are superfamilies. And two proteins from different superfamilies are as distant as it’s possible to be, at the primary structure level.
The only different view could be that there is something common to all functional proteins, which makes them in some way part of a generic island of sequences. But the Weiss paper excludes exactly that.
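The BLAST check itself has to be run against the real databases, but the idea behind “percent of homology” can be illustrated with a crude percent-identity count on two already aligned sequences (a sketch of my own; real comparisons use proper alignments and scoring matrices):

def percent_identity(aligned_a, aligned_b):
    # naive identity over an existing alignment of equal length; gaps count as mismatches
    matches = sum(1 for a, b in zip(aligned_a, aligned_b) if a == b and a != "-")
    return 100.0 * matches / len(aligned_a)

# two made-up aligned fragments, purely illustrative
print(percent_identity("MKVLT-AGQ", "MRVLTSAGE"))   # ~66.7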
Perhaps even the random sequence has functionality.
Absolutely not. That is well known. Most random aminoacid sequences do not even start to fold. In a living environment they would be only dangerous corpses. And of the sequences which fold, only a tiny part fold well. And of those which fold well, only a few have specific and useful functions. To be functional, a protein must not only fold very efficiently, but also fold in a way which has some biologic possible use, and have an active site which has some biologic possible use, and be integrated in the environment where it originated, and be correctly regulated, and so on. Targets, again.
Almost any string of alphabetical letters will contain functional substrings — letters, letter pairs, even triplets found in words. The papers you linked argue that something like that is true of proteins.
Where? There is no evidence of that. Show me a substring of a single protein domain which has a selectable function. Anyway, the Axe paper addresses that point very seriously.
The functional unit of proteins remains the domain. And the average domain length is about 130 AAs.
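For anyone who wants to try the comparison described above, here is a minimal sketch of that kind of pairwise check, using Biopython’s local aligner as a rough stand-in for a web BLAST query. The two sequences below are made-up placeholders, not actual members of different superfamilies.

```python
# Rough sketch of a pairwise homology check between two protein sequences,
# using Biopython's PairwiseAligner as a local stand-in for a BLAST query.
# The sequences below are hypothetical placeholders; in practice you would
# paste real sequences taken from two different superfamilies.
from Bio import Align
from Bio.Align import substitution_matrices

seq_a = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # hypothetical protein A
seq_b = "MADEEKLPPGWEKRMSRSSGRVYYFNHITNASQ"   # hypothetical protein B

aligner = Align.PairwiseAligner()
aligner.mode = "local"                                     # BLAST-style local alignment
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10
aligner.extend_gap_score = -0.5

alignment = aligner.align(seq_a, seq_b)[0]

# Percent identity over the locally aligned region.
identities = aligned_len = 0
for (ta, tb), (qa, qb) in zip(*alignment.aligned):
    for x, y in zip(seq_a[ta:tb], seq_b[qa:qb]):
        aligned_len += 1
        identities += (x == y)
print(f"local identity: {100.0 * identities / max(aligned_len, 1):.1f}%")
```

Whether the identity found is significant is, of course, a separate statistical question; a real BLAST run also reports an E-value for that purpose.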
Petrushka (#65):
Apparently in the ID version of biology you can compute probabilities using hypothetical data.
In science, you always build models to explain known data. Models can use assumptions, possibly reasonable assumptions, where data are not yet fully known. And models must be evaluated quantitatively, to establish whether they are consistent with their assumptions and whether they are internally consistent.
Neo-darwinian evolution is the only model which seems not to care about that.
I know that sounds snide, but based on the papers you asked me to read, ID proponents have for some time been unjustifiably calculating probabilities on the assumption that every bit in the genome specifying a protein was critical.
It just sounds false. ID proponents have never done that. It is well known that in protein sequences individual sites have different relevance. Otherwise, every single amino acid would have to be ultraconserved. That is basic protein biochemistry. That’s why ID proponents have never argued that the target of the search is a single structure. The target is a functional space, called the target space. (A toy illustration of the difference follows a little further below.)
Do you really believe that people like Behe, Axe, Abel and Durston, who have seriously been researching this problem for years, and have published about it, are not aware of this simple fact?
You can also read any post of mine on this blog in the last few years, and I challenge you to find one where I make such an assumption.
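To make the distinction concrete, a toy calculation may help. The 130-AA figure echoes the average domain length mentioned earlier, but the number of constrained positions is purely hypothetical, chosen only to show the arithmetic; it is not an estimate from Durston, Axe, or anyone else.

```python
# Toy illustration of a single-sequence target vs. a "target space".
# The 40 constrained positions are a hypothetical figure used only to show
# the arithmetic; they are not an estimate from any published work.
import math

alphabet = 20        # amino-acid alphabet size
length = 130         # average domain length mentioned earlier
constrained = 40     # hypothetical count of strongly constrained positions

p_single = alphabet ** -length             # hit one exact sequence
p_target_space = alphabet ** -constrained  # hit any sequence matching only the
                                           # constrained positions; the other
                                           # sites are free to vary

print(f"single sequence target:  ~10^{math.log10(p_single):.0f}")        # ~10^-169
print(f"functional target space: ~10^{math.log10(p_target_space):.0f}")  # ~10^-52
```

The point of the toy numbers is only that the size of the functional target space, not the raw sequence length, is what sets the exponent.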
I will refrain from returning the favor. I don’t know what the probabilities are.
That’s certainly true. And, like many darwinists, you don’t seem to care.
I’ve seen nothing, however, that conflicts with the usefulness of a protein being a gradient rather than a step function.
I can’t deny that “a protein being a gradient rather than a step function” would certainly be useful, at least for darwinists. The problem is that it is not true.
And you have been shown plenty of evidence and arguments that conflict with the claim that a protein’s usefulness is a gradient rather than a step function. But you can always choose not to look at them, or just not to accept them. You could even discuss them. Your privilege.
gpuccio @ 62,
If the protein results in a change in the body plan that prevents it from reproducing at the same rate as a competitor, that protein has hurt its host’s chance of survival.
The host body plan can go extinct because of that protein.
As in any engineering, we have feedback.
I note that Behe accepts common descent.
Well let’s see what Durston says:
Petrushka, could you please describe in detail the gradual origination of the following protein? (or any other biological protein, for that matter?)
The Laminins – authors Peter Ekblom and Rupert Timpl:
“laminins hold cells and tissues together.” “Electron microscopy reveals a cross-like shape for all laminins investigated so far.”
http://www.truthorfiction.com/rumors/l/laminin.htm
Laminin Protein Molecule – diagram
http://www.soulharvest.net/res.....nner+2.png
Laminin Molecule – Electron Microscope Photograph
http://www.survivorbiblestudy......0slide.jpg
Laminin Protein Molecule – Louie Giglio – a very cool video
http://www.youtube.com/watch?v=F0-NPPIeeRk
Laminin is made up of 3712 amino acids; 20^3712 is roughly 10^4829. To put this in terms similar to what ID theorist William Dembski would use, this protein molecule complex of 3712 amino acids is well beyond the reach of the 10^150 probabilistic resource available to the universe.
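As a quick back-of-the-envelope check of that exponent, assuming only the 3712-residue length quoted above and a 20-letter amino-acid alphabet, a couple of lines of Python give the ballpark:

```python
# Back-of-the-envelope check: number of length-3712 sequences over a
# 20-letter amino-acid alphabet, expressed as a power of ten.
import math

residues = 3712                              # laminin length quoted above
log10_sequences = residues * math.log10(20)  # log10(20^3712)
print(f"20^{residues} is roughly 10^{log10_sequences:.0f}")   # ~10^4829
```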
In fact, Petrushka, though the cross shape is merely suggestive, and not conclusive, as I readily admit, I did not realize just how strong the suggestion actually was until a molecular biologist tried to assert that pagan symbols could also easily be found. His primary example to refute the cross-shaped laminin?
http://en.wikipedia.org/wiki/F.....n_1a0s.png
http://www.uncommondescent.com.....ent-325128
Now that was quite a stretch for him to make that association, was it not? But why would he feel compelled to make such a flimsy rebuttal of a merely suggestive piece of evidence unless the empirical case for the Darwinian formation of proteins were non-existent? If Darwinism had any evidence at all of proteins originating by natural means, then surely this molecular biologist would not have stooped to such a level.
All Of Creation – Mercyme
http://www.youtube.com/watch?v=kkdniYsUrM8
I will repeat what I said on another thread. Most of the genes that code for proteins originated before the Cambrian, or at least they exist in single celled organisms. We may never be able to reconstruct that history.
But I’m not sure what you are asking. Are you commenting on the shape?
I’ve picked up rocks in Virginia in the shape of a cross. Actually, I’ve picked up handfuls of them.
If you believe that everything that has happened did so inevitably, then you can compute some astronomical odds. For example, what are the odds of all your ancestors meeting at exactly the right time and place, enabling them to produce you?
Petrushka, there are examples of completely unique genes and proteins in humans (at least 50 to 100). So can you give an example of just one protein/gene originating by natural means, instead of so cavalierly pushing it back to a former age of pre-Cambrian miracles? Can you even give an example of mutations to existing genes producing morphogenesis of a new species? Can you produce any concrete empirical evidence whatsoever, besides your blind faith, that all this staggering complexity, which dwarfs our puny understanding in molecular biology, originated by purely material processes? Since I’ve been debating this for a few years and see no answer forthcoming from you or any other materialist, could you please tell me the answer to an easier question? Can you please tell me how the universe originated by material processes when no matter, time, or space existed before the creation of the universe?
Testify To Love – Avalon
http://www.youtube.com/watch?v=P5TpPCEcI84
The rebuttals from the materialists on this thread are growing increasingly weak. Actually, they started off weak and have gone downhill.
Petrushka tells GP that he/she thinks ID proponents assume all protein specification (every “bit”) is critical. He/she suggests it is an assumption ID proponents have been making for “some time”, implying this to be a fatal flaw in their thinking and (apparently) through his/her long-time study of the ID argument he/she has come to this thoughtful conclusion about ID proponents.
In turn, GP has been abundantly clear that this is not the case. He goes into much discussion of protein domains. He reminds Petrushka that his/her suggestion would mean that every single amino acid would have to be ultraconserved, and challenges Petrushka to present a single ID proponent who makes such a claim. In the end, he asks Petrushka if he/she really believes that molecular scientists such as Behe, Abel, Axe, and Durston are not aware of this basic understanding of biology.
So what is Petrushka’s response?
As always, it’s to slip past your prior words without acknowledging their falsity, and then change the subject.
Petrushka hilariously retorts “I note that Behe accepts common descent”.
He/she then goes on to quote Durston saying exactly the opposite of his/her original claim.
Geeez.
Upright BiPed:
Thank you for saving me the time 🙂
Petrushka, by the way, I accept common descent too.
I’m not sure how the number 50 to 100 affects my characterization of “most.” It’s my understanding that there are something like nine million unique genes in human intestinal bacteria.
I would stand by my assertion that most genes have originated in microbes.
But looking at articles on human specific genes, I find that they may very well be modifications of inherited genetic material.
http://news.wustl.edu/news/Pages/11349.aspx
I suppose if you don’t accept common descent, you can always hope that a complete gene fell out of the sky, like Athena from the head of Zeus.
But where will your argument go if the new genes turn out to be just slight modifications of inherited sequences?
Fine, then we could be looking for common ground.
I can accept that there are vast unknown areas in biology. My inclination is to look for naturalistic explanations. I do this for two reasons: one, the search for naturalistic explanations has a long history of being productive, and two, the alternative leads to the rather sterile conclusion that some unknown agency did some unspecified thing(s) at unspecified time(s) and place(s) using unspecified methods for unspecified reasons.
There are some really difficult problems in science. Gravity is one.
Galileo found an equation that described gravity as acceleration, but it took the better part of a century before Newton applied this to all objects, including suns and planets.
Then it took another couple hundred years for Einstein to iron out some inconsistencies. And we are still left with an incomplete theory.
At no point in the history of gravity, since Galileo, has there been any progress made by imputing demiurges to explain inconsistencies in data. But even Newton was tempted in this direction, so intelligence and competence are no barrier to this kind of thinking. It has a powerful hold on the human imagination.
I have no illusions that I am going to change anyone’s mind. My motive is to sharpen my own knowledge and abilities. I don’t know if I will continue much on this thread. It depends on whether I find areas that I think need clarifying.
I am completely unconvinced by the argument from probability. Such an argument would require a complete history of changes in genomes, in complete detail. The kind of detail you would need to argue that lotto winners were somehow rigging the game. (It’s been done, both the rigging and the catching.)
Simply arguing that the present condition is improbable carries no weight. Neither you nor I have the detailed history that would expose an instance of tampering with genomes.
If you wish to argue, as Michael Denton does, that existence in the form of physical constants is rigged to produce life, I have no interest in arguing against that. Whatever the history of existence, it does seem to produce life.
http://blog.taragana.com/scien.....hts-13218/
http://www.sciencemag.org/cgi/.....ce.1189406
Petrushka:
I appreciate your #76, and have no reason to question your personal beliefs about science.
I believe in science too, and I do believe it is supporting, and will support, the design scenario. I don’t consider the concept of design input to be “non-natural”, and I am sure that many details about how and when, and perhaps even with what modalities and purposes, the input of design took place will be revealed by scientific investigation in time, if the correct interpretive paradigm (design) is allowed its due place in scientific reasoning.
I hope I have contributed to clarifying at least some aspects of the probability argument, even if you are not impressed by it. I, on the contrary, consider it supremely important.
Your questions have in any case given me the chance to express in detail some points which are very dear to my approach to ID. I thank you for that.
Petrushka, since the entire universe “just fell out of the sky,” to use your own words (a question I asked you which you refused to touch, by the way), then it is perfectly acceptable, scientifically, to explain the sudden appearance of fossils in the fossil record by their “just falling out of the sky.”
The Cambrian Explosion – Back To A Miracle! – video
http://www.metacafe.com/watch/4112218
Deepening Darwin’s Dilemma – Jonathan Wells – The Cambrian Explosion – video
http://www.metacafe.com/watch/4154263
“The Cambrian Explosion was so short that it is below the resolution of the fossil record. It could have happened overnight. So we don’t know the duration of the Cambrian Explosion. We just know that it was very, very, fast.”
Jonathan Wells – Darwin’s Dilemma Quote
You mention the red blood cells of Tibetans as a beneficial mutation (out of over 100,000 catalogued detrimental mutations), so let’s look closer at this supposed “beneficial” mutation of yours:
Of note: the new “beneficial mutations” found in Tibetans, which allow them to survive at extremely high altitudes with less oxygen, are actually found to result in a limit on the red blood cell count of Tibetans:
Tibetans Developed Genes to Help Them Adapt to Life at High Elevations – May 2010
Excerpt: “What’s unique about Tibetans is they don’t develop high red blood cells counts,”
http://www.sciencedaily.com/re.....143453.htm
Yet:
Extremely fit individuals may have higher values—significantly more red cells in their bodies and significantly more oxygen-carrying capacity—but still maintain normal hematocrit values.
http://wiki.medpedia.com/Red_Blood_Cells
Thus the authors of the Tibetan study are incorrect to imply that all high red blood cell counts found in humans are detrimental. And thus this is clearly a loss of overall functional information, and of fitness, for Tibetans, since Tibetans will now have less capacity for work, due to their now restricted oxygen metabolism, than other “extremely fit” humans in a “normal oxygen” environment. i.e. they gained a benefit by burning a molecular bridge, as Dr. Behe would say. Want to name lactase persistence as an example, Petrushka?
Although a materialist may try to claim the lactase persistence mutation as a lone example of a “truly” beneficial mutation in humans, lactase persistence is actually the loss of an instruction in the genome to turn the lactase enzyme off, so the mutation clearly does not violate Genetic Entropy. Yet at the same time, the evidence for the detrimental nature of mutations in humans is overwhelming, for scientists have already catalogued over 100,000 mutational disorders.
Inside the Human Genome: A Case for Non-Intelligent Design – Pg. 57 By John C. Avise
Excerpt: “Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens.”
I went to the mutation database website cited by John Avise and found:
HGMD®: Now celebrating our 100,000 mutation milestone!
http://www.biobase-internation.....mddatabase
I really question their use of the word “celebrating”.
(Of Note: The number for Mendelian Genetic Disorders is quoted to be over 6000 by geneticist John Sanford in 2010)
“Mutations” by Dr. Gary Parker
Excerpt: human beings are now subject to over 3500 mutational disorders. (this 3500 figure is cited from the late 1980’s)
http://www.answersingenesis.or.....ations.asp
Myself, I find humans to be fearfully and wonderfully made:
Fearfully and Wonderfully Made – Glimpses At Human Development In The Womb – video
http://www.metacafe.com/watch/4249713
And I am thankful that we are fearfully and wonderfully made!
Natalie Merchant-Kind And Generous
http://www.youtube.com/watch?v=rdG618TMc5E
BA77:
I may fail to respond to a post for several reasons.
Most likely, I simply have nothing to say. I see no reason to post simply to say I disagree.
There’s also the problem that I simply don’t have time to formulate responses to hundreds of arguments. I have to pick those for which I have the strongest response.
At some point we all drop out of threads. They don’t go on forever. My goal is simply to post my best arguments and see what becomes of them. I prefer it when everyone brings their best game to the table.
Petrushka, you state:
“At no point in the history of gravity, since Galileo, has there been any progress made by imputing demiurges to explain inconsistencies in data. But even Newton was tempted in this direction, so intelligence and competence are no barrier to this kind of thinking. It has a powerful hold on the human imagination.”
Your approach to science is called methodological naturalism, i.e. materialism, and this approach has a track record of consistently failed predictions that have severely hampered science (the rejection of Big Bang cosmology for nearly half a century, and junk DNA, to name two glaring failures).
Materialism compared to Theism within the scientific method:
http://docs.google.com/Doc?doc....._5fwz42dg9
You mention gravity as if gravity were understood to have a purely material cause, yet once again you are wrong in your assumption:
REPORT OF THE DARK ENERGY TASK FORCE
The abstract of the September 2006 Report of the Dark Energy Task Force says: “Dark energy appears to be the dominant component of the physical Universe, yet there is no persuasive theoretical explanation for its existence or magnitude. The acceleration of the Universe is, along with dark matter, the observed phenomenon that most directly demonstrates that our (materialistic) theories of fundamental particles and gravity are either incorrect or incomplete. Most experts believe that nothing short of a revolution in our understanding of fundamental physics will be required to achieve a full understanding of the cosmic acceleration. For these reasons, the nature of dark energy ranks among the very most compelling of all outstanding problems in physical science. These circumstances demand an ambitious observational program to determine the dark energy properties as well as possible.”
http://jdem.gsfc.nasa.gov/docs.....report.pdf
The Mathematical Anomaly Of Dark Matter – video
http://www.metacafe.com/watch/4133609
Dark matter halo
Excerpt: The dark matter halo is the single largest part of the Milky Way Galaxy as it covers the space between 100,000 light-years to 300,000 light-years from the galactic center. It is also the most mysterious part of the Galaxy. It is now believed that about 95% of the Galaxy is composed of dark matter, a type of matter that does not seem to interact with the rest of the Galaxy’s matter and energy in any way except through gravity. The dark matter halo is the location of nearly all of the Milky Way Galaxy’s dark matter, which is more than ten times as much mass as all of the visible stars, gas, and dust in the rest of the Galaxy.
http://en.wikipedia.org/wiki/Dark_matter_halo
Table 2.1
Inventory of All the Stuff That Makes Up the Universe (Visible vs. Invisible)
Dark Energy: 72.1%
Exotic Dark Matter: 23.3%
Ordinary Dark Matter: 4.35%
Ordinary Bright Matter (Stars): 0.27%
Planets: 0.0001%
Invisible portion of the Universe: 99.73%
Visible portion of the Universe: 0.27%
of note: The inventory of the universe is updated to the second and third releases of the Wilkinson Microwave Anisotropy Probe’s (WMAP) results in 2006 & 2008; (Why The Universe Is The Way It Is; Hugh Ross; pg. 37)
Romans 1:20
For since the creation of the world God’s invisible qualities—his eternal power and divine nature—have been clearly seen, being understood from what has been made, so that men are without excuse.
Myself, I find that Newton’s comment in the Principia, arguably one of the greatest works of science ever printed, if not the greatest, still rings loud and true:
“This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent Being. … This Being governs all things, not as the soul of the world, but as Lord over all; and on account of his dominion he is wont to be called “Lord God” παντοκράτωρ [pantokratòr], or “Universal Ruler”… The Supreme God is a Being eternal, infinite, absolutely perfect.”
Sir Isaac Newton – Quoted from what many consider his greatest science masterpiece “Principia”
And once again I find the overwhelming evidence for a Creator to be a source of great joy for which I am extremely thankful. I also find the promise of eternal life that this Creator has bestowed on us through Christ to be true.
Kutless: Promise of a Lifetime – Live
http://www.tangle.com/view_vid.....2b35a1a968
I’ve just said I don’t like posting just to disagree, but here I am posting to disagree. There is no evidence or data that either supports or conflicts with big bang cosmology except that derived from observation and experiment.
At the moment the Big Bang has no implications at all for the beginning of existence. It has no implications for such philosophical or theological concepts as first cause. Many, if not most, physicists no longer regard it as the first event in physical existence.
Junk DNA is still junk, even if a percent or two of what was formerly regarded as junk has some function.
But the discovery that some former junk has function was made by mainstream biologists.
As for what inhibits or enables good science, hunches and intuition may inspire research, but they have nothing to do with outcomes of research. The vast majority of inspired ideas go nowhere in the face of data.
Petrushka, you state:
“At the moment the Big Bang has no implications at all for the beginning of existence. It has no implications for such philosophical or theological concepts as first cause. Many, if not most, physicists no longer regard it as the first event in physical existence.”
First you deny it has implications, which is clearly ludicrous; then, even though it has no implications in your mind (a clear case of denialism), you state that “most” physicists no longer regard it as the first event of physical existence.
I sure wish you would clue me in to where you get all this “evidence,” because I sure can’t find it on Google. What I do find, though, is this:
The Creation Of The Universe (Kalam Cosmological Argument)- Lee Strobel – William Lane Craig – video
http://www.metacafe.com/watch/3993987/
Hugh Ross PhD. – Evidence For The Transcendent Origin Of The Universe – video
http://www.metacafe.com/watch/4347185
Formal Proof For The Transcendent Origin Of the Universe – William Lane Craig – video
http://www.metacafe.com/watch/4170233
“The prediction of the standard model that the universe began to exist remains today as secure as ever—indeed, more secure, in light of the Borde-Guth-Vilenkin theorem and that prediction’s corroboration by the repeated and often imaginative attempts to falsify it. The person who believes that the universe began to exist remains solidly and comfortably within mainstream science.” – William Lane Craig
http://www.reasonablefaith.org.....38;id=6115
Inflationary spacetimes are not past-complete – Borde-Guth-Vilenkin – 2003
Excerpt: inflationary models require physics other than inflation to describe the past boundary of the inflating region of spacetime.
http://arxiv.org/abs/gr-qc/0110012
“It is said that an argument is what convinces reasonable men and a proof is what it takes to convince even an unreasonable man. With the proof now in place, cosmologists can no longer hide behind the possibility of a past-eternal universe. There is no escape: they have to face the problem of a cosmic beginning.” Alexander Vilenkin – Many Worlds In One – Pg. 176
It is also very interesting to note that among all the “holy” books, of all the major religions in the world, only the Bible was correct in its claim for a transcendent origin of the universe. Some later “holy” books, such as the Mormon text “Pearl of Great Price” and the Qur’an, copy the concept of a transcendent origin from the Bible but also include teachings that are inconsistent with that now established fact. (Ross; Why The Universe Is The Way It Is; Pg. 228; Chpt.9; note 5)
Then you had the audacity to say that the materialists’ junk DNA prediction was only off by “a percent or two” (thanks for the laugh):
Functionless Junk DNA Predictions By Leading Evolutionists
http://docs.google.com/View?id=dc8z67wz_24c5f7czgm
Evolutionists were notoriously wrong in predicting that the 95%+ of the genome which does not directly code for proteins was junk, but instead it is found that:
Nature Reports Discovery of “Second Genetic Code” But Misses Intelligent Design Implications – May 2010
Excerpt: Rebutting those who claim that much of our genome is useless, the article reports that “95% of the human genome is alternatively spliced, and that changes in this process accompany many diseases.” … the complexity of this “splicing code” is mind-boggling: … A summary of this article, also titled “Breaking the Second Genetic Code,” in the print edition of Nature summarized this research thus: “At face value, it all sounds simple: DNA makes RNA, which then makes protein. But the reality is much more complex.” … So what we’re finding in biology are:
# “beautiful” genetic codes that use a biochemical language;
# Deeper layers of codes within codes showing an “expanding realm of complexity”;
# Information processing systems that are far more complex than previously thought (and we already knew they were complex), including “the appearance of features deeper into introns than previously appreciated”
http://www.evolutionnews.org/2.....of_se.html
Yeah, Petrushka, what’s a one percent, or two percent, or ninety percent difference in being wrong amongst friends? You’re right, I’m probably just being picky over the exact numbers here:
Matthew 10:31
“And even the very hairs of your head are all numbered. So don’t be afraid; you are worth more than many sparrows.”
You then state:
“The vast majority of inspired ideas go nowhere in the face of data.”
Then why in the world does the “inspired” idea of neo-Darwinism refuse to heed the crushing weight of scientific evidence found against it?
As Dr. Hunter says of Darwinism, religion drives science and it matters.
Switchfoot – Dare You To Move
http://www.youtube.com/watch?v=iOTcr9wKC-o
I said beginning of existence, not beginning of our universe.
Petrushka, since I do have a little clue in this area, please do elaborate:
I will stand by the claim that the overwhelming percentage of junk DNA is just junk.
http://sandwalk.blogspot.com/2.....ncode.html
If you have a clue regarding big bang cosmology, why would you need elaboration when I say the Big Bang is no longer regarded as a necessarily unique event?
Obviously physics at this level is speculation, but so is philosophy and theology.
My point would be that there are mathematically consistent descriptions of a meta-universe in which big bangs are neither unique nor rare.
It might even be possible to detect other universes. Fun stuff.
Petrushka, please read the Vilenkin quote from “Many Worlds In One” carefully and then compare it to your materialistic meta-universe assumption. Shake well, let it sink in.
You finally cite a site (Moran’s), but clinging to his coattails you will be just as wrong, and worse yet you will be an obstacle to scientific progress:
On the roles of repetitive DNA elements in the context of a unified genomic-epigenetic system. – Richard Sternberg
Excerpt: It is argued throughout that a new conceptual framework is needed for understanding the roles of repetitive DNA in genomic/epigenetic systems, and that neo-Darwinian “narratives” have been the primary obstacle to elucidating the effects of these enigmatic components of chromosomes.
http://www.ncbi.nlm.nih.gov/pubmed/12547679
Concluding statement of the ENCODE study:
“we have also encountered a remarkable excess of experimentally identified functional elements lacking evolutionary constraint, and these cannot be dismissed for technical reasons. This is perhaps the biggest surprise of the pilot phase of the ENCODE Project, and suggests that we take a more ‘neutral’ view of many of the functions conferred by the genome.” http://www.genome.gov/Pages/Re.....e05874.pdf
No Such Thing As ‘Junk RNA,’ Say Researchers – Oct. 2009
Excerpt: Tiny strands of RNA previously dismissed as cellular junk are actually very stable molecules that may play significant roles in cellular processes, http://www.sciencedaily.com/re.....105809.htm
Arriving At Intelligence Through The Corridors Of Reason (Part II) – April 2010
Excerpt: In fact the term ‘junk DNA’ is now seen by many an expert as somewhat of a misnomer since much of what was originally categorized as such has turned out to be pivotal for DNA stability and the regulation of gene expression. In his book Nature’s Probability And Probability’s Nature author Donald Johnson has done us all a service by bringing these points to the fore. He further notes that since junk DNA would put an unnecessary energetic burden on cells during the process of replication, it stands to reason that it would more likely be eliminated through selective pressures. That is, if the Darwinian account of life is to be believed. “It would make sense” Johnson writes “that those useless nucleotides would be removed from the genome long before they had a chance to form something with a selective advantage….there would be no advantage in directing energy to useless structures”.
http://www.uncommondescent.com.....n-part-ii/
Cells Are Like Robust Computational Systems, – June 2009
Excerpt: Gene regulatory networks in cell nuclei are similar to cloud computing networks, such as Google or Yahoo!, researchers report today in the online journal Molecular Systems Biology. The similarity is that each system keeps working despite the failure of individual components, whether they are master genes or computer processors. ,,,,”We now have reason to think of cells as robust computational devices, employing redundancy in the same way that enables large computing systems, such as Amazon, to keep operating despite the fact that servers routinely fail.”
http://www.sciencedaily.com/re.....103205.htm
3-D Structure Of Human Genome: Fractal Globule Architecture Packs Two Meters Of DNA Into Each Cell – Oct. 2009
Excerpt: the information density in the nucleus is trillions of times higher than on a computer chip — while avoiding the knots and tangles that might interfere with the cell’s ability to read its own genome. Moreover, the DNA can easily unfold and refold during gene activation, gene repression, and cell replication.
http://www.sciencedaily.com/re.....142957.htm
Welcome to CoSBi – (Computational and Systems Biology)
Excerpt: Biological systems are the most parallel systems ever studied and we hope to use our better understanding of how living systems handle information to design new computational paradigms, programming languages and software development environments. The net result would be the design and implementation of better applications firmly grounded on new computational, massively parallel paradigms in many different areas.
http://www.cosbi.eu/index.php/.....rticle/171
Astonishing DNA complexity demolishes neo-Darwinism – Alex Williams:
Excerpt: DNA information is overlapping-multi-layered and multi-dimensional; it reads both backwards and forwards; and the ‘junk’ is far more functional than the protein code, so there is no fossilized history of evolution…All the vast amount of meta-information in the human genome only has meaning when applied to the problem of using the human genome to make, maintain and reproduce human beings.
http://creation.com/images/pdf.....11-117.pdf
etc..etc..etc..
Petrushka, paradoxically, in this age of science and knowledge it is very difficult to change anybody’s mind. As for somebody today being unconvinced by the “argument from probability” (your post 77), you are right, and the problem is that the probability argument has now shifted to genetics, and thus it becomes incomprehensible to most non-experts and ordinary people who have no clue about the intricacies of specialized mathematics, biology, and genetics. (Plus you need mathematically and philosophically trained scientists, biologists, and geneticists to address the issue meaningfully.) To answer you briefly: it is not so much that the winners are rigging the game, but that the game itself is being rigged for them by some invisible intelligent mind.
You think that “junk DNA is plain junk” (your post 83), and, really, who am I to argue with such an argument?! As the great thinker Gautama Buddha put it: “The mind is everything. What you think you become.” Or, to use another description:
“A man who thinks himself a chicken is to himself as ordinary as a chicken. A man who thinks he is a bit of glass is to himself as dull as a bit of glass. It is the homogeneity of his mind which makes him dull, and which makes him mad. It is only because we see the irony of his idea that we think him even amusing; it is only because he does not see the irony of his idea that he is put in Hanwell at all. (Hanwell was a lunatic asylum near London, England.) In short, oddities only strike ordinary people. Oddities do not strike odd people. This is why ordinary people have a much more exciting time; while odd people are always complaining of the dulness of life.”
http://www.cse.dmu.ac.uk/~mwar.....rtho14.txt
All I can say is, I feel sorry for you and for all the odd people who think like you do.
I don’t have the math to evaluate competing cosmologies. I tend to trust physicists who describe possible things consistent with current evidence, but who stop short of pronouncing one of them to be certain.
I have a hearing problem and can’t really get much information from videos. I tend to avoid them.
If that means what I think it means — that functional sequences in DNA are not conserved or subject to selection, then I will bet the conclusion is in error. Something got overlooked.
But it’s a short snippet. Perhaps I don’t read it as the authors intended.
Petrushka, the only known entity that has the “scientific” sufficiency to cause the origination of space-time and matter-energy in the Big Bang is transcendent information. Transcendent information is now shown to be its own unique and independent entity by quantum entanglement and teleportation experiments, especially with the refutation of the hidden-variable argument; i.e. it is the only entity that is shown to be “real” and is also shown to be completely transcendent of space-time and matter-energy. To appeal to any material entity, as you have with the meta-universe, is to leave the bounds of science; i.e. you are imposing your belief instead of looking for a causally adequate solution within science.
Petrushka, that was the concluding statement of ENCODE!
I haven’t read the book, but he seems to be one of the people I had in mind:
My statement was that our universe could be one of many, and that the big bang was not considered the beginning of existence.
Yes, and I believe I already posted a link to an analysis. Such things get sorted out over time. I’m patient.
Meanwhile, the consensus remains that most junk is junk.
I have no interest in first causes.
At least no interest in trying to nail down a first cause. I humbly accept my limitations on that one.
Within the observable frame of existence, I am interested in regular phenomena and lawful behavior.
Petrushka, though I have other issues with the quote you cite, which is taken out of context (please read William Lane Craig’s review of Vilenkin’s “Many Worlds In One” for the full context), let’s concentrate on the last sentence of your quote:
“our descent from the center of the world is now complete.”
Please watch this video:
The Known Universe by AMNH
http://www.youtube.com/watch?v=17jymDn0W6U
Please stop the video at the cosmic microwave background radiation, and then please note the centrality of the Earth’s position in the universe. If we are now completely removed from any cosmic significance, as the quote you cite directly implies, why in the blue blazes are we in such a privileged position of centrality from our unique perspective of observation in the universe?
“There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy.”
William Shakespeare – Hamlet
But hey, Petrushka, I would have been cool with not being central in the universe, because…
I’m Not Cool – Scott Krippayne
http://www.youtube.com/watch?v=P7XR6t8YshQ
Reading about Vilenkin, it strikes me that he posits an infinite number of universes and an infinite number of earths. In such an existence, arguments about probability make no sense. Everything happens somewhere.
Are you sure you intended for me to look at Vilenkin and not at someone else?
Petrushka, I hate to be the one to break the news to you, but you aren’t interested in anything that might point to non-material causes, such as intelligence. Which is completely funny, since purely material processes have never been shown to produce any information whatsoever, and here you are on this blog producing information in abundance. Isn’t it ironic?
Alanis Morissette – Ironic
http://www.youtube.com/watch?v=8v9yUVgrmPY
I have already mentioned having a hearing problem. Following complex oral arguments, particularly in internet videos, just hasn’t been possible for me. When I watch movies and TV, I often turn captions on.
My wonderful DVD player has an option to compress the dialog channel.
Here is Craig’s review of Vilenkin:
http://www.reasonablefaith.org.....38;id=7289
Petrushka, the video is visual.
I can’t think of anything in the history of science that supported a non-material cause.
I mentioned Newton’s dalliance with divine corrections of planetary motions.
In general, when science encounters an inadequacy in a theory, it goes to work refining methodology and analysis.
Petrushka, the entire materialistic framework you insist on using is grossly inadequate to explain origins. But alas, now I’m repeating myself ad nauseam, and you are no closer to being reasonable than you were several days ago. Thus I will move on to other, more fruitful areas.
I think you must have missed the part of the lecture where they pointed out that any point in the universe could look out and think itself the central point.
But I’m not sure why this would matter anyway in a multiverse. Anything happening in our universe, including its birth and death, would be trivial.
I have worked very hard to avoid discussing origins — either the origin of existence or the origin of life.
Going further, I have tried to limit the scope of my posting to the post Cambrian.
This has been difficult, and I have lapsed.