
ID Foundations 15(c) — A FAQ on Front-Loading, thanks to Genomicus


Onlookers, Geno concludes for the moment with FAQs:

____________________

Geno: >>

A Testable ID Hypothesis: Front-loading, part C

In the last two articles on front-loading, I explained what the front-loading hypothesis is all about and some research questions we can ask from a front-loading perspective. This article will be an FAQ about the front-loading hypothesis. So, without further introduction, let’s begin (note: some of the content of this FAQ can be found in the previous two articles).

  1. What is front-loading?

“Front-loading is the investment of a significant amount of information at the initial stage of evolution (the first life forms) whereby this information shapes and constrains subsequent evolution through its dissipation. This is not to say that every aspect of evolution is pre-programmed and determined. It merely means that life was built to evolve with tendencies as a consequence of carefully chosen initial states in combination with the way evolution works.” (Mike Gene, The Design Matrix: A Consilience of Clues, page 147)

In short, this ID hypothesis proposes that the earth was, at some point in its history, seeded with unicellular organisms that had the necessary genomic information to shape future evolution. Necessarily, this genomic information was designed into their genomes.


  2. How is front-loading different from directed panspermia?


In a 1973 paper published in the journal Icarus, Francis Crick and Leslie Orgel proposed the hypothesis of directed panspermia. According to this hypothesis, the earth was intentionally seeded with life forms by some intelligence. The front-loading hypothesis goes a step further and proposes that these life forms contained the necessary genomic information to shape the course of future evolution. For example, the origin of metazoan complexity would have been planned and anticipated by the genomic information in the first genomes. Thus, the front-loading hypothesis is inherently teleological and an ID hypothesis.


  3. Does front-loading propose that all the genes found in life were in the first life forms?


No, it does not. Front-loading does not suggest that all genes were there from the start. Indeed, many genes found in modern life forms are probably the result of purely unplanned mechanisms (gene duplication and subsequent divergence, for example). Nevertheless, genes essential for the origin and development of the metazoan body plan would be present in the first genomes (or have homologs in the first genomes).


  4. If genes necessary for the origin of metazoan life forms were placed in the first genomes, wouldn’t random mutation have degraded them long before they were needed?


This is a common objection to the front-loading hypothesis, but it can be easily answered. These genes would be given an important function in the first life forms, such that they would be preserved across deep time. For the most part, front-loading does not involve simply switching on previously unexpressed genes at some appointed time; the front-loaded genes would be in active use, and therefore under selection, all along.


  5. How could sophisticated molecular systems be front-loaded?

There are two basic solutions to the problem of front-loading sophisticated molecular systems, but more research is needed so that we can find out exactly how these solutions would work in practice. In theory, however, there’s the “bottom up” approach and the “top down” approach to front-loading molecular systems. In the “bottom up” approach, the original cells would contain the components of the molecular machine we want to front-load, but these components would be carrying out functions not related to the function of the molecular machine. Then, somehow (here’s where we need research), something causes them to associate such that they fit nicely with each other, forming a novel molecular machine.

The “top down” approach proposes that the first cells had a highly complex molecular machine, composed of, say, components A, B, C, D, E, F, G, H, and J. If we want to front-load a molecular machine composed of components A, B, C, and D, then this highly complex molecular machine contains a functional subset of A, B, C, and D. In other words, components E, F, G, H, and J would simply have to be deleted from the highly complex molecular machine, resulting in a molecular machine composed of A, B, C, and D. This model is testable: it tentatively predicts that, among homologous molecular machines, the more ancient machine will be the more complex one (a toy illustration follows below).
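To make the “top down” prediction concrete, here is a minimal Python sketch; the machines, components, and ages below are invented placeholders, not real data:

    # Toy sketch of the "top down" model: each machine is a set of components,
    # and the prediction is that a derived machine is a functional subset of
    # its more ancient homolog, so component count should rise with age.
    ancient_machine = {"A", "B", "C", "D", "E", "F", "G", "H", "J"}  # hypothetical
    derived_machine = {"A", "B", "C", "D"}                           # hypothetical

    # The derived machine should be a subset of the ancient one.
    assert derived_machine <= ancient_machine

    # Across a family of homologous machines (ages in Gyr are invented),
    # the prediction is that older machines carry more components.
    machines = {
        "machine_1": (3.5, 9),   # (estimated age, number of components)
        "machine_2": (2.0, 6),
        "machine_3": (0.8, 4),
    }
    by_age = sorted(machines.values())
    print("component count rises with age:",
          all(a[1] <= b[1] for a, b in zip(by_age, by_age[1:])))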

  6. What testable predictions does the front-loading hypothesis make?

There are several testable predictions the front-loading hypothesis makes:

  1. Cytosine deamination. Of the three DNA bases prone to deamination (adenine, guanine, and cytosine), cytosine is the most likely to deaminate. This ultimately results in a C –> T transition. Cytosine deamination often causes severe genetic diseases in humans, so why would a front-loader choose cytosine as a base in DNA? It has been observed that C –> T transitions result in a pool of strongly hydrophobic amino acids, which leads to the following prediction from a front-loading perspective: a designer would have chosen cytosine because it would facilitate front-loading in that mutations could be channeled in the direction of increased hydrophobicity (a rough tally of this effect against the standard codon table is sketched after this list). This prediction would be confirmed if key protein sequences in metazoan life forms were the result of numerous C –> T transitions.
  2. The genetic code. The front-loading hypothesis proposes that the universal optimal genetic code was present at the dawn of life: in other words, we won’t find precursors of sub-optimal genetic codes, because the genetic code was optimal from the start. Further, the front-loading hypothesis predicts that all 20 amino acids would have been used in the first life forms, and that the transcription, translation, and proof-reading machinery would have all been present at the start of life on earth.
  3. Biological complexity. Front-loading predicts that the last universal common ancestor (LUCA) was quite complex, complete with genes necessary for the origin and development of metazoan life forms.
  4. Protein sequence conservation. In eukaryotes, there are certain proteins that are extremely important. For example, tubulin is an important component of cilia; actin plays a major role in the cytoskeleton and is also found in sarcomeres (along with myosin), a major structure in muscle cells; and the list could go on. How could such proteins be front-loaded? Of course, some of these proteins could be designed into the initial life forms, but some of them are specific to eukaryotes, and for a reason: they don’t function that well in a prokaryotic context. For these proteins, how would a designer front-load them? Let’s say X is the protein we want to front-load. How do we go about doing this? Well, firstly, we can design a protein, Y, that has a very similar fold to X, the future protein we want to front-load. Thus, a protein with similar properties to X can be designed into the initial life forms. But what is preventing random mutations from basically destroying the sequence identity of Y, over time, such that the original fold/sequence identity of Y is lost? To counter this, Y can also be given a very important function so that its sequence identity will be well conserved. Thus, we can make this prediction from a front-loading perspective: proteins that are very important to eukaryotes, and specific to them, will share deep homology (either structurally or in sequence similarity) with prokaryotic proteins, and, importantly, these prokaryotic proteins will be more conserved in sequence identity than the average prokaryotic protein. Darwinian evolution predicts only the first part of that: it doesn’t predict the final clause (that the prokaryotic homologs will be more conserved than average). This is a testable prediction made exclusively by the front-loading hypothesis.
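The cytosine prediction in item 1 can be given a rough numerical check against the standard genetic code. The Python sketch below applies every possible C –> T change to every codon and tallies whether the encoded amino acid becomes more or less hydrophobic on the Kyte-Doolittle scale; the codon table and hydropathy values are standard, but the tally itself is only a sketch, not an analysis of real metazoan sequences:

    # Standard genetic code, built in TCAG order.
    bases = "TCAG"
    aa_string = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
                 "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
    codons = [a + b + c for a in bases for b in bases for c in bases]
    codon_table = dict(zip(codons, aa_string))

    # Kyte-Doolittle hydropathy: higher = more hydrophobic.
    kd = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
          "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
          "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
          "K": -3.9, "R": -4.5}

    ups = downs = 0
    for codon in codons:
        for i, base in enumerate(codon):
            if base != "C":
                continue
            mutant = codon[:i] + "T" + codon[i + 1:]  # simulate deamination
            before, after = codon_table[codon], codon_table[mutant]
            if "*" in (before, after) or before == after:
                continue  # skip stop codons and synonymous changes
            if kd[after] > kd[before]:
                ups += 1
            else:
                downs += 1
    print(f"C -> T changes raising hydropathy: {ups}, lowering it: {downs}")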


  7. Does the front-loading hypothesis suggest that evolution was programmed?


No. Front-loading does not propose that all biological innovations were the result of planning and teleology.


Conclusion


As I continue to discuss front-loading with its opponents and proponents, I will add to this FAQ. Please add any questions, comments, etc., below.


About me

Over the years, I have become quite interested in the discussion over biological origins, and I think there is “something solid” behind the idea that teleology has played a role in the history of life on earth. When I’m not doing multiple sequence alignments, I’m thinking about ID and writing articles on the subject, which can be found on my website, The Genome’s Tale.

I am grateful to UD member kairosfocus for providing me with this opportunity to make a guest post on UD. Many thanks to kairosfocus.

Also see The Design Matrix, by Mike Gene.  >>

____________________

So, here we have one specific model for how ID could possibly have been done. Obviously it is not the only possibility, but it is a significant one worthy of investigation. END

Comments
I’ll repeat my question for a third time, since Genomicus has ignored it in the two previous threads, and it became eminently important again in the light of his last comment on this thread. Genomicus said:

“The prediction I proposed goes like this: you find a gene in all eukaryotes, and by comparing its sequences across various eukaryotic taxa, you find that it’s probably very important to eukaryotes. On the other hand, you find a gene in all eukaryotic taxa, but it doesn’t seem to be all that important. Front-loading predicts that the former gene is far more probable to share deep homology with prokaryotic genes than the latter gene.”

So, what you are saying here is the following: a gene that is highly homologous across eukaryotic taxa is more likely to also be highly homologous in prokaryotic taxa than a gene that is not highly homologous among eukaryotes. That would be a pretty straightforward prediction of ANY theory that assumes common descent. I don’t understand why you think that only front-loading would make this prediction. molch
eigenstate: You stated that:
Because you’ve defined “important” in terms of our observations, because “important” is only determined as post-hoc assessment, then BY DEFINITION your prediction will be true. In order for your prediction to hold scientifically, you would have to: 1. Identify which protein sequences you identify as “important”, and declare why they are important, INDEPENDENT of any observation we may have in the field regarding their conservation. 2. Propose those protein sequences as sequences that will be highly conserved as your prediction to be verified by observation. 3. Check our observations. If the basis for “important” is NOT derived from our observations regarding conservation, and we find conservation for those very same sequences you predicted, you're golden, and champagne corks start popping all around.
I'm not sure if you quite understand this prediction (that might very well be my own fault in my manner of explaining it). This prediction states that prokaryotic homologs of important genes in eukaryotes and multicellular taxa will be well conserved in sequence identity, more so than the average prokaryotic protein (it is my understanding that you have read part of The Design Matrix; one of the chapters in that book discusses protein sequence conservation across different prokaryotic taxa). So, how do we find out which eukaryotic genes are important, independent of this prediction? That's fairly simple. You can (a) check its levels of sequence conservation across eukaryotic taxa, (b) compare its substitution rate over a given span of time with that of other genes, (c) delete that gene in different organisms. Once we find that this gene is important in eukaryotes, we can then predict that its prokaryotic homolog will be well conserved in sequence identity across prokaryotic taxa - more so than the average prokaryotic protein. Genomicus
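As a concrete illustration of the test Genomicus describes, here is a minimal Python sketch; the aligned sequences and the background figure are invented stand-ins for real prokaryotic data:

    from itertools import combinations

    def percent_identity(a, b):
        # Percent identical positions between two pre-aligned, equal-length sequences.
        assert len(a) == len(b)
        return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

    def mean_pairwise_identity(aligned_seqs):
        pairs = list(combinations(aligned_seqs, 2))
        return sum(percent_identity(a, b) for a, b in pairs) / len(pairs)

    # Toy alignment standing in for the prokaryotic homolog of an "important"
    # eukaryotic protein, sampled from four prokaryotic taxa.
    candidate = ["MKTAYIAKQR", "MKTAYIAKQR", "MKTAWIAKQR", "MKTAYIGKQR"]
    background = 62.0   # assumed average identity for prokaryotic proteins

    score = mean_pairwise_identity(candidate)
    print(f"candidate: {score:.1f}% identity vs background {background:.1f}%")
    print("prediction holds" if score > background else "prediction fails")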
@Genomicus,
Will reply to you ASAP. Thanks for the link on Mike Gene’s input.
No hurry. eigenstate
eigenstate: Will reply to you ASAP. Thanks for the link on Mike Gene's input. Genomicus
My stance is we exist and there is only one reality behind that existence. It is also my stance that we can determine that reality and that it matters. Joe
Thanks Joe! I think I've got a handle on your stance now. No more questions for the present. Bydand
I do not know what the originally designed populations were, nor what planet they originally inhabited. I infer that living organisms were designed and I understand what we now observe is the result of many, many generations of heritable variance. And again, we can have evolution and not have universal common descent. We can have evolution without humans sharing a common ancestor with knuckle-walkers/ having a knuckle-walker for an ancestor. We can have evolution without having an "inner fish"-> of course except when we eat fish, then we will have an inner fish :) We can have evolution without ever having a new, useful multi-protein configuration. Joe
Thanks for that, Joe. Noted - you're NOT a fan of front-loading. But from posts above, you do seem to believe that OoL was designed, and you have no problem with the fact that evolution has happened since, by means of directed mutations. How is this different from front-loading? Bydand
I don't adhere to front-loading. I do understand its basic concepts and I do understand that there is more than just one flavor. That said, I have looked at and into the "evidence" for the OoL and evolution via unplanned, blind, mindless and mechanistic processes and have yet to find anything that would show those processes can construct new, useful multi-protein configurations. I then ask myself if there are any other types of processes that could account for multi-protein configurations. And the answer is always "design"- either direct or via a targeted search. Joe
1- Yes I have- to wit:
a- the OoL- as in if the OoL was via design then it is a safe bet that evolution is also by design, ergo the mutations are by design
b- break-down the internal programming much as one would do in order to understand a computer's program
2- Spetner's book has as much evidence and data for his claims as evos have for theirs. However Spetner offers better logic and reasoning to support his claims.
3- Directed in the same sense computer programs (especially GAs) are directed Joe
Joe, this is a thread about front-loading, and I was trying to get your take on that, is all. I'm not adhering to any particular position here. Why are you trying to avoid telling me what your evidential base is for adhering to your position on front-loading? Bydand
1. No, you didn't, actually, old chap. I've looked again, but if I missed it, please link, and I'll make full apology.
2. Dr Spetner's book contains no evidence or data to support the hypothesis. If I'm wrong, perhaps you could provide a suitable quote from the book.
3. Determinism: "The philosophical doctrine that every state of affairs is the inevitable consequence of antecedent states of affairs." So you are not a determinist, but you believe that evolution (with which ID is OK, remember?) is in some form directed. Right? Bydand
And I am trying to get from any evo is any evidential reason they might have for adhering to their position. Ya see I find it useless and fruitless to discuss ID with people who will not ante-up by telling us what they accept. Without that all evos do when presented with the evidence for ID is say "That thar ain't no evidence for ID! No it ain't!" Joe
1- I told you how, you just won't accept it for some reason
2- Dr Spetner has it at any change above and beyond point mutations- he has a book explaining why
3- Define "determinist"? I do not believe things are pre-determined. Joe
Yes, your link is purely equivocation as ID is not anti-evolution. And I cannot predict what any given designer will design next but that does not mean the design will come about via stochastic processes. IOW just because we are ignorant of the internal programming of living organisms does not mean mutation is a stochastic process. As far as anyone knows mutations are as directed as computer programs are. And THAT is why the OoL means EVERYTHING. Joe
@14.1.1.1.4 Thanks for clarification, Dr. Liddle - that's why I've been careful to use the term "result of a stochastic process" Perhaps I should have said "non-intentional stochastic process" for the complete avoidance of doubt? Whatever. What I'm trying to get from Joe is any evidential reason he might have, (not just a visceral dislike of "evolutionism"), for adhering to a "front-loading" hypothesis. Bydand
@14.1.1.1.1 I don't know that it HAS been determined in all cases, Joe. I'm not "declaring" anything. That's why I asked if you know of a way to do it. But it seems you do not. Perhaps you have another reason for supporting a "front-loading" hypothesis? And are you a determinist? Bydand
No, it's not "equivocation", Joe. It means "non-deterministic". So does random, usually, although some people use it to mean "non-intentional" or "equiprobable" as well. So a process can be intentional AND stochastic at the same time. It's not an either/or. And obviously we can't tell whether any given mutation is stochastic, because that makes no sense as a question. The right question is whether mutation is a stochastic process, which we know it is, because mutation events are not predictable, individually, but only probabilistically, by determining the probability distributions through observation of their frequencies. Elizabeth Liddle
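For what "determining the probability distributions through observation of their frequencies" can look like in practice, here is a small Python sketch; the culture counts are invented for illustration:

    from math import exp, factorial

    # Mutants observed per replicate culture -> number of cultures.
    observed = {0: 110, 1: 72, 2: 25, 3: 8, 4: 2}
    n = sum(observed.values())
    rate = sum(k * c for k, c in observed.items()) / n   # maximum-likelihood Poisson mean

    def poisson(k, lam):
        return lam ** k * exp(-lam) / factorial(k)

    print(f"estimated mean mutations per culture: {rate:.2f}")
    for k, c in sorted(observed.items()):
        print(f"{k} mutants: observed {c:3d}, Poisson expects {n * poisson(k, rate):6.1f}")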
Nice equivocation- BTW "stochastic" 1 : random; specifically: involving a random variable (a stochastic process) 2 : involving chance or probability : probabilistic (a stochastic model of radiation-induced mutation) Joe
Here you go: http://scholar.google.co.uk/scholar?hl=en&q=evolution&as_sdt=1%2C5&as_ylo=&as_vis=0 By the way, "stochastic" means "non-deterministic". Are you a determinist, these days, Joe? Elizabeth Liddle
OK please point me to this alleged "vast body of scientific literature on the subject".
By the way, did you ever come up with a way of determining whether any given mutation was the result of a “directed” or “stochastic” process?
Have you? Ya see evolutionism claims all mutations are via stochastic processes and I was wondering how that was determined. So perhaps if you could tell us that, then we could tell you how to tell if they are directed. But if your position can just declare all mutations to be stochastic then it is obviously OK for ID to declare at least some are directed-> same standards. Joe
Well, of course I understand that your position is that there is NO science whatever that supports "evolutionism", so I'm not going to bother arguing with that, just point you to the vast body of scientific literature on the subject. By the way, did you ever come up with a way of determining whether any given mutation was the result of a "directed" or "stochastic" process? That would be a great advance for some versions of the "front-loading" hypothesis! Bydand
Whatever Bydand- I am just saying that the way to show us IDists what a "real" hypothesis looks like is to actually produce one that would support evolutionism. That means all criticisms of this ID hypothesis are from whiners who can't or won't produce one for their position. Joe
Joe, I really don't see any "whining" here. Genomicus and Eigenstate seem to me to be having a civilised debate about "front-loading"; which is both interesting and a pleasure to read, whichever point of view one tends to support. No insults, no ad-hominems, and apparent respect for each other's views, despite disagreement. For once, there was no-one acting like little (or even big) babies. Bydand
@Genomicus, Mike Gene has some comments on your hypothesis (and my objection about prediction) on his blog you may be interested to read: A reason for cytosine deamination FYI. eigenstate
@Genomicus, OK, I appreciate your laying out the conditions you understand would have to obtain for falsification. I recognize your quote as something you earlier provided, and to which I also responded earlier. This isn't a falsifiable prediction for the same reason I pointed out before -- YOU DEFINE YOUR TERMS SO THAT THEY ARE SELF-FULFILLING. You say, in closing:
This prediction would be falsified if the prokaryotic homolog of an important, well-conserved eukaryotic protein was found to be no more conserved in sequence identity than the average prokaryotic protein. So, this prediction can be very easily falsified.
Because you've defined "important" in terms of our observations, because "important" is only determined as post-hoc assessment, then BY DEFINITION your prediction will be true. In order for your prediction to hold scientifically, you would have to: 1. Identify which protein sequences you identify as "important", and declare why they are important, INDEPENDENT of any observation we may have in the field regarding their conservation. 2. Propose those protein sequences as sequences that will be highly conserved as your prediction to be verified by observation. 3. Check our observations. If the basis for "important" is NOT derived from our observations regarding conservation, and we find conservation for those very same sequences you predicted, you're golden, and champagne corks start popping all around. But as it is, you have a tautology: 1. Important protein sequences will be highly conserved, more than average. 2. By "important" we mean "conserved more than average". If you think that's incorrect, there's an easy way to show I'm mistaken: define "important" without making any use of the observed conservation dynamics. If you can tell us why they are important INDEPENDENT of their conservation, why they will drive high conservation, then when we observe that conservation, high fives! I'll again refer to GR as a good example of the principle you're missing here. GR did not -- does not -- need to take any heed of the observed dynamics of Mercury's orbit to predict the precession of Mercury's perihelion. Einstein did not need to be even vaguely aware of that data for him to make his prediction. His model for GR produced those predictions, regardless of what had been observed or not. The prediction was thus "blind" to the observations by which it could be tested, and that is the crucial key for the test. This is why that prediction carries epistemic weight, because it couldn't cheat, as you are (unwittingly) trying to do by offering a tautology through postdiction. So, thank you for the response in efforts to provide some basis for falsification based on conservation. But as it stands (and this is now pass two on this without your addressing the tautology problem), it cannot be falsified. For any given protein sequence X, if it's pointed out that it's NOT highly conserved, you're safe. You just say "that wasn't an important sequence!". If I ask "how do you determine whether it was important?", all you have provided us so far is the answer "It depends on whether it is highly conserved. Important sequences will be highly conserved". Hopefully the circular nature of this problem is clear, now? eigenstate
Each of these predictions can be falsified in some way. I'll start with just one so that we don't get bogged down with lots and lots of text. Since the FLH prediction (note: you coined the term "Genomicus-Front-Loading-Theory." However, it needs to be emphasized that front-loading is not a theory, but a hypothesis) concerning protein sequence conservation can be falsified in the most clear-cut way, I'll use that example. So, to summarize this prediction:
In eukaryotes, there are certain proteins that are extremely important. For example, tubulin is an important component of cilia; actin plays a major role in the cytoskeleton and is also found in sarcomeres (along with myosin), a major structure in muscle cells; and the list could go on. How could such proteins be front-loaded? Of course, some of these proteins could be designed into the initial life forms, but some of them are specific to eukaryotes, and for a reason: they don’t function that well in a prokaryotic context. For these proteins, how would a designer front-load them? Let’s say X is the protein we want to front-load. How do we go about doing this? Well, firstly, we can design a protein, Y, that has a very similar fold to X, the future protein we want to front-load. Thus, a protein with similar properties to X can be designed into the initial life forms. But what is preventing random mutations from basically destroying the sequence identity of Y, over time, such that the original fold/sequence identity of Y is lost? To counter this, Y can also be given a very important function so that its sequence identity will be well conserved. Thus, we can make this prediction from a front-loading perspective: proteins that are very important to eukaryotes, and specific to them, will share deep homology (either structurally or in sequence similarity) with prokaryotic proteins, and, importantly, these prokaryotic proteins will be more conserved in sequence identity than the average prokaryotic protein. Darwinian evolution predicts only the first part of that: it doesn’t predict the final clause (that the prokaryotic homologs will be more conserved than average). This is a testable prediction made exclusively by the front-loading hypothesis.
This prediction would be falsified if the prokaryotic homolog of an important, well-conserved eukaryotic protein was found to be no more conserved in sequence identity than the average prokaryotic protein. So, this prediction can be very easily falsified. Genomicus
In physics it's called symmetry breaking. Petrushka
eig: "how you suppose that's connected to..." var TGG = Tryptophan: the variable TGG representing the amino acid tryptophan. I just stuck on a non-global JavaScript var, not to make a C comparison, just a variable/protocol analogy. My open question to anyone is how a non-intelligent law/force determined which specific molecules will represent amino acids etc. In the same way in your example, you chose: int x=4. You had several alpha-numeric/case-sensitive characters to choose from. But you chose x. x has nothing to do physically or chemically with 4. The genetic code seems to be compiled in the same manner: representations/variables that are executing functions based only on the fact that they have been assigned to do so. It seems in a way like Maxwell's demon had done some tinkering. junkdnaforlife
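junkdnaforlife's variable/protocol analogy can be made concrete with a short Python sketch; the three-entry table is a toy fragment of the standard code, not a working translator:

    # The genetic code behaves like a lookup table: each three-letter key is
    # bound to an amino acid much as a variable name is bound to a value.
    # Nothing about the string "TGG" chemically resembles tryptophan, just as
    # nothing about the name "x" resembles the number 4.
    codon_to_aa = {
        "TGG": "Trp",   # the "var TGG = Tryptophan" of the comment above
        "ATG": "Met",   # also serves as the start signal
        "TAA": None,    # a stop codon: bound to "halt", not to an amino acid
    }

    def translate(dna):
        # Read a DNA string three letters at a time through the lookup table.
        peptide = []
        for i in range(0, len(dna) - 2, 3):
            aa = codon_to_aa.get(dna[i:i + 3], "???")  # "???": not in this toy table
            if aa is None:
                break
            peptide.append(aa)
        return peptide

    print(translate("ATGTGGTAA"))   # ['Met', 'Trp']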
@junkdnaforlife,
How would the protocol necessary for the origin of the genetic code get established by chance-necessity?
Dunno. As someone who just follows that area of research casually, but with interest, my answer is what it was before, a couple of replies up: RNA-world scenarios seem the most likely and promising hypotheses going, but there's not much to go on in terms of hard pathways. Progress is being made, and has picked up considerably in the last few years, but this is extraordinarily difficult in terms of forensics. I can say that I know one hypothesis that isn't a serious contender -- the "random shuffle" one-time luck scenario where 784 unlikely things have to happen simultaneously, or whatever tornado-in-a-junkyard scenario IDers suppose is the alternative to God doing it. Like the rest of biology, the educated bets are on a series of incremental steps that do not stretch any terrific odds, and may be highly likely or even inevitable, given the physical environment. Making headway on matching plausible environments for that time frame that are conducive to these chemical pathways will be the main trick. But it's science, no magic or impossible odds invoked. Just hard work in uncovering the most likely and empirically supported pathways. I'm not at all clear how you suppose that's connected to "var TGG = Tryptophan", which seems vaguely like the C code I was offering, above. How do your program statements connect in here? eigenstate
To all evos who are complaining/objecting about this (front-loading) hypothesis- Please produce a testable hypothesis for your position- complete with predictions and falsifications. Until you do that your "complaints" are nothing but a child's whine. IOW show us how it is done with the reigning paradigm. Or continue to act like little babies- your choice. Joe
@Genomicus, If you want to pick out one point that is central to my objections, please look at the question of falsification. For your predictions (and if you start with just one, that's fine), what are the conditions for falsification? That's a good place to focus, I think, because it will naturally work toward excluding tautologies in your predictions -- a definition can't be falsified. For example: "GFLT predicts the nanodesigner would use cytosine as a base". That suggests a very simple scenario for falsification: cytosine is NOT found as a base for DNA in nature. So far so good. Since we understand from our observations that cytosine IS a base, all we have left is to show that cytosine as a choice is a NECESSARY product of your hypothesis. "The nanodesigner had to choose cytosine due to...", etc. Get to that point, and you're golden. But falsification I'd say is the single best question here to look at, at this point. eigenstate
eig: "The "int x" declares storage space for a 16-bit number, and the "=4" tells the compiler to generate the bit pattern for 4 ("0000000000000100") and copy it to the location for the variable "x" we just allocated." int x = 4 var a = 10 var TGG = Tryptophan c = 299792458m/s You can swap your int x = 4 with any one of the variables and the story is the same. How would the protocol necessary for the origin of the genetic code get established by chance-necessity? junkdnaforlife
eigenstate: Thanks for your comments. You've posted quite a few rather lengthy comments, and I haven't got all the time in the world (I'll see if I can work on a response though); so, for the moment, is there any one comment of yours that you'd especially like me to address? Genomicus
Eigenstate, Well put. I agree wholeheartedly. champignon
@champignon,
A slight quibble: It’s not necessarily a problem if a theory makes a prediction that is already known to be true (aka a ‘postdiction’). General relativity ‘postdicted’ the precession of Mercury’s orbit, yet that was one of its most spectacular confirmations.
Yes, I think I used the precession of Mercury's perihelion as an example with Genomicus here on this thread. The "arrow" matters -- you've used "postdiction", which may be a more effective term, I grant. With Mercury's orbit, the chain wasn't "observe precession => adjust GR accordingly". GR entailed that prediction, as you know. The complaint from me about "already known" arises when our observations DRIVE changes to the hypothesis. Changing the hypothesis is OK, too, but that resets the process; you have another hypothesis, and the original one has been superseded.
Imagine a scenario in which GFLT necessarily entailed the use of cytosine as a base but evolutionary theory somehow predicted a different base. In that case, the confirmed prediction would count in GFLT’s favor, even though we already know that cytosine is a DNA base.
Absolutely. And right or wrong, that would be a point of substance in ID's favor. There, we would have a contest, something we could (possibly) resolve empirically. Postdictively, even. In your scenario, though, the "arrow" goes: GFLT=>cytosine entailed => cytosine observed. Given what Genomicus is proposing, the arrow goes the wrong way: cytosine observed => GFLT.
That said, I agree with the gist of your comment. Genomicus needs to come up with distinct falsifiable predictions, and show that they are confirmed, before GFLT will be taken seriously.
Roger. I take it seriously now, or wouldn't bother to comment (and am sure in that sense, you do, too). GFLT needs to be "at risk" in its predictions, to carry scientific weight. That risk can be 'retroactively applied', like the precession of Mercury's perihelion under GR, but the risk has to be derived from the hypothesis' internal resources, not determined by existing facts and conditions. Nothing in GR is "patched" to account for the precession; it flows necessarily, unavoidably from the model. Even if we were previously aware of Mercury's orbit dynamics (and we were), GR necessarily entails that outcome, without having to take counsel of that particular observation. eigenstate
Hi eigenstate,
For example, if you say: “GFLT predicts the use of cytosine as a base”, we immediately are arrested by the prospects of falsification: we already KNOW cytosine is a DNA base. Your prediction cannot be false!
A slight quibble: It's not necessarily a problem if a theory makes a prediction that is already known to be true (aka a 'postdiction'). General relativity 'postdicted' the precession of Mercury's orbit, yet that was one of its most spectacular confirmations. Imagine a scenario in which GFLT necessarily entailed the use of cytosine as a base but evolutionary theory somehow predicted a different base. In that case, the confirmed prediction would count in GFLT's favor, even though we already know that cytosine is a DNA base. That said, I agree with the gist of your comment. Genomicus needs to come up with distinct falsifiable predictions, and show that they are confirmed, before GFLT will be taken seriously. champignon
Hey Genomicus, It's great if you respond to the other things I've said in this thread, but here's a question that kind of eclipses all that, and goes right to the heart of the problem here (as I see it): For your predictions, what are the conditions for falsification? That is, for each of your predictions, what should we NECESSARILY expect to find in our observations if your hypothesis is NOT correct? I think if you can provide the criterion for falsification, you will have gotten somewhere with this. I think it will also illustrate effectively the problems you have in your predictions right now. For example, if you say: "GFLT predicts the use of cytosine as a base", we immediately are arrested by the prospects of falsification: we already KNOW cytosine is a DNA base. Your prediction cannot be false! That is maybe an effective way to put some substance in your predictions. If your predictions can't be falsified, then you have to start over. Evolution, for example, predicts that you will find the most complex fossils and body plans in later strata, and more rudimentary forms in the earlier strata. That is what we have found, so far, but we are always discovering new fossils. If we were to start finding lots of rabbit fossils in the pre-Cambrian layers, this prediction of evolution would be falsified. With your "aliens chose cytosine", even though we know cytosine is a base, finding "alternative DNA" that used some other choice would not falsify your prediction, because that could just be another choice made by these nanotechnologists for different purposes (we know not what). Anyway, I suggest that's the shortest route out of the thicket here. What do you see as the conditions we must encounter to falsify each of the predictions you offer? eigenstate
@Genomicus,
Thus, it would make sense for us to propose a possible rational reason for why a nanotechnologist would choose cytosine as a base in the DNA molecule.
OK, perhaps that would be one of the working choices. Given what we know now, that seems to be a plausible choice, for sure. But "making sense as a choice" doesn't get you to a prediction, Genomicus. You can suppose such a thing to be the case, but you can't produce a scientific prediction from that. Try thinking about it this way: Why should we not suppose the nanotechnologist designers made a DIFFERENT choice than cytosine? If that's true, then the evidence we have in front of us would disconfirm your hypothesis! "Wait, but that's not what we see, so it must have been cytosine!", perhaps you'd like to retort. Well, that's the error, right there. You're working backwards, if so. Predictions are "forward". Looking at cytosine in biology and supposing that was the designer's choice is working "backward". What you are really saying here is THIS: 1. Cytosine is one of the four bases for DNA. 2. DNA was designed by nanotechnologists. 3. ERGO, nanotechnologists chose cytosine as one of the four bases for DNA. Can you see how you've got your "arrow" pointing the wrong direction? eigenstate
@Genomicus,
The front-loading hypothesis is a direct extension of the directed panspermia hypothesis proposed by Crick and Orgel in 1973. Read their paper, and perhaps you’ll have a better understanding of what’s going on here.
OK, thanks, am familiar. But there's nothing in that work to support the predictions you are making here, or predictions of this *type*. You must provide a model, a working framework of some kind for the front loading in order to produce predictions that necessarily proceed from it, and are novel.
Directed panspermia proposes that the earth was intentionally seeded with life forms from a distant location. It does not propose that the designers were present on earth constantly trying new things out. No, this was a one-shot deal (both under the directed panspermia and front-loading hypotheses).
I see. Where is this indicated or required? I agree that if you hypothesize that said aliens only visited Earth once and made precisely one "drop", then by definition, we should necessarily expect the effects of a single "drop" -- no precursors, for example. But that's an arbitrary story, no? If we mutate GFLT's narrative (and it's a narrative, not a model!) and produce Eigenstate's Front Loading Theory (EFLT), where I take your narrative and just suppose our alien overlords were here for many thousands of years tinkering, and produced scads of precursors, now EFLT predicts precursors and prototypes, and GFLT doesn't. But NEITHER proceeds from a model! The precursors or no precursors are just restatements of the narrative. Work it the other way to see what I'm getting at, perhaps. Begin with GFLT, and suppose we are subsequently confronted with all sorts of precursors and prototypes in the evidential record. No problem, tweak GFLT into EFLT, and boom! Now you have the predictions you needed to support precursors. This is the "oracle" problem, where the hypothesis is not constrained, and is not a concrete model, and so becomes an oracle that retroactively predicts anything at all we should find. Did we find lots of monophosphate-rich precursor components? Ahh, well, we can just adapt our "front loading hypothesis" to accommodate an alien research program that provides for that! QED! Right? This is where the semantics of 'hypothesis' become important. It's not just a casual conjecture, a narrative we might entertain. It provides some warrant for its model features, and then produces predictions that flow from it necessarily. We can change our hypothesis, but if the hypothesis is plastic enough to accommodate any evidence we might encounter retrospectively, we've not got anything to work with epistemically. You are just dealing in just-so stories and tautologies. "The aliens set up front loading so that we would find [insert your favorite observation from reality here]" doesn't ground a scientific prediction. By definition, it must have been "so that we would find...."
Further, the engineers were rational agents, thus the initial life forms on earth contained a genetic code virtually identical to the universal optimal code on earth today. No room for precursor genetic codes.
That doesn't follow AT ALL. Why should we think that a "rational agent" would provide a genetic code that is "optimal" (never mind the problem of what 'optimal' means -- I've asked you elsewhere)? I don't see any problem with identifying that as a possibility. But it's not an entailment. You don't have what you must have to produce a scientific prediction here, Genomicus. It can't be 'this is one possibility'. It may be a possibility, a plausible scenario. But if it might have been any number of other ways, under the same set of assumptions you are making, then the witness of the evidence can't help you with your hypothesis. It can't be confirmed or disconfirmed by the evidence, because the hypothesis doesn't dictate what must necessarily be found, if the hypothesis is true. eigenstate
@junkdnaforlife,
eig: What is your current hypothesis for the origin of the genetic code?
I don't have a hypothesis to propose beyond the ideas current with researchers in abiogenesis -- RNA world as a precursor context, etc.
Why are the elements represented by the values, a,t,g,c selected from a hypothetical matrix and assigned values, i.e. functional strings in the genetic code, but the vast majority of combinations are functionless strings?
Selection, of course. Functional configurations persist and reproduce better than non-functional configurations. So where you have some "bootstrapping configuration" that replicates, you have function at LEAST insofar as replication is enabled. If you just think about configurations that maintain and support replication, you can see that you will necessarily, over time, have organisms around that feature replication functionality. That's a truism. The organisms that DON'T have replicative functionality die out, so they are not around to consider!
What is the difference between you assigning a specific value to a variable in an algorithm, and the specific assignment of functional sequence strings in the genetic code?
In C, for example, I might write a statement like this: int x = 4; The "int x" declares storage space for a 16-bit number, and the "=4" tells the compiler to generate the bit pattern for 4 ("0000000000000100") and copy it to the location for the variable "x" we just allocated. I can get you that far, but don't know what you mean by "specific assignment of functional sequence strings in the genetic code". Give me an example of such an assignment, and I'll see what that looks like against my "x=4" in a computing environment. What is a "specific" assignment as opposed to a "non-specific" assignment? eigenstate
@Genomicus#8.1,
By “quite complex” I mean more complex than would be assumed under a non-teleological framework.
I don't think that helps. How complex would LUCA be assumed to be under a non-teleological framework? The only non-teleological framework I'm familiar with that IS a framework (model) would be evolutionary theory, and I don't know what the complexity of LUCA would be in that case, or even what you mean by complexity for any LUCA. Are you talking about complexity in terms of K-C theory against its gene sequences? You don't need to use LUCA for your answer, pick a bacteria or some such -- something -- and show me how you would calculate complexity, and that would be a great start. More problematic though, is that you say this in your prediction:
Biological complexity. Front-loading predicts that the last universal common ancestor (LUCA) was quite complex, complete with genes necessary for the origin and development of metazoan life forms.
Evolutionary theory produces the same tautology (it's not a prediction, yet, so far as I can see). Per evolution, LUCA was, BY DEFINITION, complete with all the genes necessary for the origin and development of metazoan life forms. Similarly, Genomicus-Front-Loading-Theory (GFLT) says that LUCA, BY DEFINITION, was complete with the genes necessary for the origin and development of metazoan life forms. On both hypotheses, here we are, after all! I know I'm having trouble getting through to you on this, but maybe this response makes headway regarding tautologies? You've not differentiated your GFLT here from evolutionary theory -- both suppose that LUCA had all the right stuff for the development of the diverse forms that came later. Evolution uses a different (I think, although I'm still not sure, given what you've said) mechanism than GFLT, relying on undirected and stochastic inputs for variations that get filtered and accumulated (or not) based on natural selection, but in both cases, LUCA was complete with everything it needed for the future. This must be true, right? Else, LUCA wouldn't be LUCA! The term LUCA implies the very thing you suppose you are predicting, in the definition of the term. Back to complexity, if you can quantify the complexity entailed by GFLT for LUCA, maybe we're getting somewhere. It's perhaps kosher in principle, then, as a prediction -- if you say LUCA had complexity greater than X (where complexity is defined in concrete terms) -- but even if that is epistemically valid as a prediction, it seems practically impossible to test your prediction. We have no LUCA to test, and can't expect to. If we are able to "extrapolate back", there's a problem as that extrapolation would rely on rules for extrapolation that nullify your prediction; you have to use the theory to do the extrapolation! Nevertheless, if you can give me something on calculating the complexity of LUCA, or any organism or genome, that's somewhere to start. On the entailment thing, this is a problem. Maybe I can go find some links or other books or resources to point you to that will succeed where I've failed here, but clearly, the nature of "entailment" generally, and the use of entailment for scientific predictions that proceed from your models specifically, has escaped you. If we can get over that hurdle, then we're golden, at least for the next step in looking at this. eigenstate
Oh what the heck, I'll give you one more clue; Non-Local Quantum Entanglement In Photosynthesis http://vimeo.com/30235178 bornagain77
Why is “quite complex” entailed by your hypothesis? What qualifies as “quite complex” and how do you measure complexity?
By "quite complex" I mean more complex than would be assumed under a non-teleological framework. Why is this entailed in the hypothesis? Precisely because front-loading requires that the initial life forms contained advanced machinery etc. to terraform the earth and front-load future states. Genomicus
Another little clue for you champ; The ATP Synthase Enzyme – exquisite motor necessary for first life – video http://www.youtube.com/watch?v=W3KxU63gcF4 bornagain77
And Champ, so in your simplistic view of reality, as long as you pour energy into an open system then everything is hunky dory in your book as far as the generation of functional prescriptive information??? Perhaps you should think just a little more carefully about what you are actually claiming! Evolution Vs. Thermodynamics – Open System Refutation – Thomas Kindell – video http://www.metacafe.com/watch/4143014 Please give me the exact proportional equivalence to the amount of functional information I can expect in an open system from the amount of energy I pour into it! :) I will give you a clue, the more raw energy you pour into a system the more you will destroy the functional information in that open system! bornagain77
If this designer/front-loader were producing prototypes, tests (as I would if I were going to do some “virtual engineering” in a software context simulating biology), that would leave prototypes and precursors behind, no?
The front-loading hypothesis is a direct extension of the directed panspermia hypothesis proposed by Crick and Orgel in 1973. Read their paper, and perhaps you'll have a better understanding of what's going on here. Directed panspermia proposes that the earth was intentionally seeded with life forms from a distant location. It does not propose that the designers were present on earth constantly trying new things out. No, this was a one-shot deal (both under the directed panspermia and front-loading hypotheses). Further, the engineers were rational agents, thus the initial life forms on earth contained a genetic code virtually identical to the universal optimal code on earth today. No room for precursor genetic codes. Genomicus
The fail point here in this item is “so why would a front-loader choose cytosine as a base in DNA?”. It’s not sufficient to offer us *a* reason why you think cytosine would be chosen (and this is particularly devastating if you are offering this putative prediction in the context of an “intelligent design” explanation, an explanation with an unknown, inscrutable, mysterious designer). The choice must follow NECESSARILY from the hypothesis.
Well, actually, unlike many other ID concepts, the front-loading hypothesis does posit that the designer(s) of the first genomes on earth were advanced nanotechnologists who were rational agents. Thus, it would make sense for us to propose a possible rational reason for why a nanotechnologist would choose cytosine as a base in the DNA molecule. Given that C --> T transitions lead to an increase in hydrophobicity, this is a clue that a designer chose this base precisely because it is prone to deamination. So, we offer this tentative prediction from a front-loading perspective. If it was indeed discovered that cytosine deamination has played a key role in the course of evolution, then this would be a nice chunk of evidence for the front-loading hypothesis. Genomicus
Just wanted to thank you for these posts, Genomicus. I haven't read them all thoroughly yet, but they are bookmarked. Well done :) Elizabeth Liddle
Hi Collin,
I would like to hear your explanation. What I was told in college was that evolution is not a violation of the 2nd law because earth is constantly bathed in high ordered energy. So earth has the resources to slowly evolve life utilizing that high ordered energy.
That's basically right. Very loosely speaking, the second law says that in an isolated system, disorder (aka entropy) must either stay the same or increase. However, the earth is not an isolated system. It receives order in the form of sunlight, and this order gets used to grow plants, create fossil fuels, drive the weather, and so on. It also drives the process of evolution. The second law allows the earth's disorder to decrease, but only if the disorder of earth's surroundings increases by an equal or greater amount. The sun is part of the earth's surroundings. In the process of creating sunlight by hydrogen fusion, the sun increases its disorder (entropy) by a huge amount that is more than enough to offset the decrease in disorder (entropy) on earth due to received sunlight. Thus the second law is not violated by the growing of plants, the formation of storms, or evolution. (Conversely, if evolution did violate the second law, then so would the growth of plants. The second law would be violated 24 hours a day, in which case it wouldn't be a law at all! So if you hear someone like BA77 claiming that evolution violates the second law, ask him if he believes that plants do also.)
What causes me to doubt is that many celestial objects are bathed in high ordered energy but don’t seem to exhibit anything near as complex and interconnected as life. Indeed, ultraviolet radiation seems to destroy life rather than foster it despite ultraviolet radiation being high ordered energy. Why don’t we see life or something just as complex on mercury, venus mars or elsewhere (so far)?
The key point is that the second law doesn't guarantee that interesting things will happen if sunlight enters a system. It allows interesting things to happen, but it doesn't require them. If you put a slab of granite out in the sun, it will warm up, but not much else will happen. With regard to evolution, the one and only thing the second law tells you is that you will not see evolution on earth unless there is a compensating increase in the disorder (entropy) of earth's surroundings. That's it. It doesn't guarantee that you will see evolution if those conditions are met, but it doesn't rule out evolution under those circumstances either. This is where Granville Sewell gets confused. He writes things like this:
It is commonly argued that the spectacular increase in order which has occurred on Earth does not violate the second law of thermodynamics because the Earth is an open system...
That part is true. He continues:
...and anything can happen in an open system as long as the entropy increases outside the system compensate the entropy decreases inside the system.
That part is ridiculous. Scientists don't claim that anything at all can happen in an open system as long as those conditions are met. They know that's not true, and they would never make such an idiotic claim. They only claim that the second law itself doesn't forbid things from happening, as long as those conditions are met. There may very well be other reasons why something can't happen, but as long as those conditions are met, the second law itself doesn't forbid it. The second law only forbids violations of the second law, and nothing else. Granville continues:
Thus, unless we are willing to argue that the influx of solar energy into the Earth makes the appearance of spaceships, computers and the Internet not extremely improbable, we have to conclude that the second law has in fact been violated here.
Granville has forgotten the key fact about the second law: The second law forbids violations of the second law, and nothing else. The appearance of spaceships, computers and the Internet on earth does not violate the second law as long as the entropy of earth's surroundings increases by a sufficient amount. Granville clearly doubts that evolution spontaneously happened on earth. He believes that spaceships, computers and the Internet would not have appeared if God hadn't intervened to bring humans into existence. But these doubts have nothing to do with the second law, because the second law by itself doesn't forbid these things from happening. Granville is trying to wrap his personal incredulity in the mantle of the second law, but it doesn't work, because the second law only forbids violations of the second law, and nothing else. champignon
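champignon's entropy bookkeeping can be checked on the back of an envelope. The Python sketch below uses round textbook values and the simple P/T estimate of radiation entropy (ignoring the 4/3 factor for blackbody radiation):

    P       = 1.2e17   # W, solar power absorbed by Earth (approximate)
    T_sun   = 5778.0   # K, effective temperature of incoming sunlight
    T_earth = 255.0    # K, Earth's effective emission temperature

    entropy_in  = P / T_sun     # W/K imported with sunlight
    entropy_out = P / T_earth   # W/K exported as thermal radiation

    print(f"entropy in : {entropy_in:.2e} W/K")
    print(f"entropy out: {entropy_out:.2e} W/K")
    print(f"net export : {entropy_out - entropy_in:.2e} W/K  (> 0: local decreases allowed)")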
eig: What is your current hypothesis for the origin of the genetic code? Why are the elements represented by the values, a,t,g,c selected from a hypothetical matrix and assigned values, i.e. functional strings in the genetic code, but the vast majority of combinations are functionless strings? What is the difference between you assigning a specific value to a variable in an algorithm, and the specific assignment of functional sequence strings in the genetic code? junkdnaforlife
ba77 from 2.1.1.1.25:
And thus, once again, since empirical evidence has final say in the scientific method, then it is on the one who contests my claim to demonstrate, empirically, that it is otherwise!
Yep, empiricism is everything. And the 2nd is just math. Empirically, thus far, it has held; as long as you ignore issues of decay products in particle physics (outside the error bars) and the postulates about virtual photons (conjuration for the math). But anywhere else you're not going to lose money betting on the side of the 2nd unless someone rigs a bar-bet against you. But the 2nd does not disbar the standard narrative of Darwinism, by its own math. There are other reasons to object to things, but the 2nd isn't one of them. Specifically, any prophecy about the past that acknowledges that the sun spews things this-a-way cannot be in violation. At least not prima facie.
i.e. more to the point, why is the assumption that randomness can possibly generate functional information, though no one has ever seen this (Abel) given precedence over the fact that randomness, as far as the evidence can tell us, consistently destroys functional information???
Religious mythology. That's the short answer. The statement that there cannot be islands of apparent violation is religious mythology also. Both of which have nothing to do with inspecting and characterizing a chaotic system. Hint: You can't do it with exponential functions and fractional children unless there's a lot of sawmill accidents involved. Maus
@Genomicus
3. Biological complexity. Front-loading predicts that the last universal common ancestor (LUCA) was quite complex, complete with genes necessary for the origin and development of metazoan life forms.
Why is "quite complex" entailed by your hypothesis? What qualifies as "quite complex" and how do you measure complexity? I read back a bit, but didn't see anywhere you laid out your terms. If you have a link to where you have provided the semantics and measurements for the complexity you're using, I can just read that, thanks. As you've stated, I don't think this prediction will provide any possible lift for your hypothesis, no matter what the evidence is. This is yet another entailment fail (with a kicker of undefined measures). It's good to point to a solid example by way of comparison. Einstein's GR proposal necessarily predicted the precession of the perihelion of Mercury. It could not have been otherwise, per Einstein's proposed model. That is the key linkage you are missing in all of your prediction proposals, so far, from what I've seen. GR also entails the observation of redshift in electromagnetic radiation traversing areas of gravitational distortion (e.g. stars, planets). It's not a "could be", but a "must be" given GR's model. eigenstate
2. The genetic code. The front-loading hypothesis proposes that the universal optimal genetic code was present at the dawn of life: in other words, we won’t find precursors of sub-optimal genetic codes, because the genetic code was optimal from the start. Further, the front-loading hypothesis predicts that all 20 amino acids would have been used in the first life forms, and that the transcription, translation, and proof-reading machinery would have all been present at the start of life on earth.
If this designer/front-loader were producing prototypes and tests (as I would if I were going to do some "virtual engineering" in a software context simulating biology), that would leave prototypes and precursors behind, no? If not, then your hypothesis asserts that the designer/front-loader made all the front loading happen in a "single pass". That way, you would have an entailed prediction -- no prototypes or precursors should be found by the "single pass" designer/front-loader. But that just pushes back your problem: what is it in your model that indicates "single pass" vs. "iterative tinkering"? Even so, this is better in the sense that you have (if you are explicit about "single pass" front-loading, or "always perfect front-loading", never mind why that would be for now) entailment. More pressing, though: what do you mean by "optimal", here? How would we test genetic codes to see if they are optimal? I can't get a handle on what it MEANS in this context, let alone how you would establish or measure that empirically. How do you do that? eigenstate
@Genomicus
Cytosine deamination. Of the three bases in DNA (adenine, guanine, and cytosine) that are prone to deamination, cytosine is the most likely to undergo deamination. This ultimately results in a C –> T transition. Cytosine deamination often causes severe genetic diseases in humans, so why would a front-loader choose cytosine as a base in DNA? It has been observed that C –> T transitions result in a pool of strongly hydrophobic amino acids, which leads to the following prediction from a front-loading perspective: a designer would have chosen cytosine because it would facilitate front-loading in that mutations could be channeled in the direction of increased hydrophobicity. This prediction would be confirmed if key protein sequences in metazoan life forms were the result of numerous C –> T transitions.

It's peculiar that you appear to understand concepts like "deamination", using them in context, sensibly, etc., and yet the concept of predictions produced by scientific hypotheses and models seems totally foreign to you. The fail point here in this item is "so why would a front-loader choose cytosine as a base in DNA?". It's not sufficient to offer us *a* reason why you think cytosine would be chosen (and this is particularly devastating if you are offering this putative prediction in the context of an "intelligent design" explanation, an explanation with an unknown, inscrutable, mysterious designer). The choice must follow NECESSARILY from the hypothesis. You are quite conspicuously working backwards from your conclusion. Coming up with a plausible choice -- and given an unspecified, unknown, potentially omniscient and omnipotent designer, ALL choices are plausible -- does not ground a prediction. First you lay out the hypothesis and the proposed mechanism, and then you deduce from that the NECESSARY implications that proceed from it. If you can affirm what is entailed from your model, you've got something! Sometimes those predictions are trivial or banal, and so don't carry much weight. Other times they just don't distinguish the hypothesis from other, competing hypotheses. But in this case, if you COULD establish that such a choice was ENTAILED from your proposed model, that would be quite substantial, indeed, I think. As it is, though, it's a miss.
eigenstate
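As background for the hydrophobicity claim in the quoted prediction, here is a minimal sketch against the standard genetic code, restricted to second-position C -> T transitions (the script is illustrative and comes from neither commenter):

# Second-position C->T transitions in the standard genetic code:
# a minimal illustration of the hydrophobicity shift described above.
families = {
    "ACx": ("Thr", "ATx", "Ile/Met"),
    "GCx": ("Ala", "GTx", "Val"),
    "TCx": ("Ser", "TTx", "Leu/Phe"),
    "CCx": ("Pro", "CTx", "Leu"),
}
for codon, (before, mutated, after) in families.items():
    print(f"{codon} ({before}) -> {mutated} ({after})")
# Every product here (Ile, Met, Val, Leu, Phe) is strongly hydrophobic,
# which is the pattern the quoted prediction leans on. Whether that
# pattern is ENTAILED by front-loading is, of course, eigenstate's point.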
vjtorley: Thanks for your comment, and I hope I can write up some more articles in the future. Also a huge thanks again to kairo for giving me this opportunity. Genomicus
Starbucks:
I think there is one key difference between your perspective on front-loading, and Mike Gene’s. That is, in my opinion, that Mike Gene doesn’t really concern himself with the question of “How could sophisticated molecular systems be front-loaded?” For him it’s not a matter of what can or can’t happen, but what did happen. As such I don’t think he allows ID type complexity arguments into his thinking on FLE, unless I’m mistaken.
I have a feeling you're misunderstanding my position. When I pose the question "how were sophisticated molecular systems front-loaded," I'm using that as a question we can ask once we have some pretty good evidence that a molecular system was indeed front-loaded, instead of purely the product of the blind watchmaker. Genomicus
That sounds like an impressive spec, but what with horizontal transfer, it becomes problematic defining what you mean by LUCA.
Please elaborate on why horizontal gene transfer makes defining LUCA a problem. Thanks.
The first metazoans would have most of the genes found in modern ones, since they were invented by microbes.
Yes, but this prediction involves going back to the very first life form, the ancestor of all living organisms. At the moment, it's quite difficult to determine just what its genome was like, so this is a prediction for future years. The FL model predicts that this common ancestor to all life forms was quite complex, complete with the universal optimal genetic code, etc.
So OOL research is relevant or not, assuming it eventually supports a bootstrap scenario?
OOL research is relevant to the discussion, of course. Not sure where you're going with this line of thought. Genomicus
Genomicus, Thanks very much for a very stimulating paper. I look forward to reading more papers from you in the future. Thanks again. vjtorley
KF, Genomicus, thank you for this fascinating post. It gets my imagination going. I think that this hypothesis is ripe for experimentation. Could it be shown that there were chicken genes in a dinosaur ready to be expressed? Or can the platypus be shown to be a library for future species? Collin
champignon, I would like to hear your explanation. What I was told in college was that evolution is not a violation of the 2nd law because earth is constantly bathed in highly ordered energy. So earth has the resources to slowly evolve life utilizing that highly ordered energy. What causes me to doubt is that many celestial objects are bathed in highly ordered energy but don't seem to exhibit anything near as complex and interconnected as life. Indeed, ultraviolet radiation seems to destroy life rather than foster it despite ultraviolet radiation being highly ordered energy. Why don't we see life or something just as complex on Mercury, Venus, Mars or elsewhere (so far)? Collin
Maus, as far as I can see, a demonstrated gain in functional, prescriptive information, not merely Shannon information, by neo-Darwinian processes would be, for all practical purposes of empirical science, the same thing as a violation of the second law:
“Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…” Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin] (the shared form of these equations is written out just after this comment)

“Bertalanffy (1968) called the relation between irreversible thermodynamics and information theory one of the most fundamental unsolved problems in biology.” Charles J. Smith – Biosystems, Vol.1, p259.

“Gain in entropy always means loss of information, and nothing more.” Gilbert Newton Lewis – preeminent Chemist of the first half of last century

Klimontovich’s S-theorem, an analogue of Boltzmann’s entropy for open systems, explains why the further an open system gets from the equilibrium, the less entropy becomes. So entropy-wise, in open systems there is nothing wrong about the Second Law. S-theorem demonstrates that spontaneous emergence of regular structures in a continuum is possible.,,, The hard bit though is emergence of cybernetic control (which is assumed by self-organisation theories and which has not been observed anywhere yet). In contrast to the assumptions, observations suggest that between Regularity and Cybernetic Systems there is a vast Cut which cannot be crossed spontaneously. In practice, it can be crossed by intelligent integration and guidance of systems through a sequence of states towards better utility. No observations exist that would warrant a guess that apart from intelligence it can be done by anything else. Eugene S – UD Blogger
You see maus, I consider not only equilibrium, but also I consider the entropy as defined by the randomness that may be inherent in a system,,,
Thermodynamics – 3.1 Entropy Excerpt: Entropy – A measure of the amount of randomness or disorder in a system. http://www.saskschools.ca/curr_content/chem30_05/1_energy/energy3_1.htm
,,, to be completely insufficient to generate functional information. As Eugene S pointed out, it is simply an assumption that 'S-theorem demonstrates that spontaneous emergence of regular structures in a continuum is possible'. An assumption that, as Eugene S pointed out again, has not one shred of empirical support that natural processes can 'spontaneously' generate cybernetic systems as such. Whereas I hold that the correspondence of the equations of Entropy and Information is so tight, and that functional, prescriptive information is so distinct from mere Shannon information, that indeed the more randomness a system has, the more propensity that random system (no matter how many isolated pockets are away from equilibrium in that system) will have to destroy the functional, prescriptive information within that system. Thus, though you may say in purely theoretical mathematical honesty, it is possible for neo-Darwinian evolution to occur without violating the second law, I may also equally hold against you, from a purely empirical standpoint, since indeed empirical evidence has the final word in the scientific method, that to find 'natural' material processes generating functional, prescriptive information would be, for all practical purposes, equivalent to breaking the second law. You may say that I am not being true to the math, yet I would equally hold that the math is not fully developed yet to the point of a rightful consideration and inclusion of functional, prescriptive information. And thus, once again, since empirical evidence has final say in the scientific method, then it is on the one who contests my claim to demonstrate, empirically, that it is otherwise! i.e. more to the point, why is the assumption that randomness can possibly generate functional information, though no one has ever seen this (Abel), given precedence over the fact that randomness, as far as the evidence can tell us, consistently destroys functional information??? Of note:
Blackholes - The neo-Darwinian ‘god of entropic randomness’ which can create all things (at least according to them) https://docs.google.com/document/d/1fxhJEGNeEQ_sn4ngQWmeBt1YuyOs8AQcUrzBRo7wISw/edit?hl=en_US
bornagain77
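For reference, the shared form of the equations that the Siegfried quote above appeals to is standard textbook material (stated here for clarity; which probability distribution may legitimately be plugged in is exactly what the two sides dispute):

H = -\sum_i p_i \log_2 p_i \quad \text{(Shannon entropy of a distribution } p_i\text{)}

S = -k_B \sum_i p_i \ln p_i \quad \text{(Gibbs entropy of microstate probabilities } p_i\text{)}

S = (k_B \ln 2)\, H \quad \text{when the same } p_i \text{ are used in both}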
ba77:
You state that second law principles are not violated by ‘work’, perhaps not violated by ‘work’, I never said work did violate it, but OOL and neo-Darwinian evolution is certainly a flagrant violation of the second law.
I've said it before and I'll say it again: Do not do this. There is no violation of the 2nd by having localized areas out of equilibrium. With any chaotic system involved it is not simply not a violation; it is an expectation. See: Weather forecasting. If it helps: Your error comes in when trying to apply 'work', I assume qua Carnot, to a system defined as the entire universe. The 2nd only requires that photons zinging off this way and that go happily off to nowhere on a very long trip at the end of the day. It doesn't require that they're disallowed from making bank shots for a while. Maus
Well Champ, apparently you are too smart to bother with empirical evidence. It must be nice to not ever have to demonstrate what you claim to be true is actually true:
"Is there a real connection between entropy in physics and the entropy of information? ....The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental..." Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin] “Bertalanffy (1968) called the relation between irreversible thermodynamics and information theory one of the most fundamental unsolved problems in biology.” Charles J. Smith – Biosystems, Vol.1, p259. "Gain in entropy always means loss of information, and nothing more." Gilbert Newton Lewis - preeminent Chemist of the first half of last century Klimontovich’s S-theorem, an analogue of Boltzmann’s entropy for open systems, explains why the further an open system gets from the equilibrium, the less entropy becomes. So entropy-wise, in open systems there is nothing wrong about the Second Law. S-theorem demonstrates that spontaneous emergence of regular structures in a continuum is possible.,,, The hard bit though is emergence of cybernetic control (which is assumed by self-organisation theories and which has not been observed anywhere yet). In contrast to the assumptions, observations suggest that between Regularity and Cybernetic Systems there is a vast Cut which cannot be crossed spontaneously. In practice, it can be crossed by intelligent integration and guidance of systems through a sequence of states towards better utility. No observations exist that would warrant a guess that apart from intelligence it can be done by anything else. Eugene S - UD Blogger
bornagain77
Sorry, BA, but I just don't think it's worth the effort to try to explain to you why evolution and the second law are perfectly compatible. Perhaps you could ask one of your fellow ID supporters for an explanation. But if someone else -- even Granville himself -- asks me to explain what's wrong with his arguments, I will, and you're welcome to listen in. champignon
Get back to me after you’ve read and understood the Thornton paper that inspired this comment.
I'm not sure you understand it yourself. (Did you think that you were the first to post it?) What you call the 'gain of function' is the addition of a protein to a component which adds no known functionality at all. Thornton himself expressed that it runs counter to the expectations of natural selection because it has no apparent benefit. This is about as exciting as a four-leaf clover. I don't know which is more astonishing - the degree of hyperbole or that anyone at all buys it. Apparently people go into a trance when they see the words "evolution" and "complexity." When describing the paper you used the exact words "net gain of function." And yet Thornton explicitly states that it demonstrates an increase in complexity "without the apparent evolution of novel functions." And you're asking me if I've read it? The problem isn't the paper. The sales job is disgraceful. It's insulting. ScottAndrews2
You don’t find it the least bit odd to propose that “the accumulation of simple, degenerative changes over long periods of times could have created many of the complex molecular machines present in organisms today?”
Get back to me after you've read and understood the Thornton paper that inspired this comment. Be sure you understand why Thornton calls it counterintuitive and why he proposes it anyway. I am aware that Behe has made a habit of mocking mutations as loss of function, so be prepared to defend the position that loss of function can't lead to a net gain of function. This has always been a key point in the ID quiver, so I expect to see a vigorous defense. Just as an aside, when I wrote my word evolving game, I noticed it got stuck on local hills of function, just as folks predicted. So I flip a coin every few generations and kill off the most fit individual. The net effect is the ability to navigate over valleys. It's just a toy, but it amuses me. I'm amused that a blind process that decreases function can in itself be functional. Petrushka
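A minimal sketch of the kind of toy Petrushka describes (the target, fitness function, and parameters are illustrative guesses, not his actual code; a real word game would score against a dictionary, which is what creates the local hills):

import random
import string

TARGET = "frontloading"  # illustrative target, not from the original game

def fitness(word):
    # Count positions matching the target: a smooth hill to climb.
    return sum(a == b for a, b in zip(word, TARGET))

def mutate(word):
    # Point-mutate one randomly chosen position.
    i = random.randrange(len(word))
    return word[:i] + random.choice(string.ascii_lowercase) + word[i + 1:]

def evolve(generations=3000, pop_size=20):
    pop = ["".join(random.choices(string.ascii_lowercase, k=len(TARGET)))
           for _ in range(pop_size)]
    for gen in range(generations):
        # Truncation selection: the fitter half reproduces with mutation.
        pop.sort(key=fitness, reverse=True)
        pop = pop[:pop_size // 2]
        pop += [mutate(random.choice(pop)) for _ in range(pop_size // 2)]
        # Petrushka's coin flip: every few generations, maybe kill off the
        # most fit individual, letting the population wander off a local
        # hill and cross fitness valleys.
        if gen % 5 == 0 and random.random() < 0.5:
            pop.sort(key=fitness, reverse=True)
            pop[0] = mutate(pop[-1])
    return max(pop, key=fitness)

print(evolve())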
Well Champ, perhaps you can humble yourself and condescend to humor ignorant ole me on the second law by actually demonstrating the generation of functional complexity/information over and above what was already present. How about passing the fitness test??? That should be easy enough for a process claimed to have built the undreamt complexity of all life on earth with no help from a Mind whatsoever???
Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video http://www.metacafe.com/watch/3995248

Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology
Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution.
http://www.evolutionnews.org/2010/03/thank_goodness_the_ncse_is_wro.html
Or how about falsifying these null hypotheses?:
Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors
Excerpt: Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,,
Testable hypotheses about FSC: What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses:
Null hypothesis #1: Stochastic ensembles of physical units cannot program algorithmic/cybernetic function.
Null hypothesis #2: Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function.
Null hypothesis #3: Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function.
Null hypothesis #4: Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time.
We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified.
http://www.tbiomed.com/content/2/1/29

Is Life Unique? David L. Abel - January 2012
Concluding Statement: The scientific method itself cannot be reduced to mass and energy. Neither can language, translation, coding and decoding, mathematics, logic theory, programming, symbol systems, the integration of circuits, computation, categorizations, results tabulation, the drawing and discussion of conclusions. The prevailing Kuhnian paradigm rut of philosophic physicalism is obstructing scientific progress, biology in particular. There is more to life than chemistry. All known life is cybernetic. Control is choice-contingent and formal, not physicodynamic.
http://www.mdpi.com/2075-1729/2/1/106/

"Nonphysical formalism not only describes, but preceded physicality and the Big Bang. Formalism prescribed, organized and continues to govern physicodynamics."
http://www.mdpi.com/2075-1729/2/1/106/ag

The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010
Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non-trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.”
http://www-qa.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html

The Law of Physicodynamic Incompleteness - David L. Abel - August 2011
Summary: “The Law of Physicodynamic Incompleteness” states that inanimate physicodynamics is completely inadequate to generate, or even explain, the mathematical nature of physical interactions (the purely formal laws of physics and chemistry). The Law further states that physicodynamic factors cannot cause formal processes and procedures leading to sophisticated function. Chance and necessity alone cannot steer, program or optimize algorithmic/computational success to provide desired non-trivial utility.
http://www.scitopics.com/The_Law_of_Physicodynamic_Incompleteness.html
Or how about proving the 'First Rule' is wrong?:
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.(that is a net 'fitness gain' within a 'stressed' environment i.e. remove the stress from the environment and the parent strain is always more 'fit') http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/
You see champ it doesn't matter what you claim is true in science, with big 25 cent words, it matters what you can actually demonstrate to be true in science!!! which reminds me of this demonstration in science that you apparently are too smart to pay attention to:
Falsification Of Neo-Darwinism by Quantum Entanglement/Information https://docs.google.com/document/d/1p8AQgqFqiRQwyaF8t1_CKTPQ9duN8FHU9-pV4oBDOVs/edit?hl=en_US
bornagain77
You don't find it the least bit odd to propose that "the accumulation of simple, degenerative changes over long periods of times could have created many of the complex molecular machines present in organisms today?" Don't get me wrong. It works. I can create a functional Lexus with 4/70 air conditioning if you give me a Lexus and a sledgehammer. I don't want to explain what's wrong with this. At this point I'm really wondering if you can see what's wrong with this without me telling you. From my standpoint this is no longer a debate. It's a curiosity. It's like something you pay $.25 to see even though you know it might not be real. ScottAndrews2
Well pet, you believe in degenerative processes, over long periods of time, that can build undreamt of complexity in cells? Perhaps if you leave your door open on your house for a year it will build itself into a castle! :) If you ever decide to come back to the real world, there are some notes in this video description for you to chew on:

The Digital Code of DNA and the Unimagined Complexity of a 'Simple' Bacteria - Rabbi Moshe Averick - video (notes on unimagined complexity of DNA and 'simple' life in video description) http://vimeo.com/35730736

bornagain77
BA77:
You claim, with a lot of rhetoric, that it [the 2nd law] can be violated. Fine, show me!
No, BA, I don't claim that the 2nd law can be violated. Slow down and think, man! You and Granville are the ones who say that evolution violates the 2nd law, not me. And I'll repeat my offer. If there is anyone out there besides you and Granville who actually believes that evolution violates the 2nd law, and if they post a comment indicating this, I'll explain why Granville is wrong. Otherwise, it's not worth the effort. champignon
Moreover, the harder they are pressed the more convoluted they lie!
Have you ever noticed that when you are angry you engage in name calling? Petrushka
It’ll be like Cope and Marsh? More like Laurel and Hardy!!! :) bornagain77
(I’m not commenting on it. I’m just quoting it.)
No need to comment when one instance has been demonstrated and every young gun in the west will be trying to top it. It'll be like Cope and Marsh. Petrushka
Well Pet, I have no curriculum vitae, in fact I am nothing and nobody in particular except a sinner saved by the grace of God. I do have just a few years of watching neo-Darwinists lie through their teeth on these blogs and never prove anything that they claim. Moreover, the harder they are pressed the more convoluted they lie! Perhaps it doesn't bother you that supposed scientists could be so deceptive in what they tell the public, but it bugs the crap out of me that this should be so!

Intelligent Design - The Anthropic Hypothesis http://lettherebelight-77.blogspot.com/2009/10/intelligent-design-anthropic-hypothesis_19.html

bornagain77
Which do you think will happen first: the ID movement produces a theory of design, or someone like Szostak figures out how it can bootstrap?
Szostak is trying to engineer life. So heads I win, tails you lose. ScottAndrews2
,,,All I can say is that if you believe that, You guys are insane!
Have you ever noticed,,,,,,,, that when you get angry;;;; you tend!!!!! to insert superfluous,,,, punctuation marks??????????????????? Personally, I just tend to overlook spelling mistakes. To each his own. Petrushka
Thornton proposes that the accumulation of simple, degenerative changes over long periods of times could have created many of the complex molecular machines present in organisms today.
(I'm not commenting on it. I'm just quoting it.) ScottAndrews2
In contrast to the assumptions, observations suggest that between Regularity and Cybernetic Systems there is a vast Cut which cannot be crossed spontaneously.
Nor by design, in the absence of a theory of design. So I await a theory of how the designer did it, so we can see which process best fits the data. Which do you think will happen first: the ID movement produces a theory of design, or someone like Szostak figures out how it can bootstrap? Petrushka
This is cutting edge research???
Well yes. Perhaps you'll grace us with your curriculum vitae, so we can compare. Petrushka
pet you quote this; 'Thornton proposes that the accumulation of simple, degenerative changes over long periods of times could have created many of the complex molecular machines present in organisms today.' ,,,All I can say is that if you believe that, You guys are insane! bornagain77
look in the mirror champ! You are the one claiming that processes which overwhelmingly and relentlessly degrade can build complexity the likes of which man has never dreamed of in his most advanced machines. Moreover, evolution is the only 'science' that is at complete variance with this law. You claim, with a lot of rhetoric, that it can be violated. Fine, show me! Produce a molecular machine by neo-Darwinian processes and I will believe you!

“There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system only a variety of wishful speculations. It is remarkable that Darwinism is accepted as a satisfactory explanation of such a vast subject.” James Shapiro – Molecular Biologist

Perhaps you have an example of one of these following molecular machines being ‘self designed’ by a cell?

Bacterial Flagellum – A Sheer Wonder Of Intelligent Design – video http://www.metacafe.com/watch/3994630
The ATP Synthase Enzyme – exquisite motor necessary for first life – video http://www.youtube.com/watch?v=W3KxU63gcF4
Powering the Cell: Mitochondria – video http://www.youtube.com/watch?v=RrS2uROUjK4
Molecular Machine – Nuclear Pore Complex – Stephen C. Meyer – video http://www.metacafe.com/watch/4261990
Programming of Life – Protein Synthesis – video http://www.youtube.com/user/Pr.....5Z3afBdxB0
Kinesin Linear Motor – Video http://www.youtube.com/watch?v=kOeJwQ0OXc4
DNA – Replication, Wrapping & Mitosis http://vimeo.com/33882804

or if you can’t find an example of ‘self design’ for one of those molecular machines there are several more here that you can look for examples for:

The following article has a list of 40 (yes, 40) irreducibly complex molecular machines in the cell: Molecular Machines in the Cell - http://www.discovery.org/a/14791

and after you get done producing any evidence whatsoever that cells can ‘self design’ any molecular machine from scratch, then you can work on refuting this falsification of neo-Darwinism:

Falsification Of Neo-Darwinism by ‘non-local’ Quantum Entanglement/Information https://docs.google.com/document/d/1p8AQgqFqiRQwyaF8t1_CKTPQ9duN8FHU9-pV4oBDOVs/edit?hl=en_US

bornagain77
One thought has struck me - you write "Front-loading predicts that the last universal common ancestor (LUCA) was quite complex, complete with genes necessary for the origin and development of metazoan life forms.", which made me wonder about complex non-metazoans (e.g. plants). Did the LUCA have all the necessary genes for them too? But then it struck me that there doesn't seem to be any necessity for a single LUCA - whoever or whatever seeded the earth might have seeded it with more than one organism, so there doesn't have to be a LUCA. This raises some questions about why the genetic code is so similar across organisms, but if the seeding was done by a single entity or group, this might not be so unreasonable. I do wonder - does the front-loading hypothesis have anything to say about variation in the genetic code? Heinrich
More from Thornton: http://www.eurekalert.org/pub_releases/2012-01/uocm-eoc010512.php
"The mechanisms for this increase in complexity are incredibly simple, common occurrences," Thornton said. "Gene duplications happen frequently in cells, and it's easy for errors in copying to DNA to knock out a protein's ability to interact with certain partners. It's not as if evolution needed to happen upon some special combination of 100 mutations that created some complicated new function." Thornton proposes that the accumulation of simple, degenerative changes over long periods of times could have created many of the complex molecular machines present in organisms today. Such a mechanism argues against the intelligent design concept of "irreducible complexity," the claim that molecular machines are too complicated to have formed stepwise through evolution. "I expect that when more studies like this are done, a similar dynamic will be observed for the evolution of many molecular complexes," Thornton said.
Petrushka
Think, BA. champignon
champignon, You got a violation of the second law to show??? Well shoot man, if you actually have a violation of the second law why don't you build a perpetual motion machine??? LOL bornagain77
Petrushka, You are right. Klimontovich's S-theorem, an analogue of Boltzmann's entropy for open systems, explains why the further an open system gets from the equilibrium, the less entropy becomes. So entropy-wise, in open systems there is nothing wrong about the Second Law. S-theorem demonstrates that spontaneous emergence of regular structures in a continuum is possible. The hard bit though is emergence of cybernetic control (which is assumed by self-organisation theories and which has not been observed anywhere yet). In contrast to the assumptions, observations suggest that between Regularity and Cybernetic Systems there is a vast Cut which cannot be crossed spontaneously. In practice, it can be crossed by intelligent integration and guidance of systems through a sequence of states towards better utility. No observations exist that would warrant a guess that apart from intelligence it can be done by anything else. Eugene S
This is cutting edge research??? LOL you have got to be kidding me!!! They did not even empirically demonstrate anything, they merely conjectured a semi-plausible route for a very minor adjustment of an already existing system!!! If this is the 'best cutting edge research' thus far from neo-Darwinists, you guys are hopelessly lost in a fantasy world and are definitely not practicing science with any sort of rigor or integrity that I can see!!! bornagain77
Whoops, I messed up the formatting. Perhaps I can get a Thornton quote right.
"It's counterintuitive but simple: complexity increased because protein functions were lost, not gained," Thornton said. "Just as in society, complexity increases when individuals and institutions forget how to be generalists and come to depend on specialists with increasingly narrow capacities."
And having demonstrated that such an evolutionary sequence of mutations has taken place, it would be interesting to recall these quotes:

"Therefore to the same natural effects we must, as far as possible, assign the same causes."

and

"The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever."
Petrushka
BA77:
...OOL and neo-Darwinian evolution is certainly a flagrant violation of the second law.
Besides BA77 and Granville Sewell, is there anyone out there who actually believes this? I'm trying to gauge whether it's worth spending the time to post a refutation. champignon
It seems to have happened a total of one (that is, 1) time in the billion years since the divergence of fungi from other eukaryotes.

So there is at least one chain requiring a duplication and two specific mutations, and the complaint is that it is only known to have happened once. Considering that Behe would have proclaimed it impossible to have happened at all, that's quite a few times. I believe Behe's stock response to any demonstration that a three step bit of evolution has happened is, "that's cool, now do it again." I'm not sure what you mean to imply by the complete package being easy. What the paper says is, "These losses were complementary, so both copies became obligate components with restricted spatial roles in the complex." This is cutting edge research. I'm sure as the technology becomes available there will be more scenarios outlined.
Petrushka
Front-loading predicts that the last universal common ancestor (LUCA) was quite complex, complete with genes necessary for the origin and development of metazoan life forms.
I'll repeat the question I asked in part (a) - how can you test this "testable prediction" without resort to a time machine? Heinrich
well pet, you accuse me of contorted reasoning when the detrimental mutation rate is shown to be so exceedingly high??? You don't even contest the fact man! You just sluff it off??? I guess if my reasoning is contorted that must make you stark raving mad! Go figure!!!

,,,In reply to a personal e-mail from myself, Dr. Cano commented on the 'Fitness Test' I had asked him about. Dr. Cano stated: "We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative "ancient" B. sphaericus isolate was capable of utilizing a broader scope of substrates. Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate." - Fitness test which compared ancient bacteria to its modern day descendants, RJ Cano and MK Borucki

Thus, the most solid evidence available, from the most ancient DNA scientists are able to find, does not support evolution happening at the molecular level in bacteria. In fact, according to the fitness test of Dr. Cano, the change witnessed in bacteria conforms to the exact opposite, Genetic Entropy; a loss of functional information/complexity, since fewer substrates and fatty acids are utilized by the modern strains. Considering the intricate level of protein machinery it takes to utilize individual molecules within a substrate, we are talking an impressive loss of protein complexity, and thus loss of functional information, from the ancient amber-sealed bacteria.

Here is a revisit to the video of the 'Fitness Test' that evolutionary processes have NEVER passed, as a demonstration of the generation of functional complexity/information above what was already present in a parent species of bacteria:

Is Antibiotic Resistance evidence for evolution? - 'Fitness Test' - video http://www.metacafe.com/watch/3995248

bornagain77
you then state 'Current laboratory experiments — Lenski, Thornton — support a functional space that can be connected.' And the real world says:

Mutations: when benefits level off - June 2011 - (Lenski's e-coli after 50,000 generations)
Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
http://www2.cnrs.fr/en/1867.htm?theme1=7

More from Lenski's Lab, Still Spinning Furiously - Behe - January 2012
Excerpt: So at the end of the day there was left the mutated bacteriophage lambda, still incompetent to invade most E. coli cells, plus mutated E. coli, now with broken genes which remove its ability to metabolize maltose and mannose. It seems Darwinian evolution took a little step sideways and two big steps backwards.
http://www.evolutionnews.org/2012/01/more_from_lensk055751.html

A Blind Man Carrying a Legless Man Can Safely Cross the Street - Michael J. Behe - January 2012
Excerpt: Finnegan et al’s (2012) work intersects with several other concepts. First, their work is a perfect example of Michael Lynch’s idea of “subfunctionalization”, where a gene with several functions duplicates, and each duplicate loses a separate function of the original. (Force et al, 1999) Again, however, the question of how the multiple functions arose in the first place is begged. Second, it intersects somewhat with the recent paper by Austin Hughes (2011) in which he proposes a non-selective mechanism of evolution abbreviated “PRM” (plasticity-relaxation-mutation), where a “plastic” organism able to survive in many environments settles down in one and loses by degradative mutation and drift the primordial plasticity. But again, where did those primordial functions come from? It seems like some notable workers are converging on the idea that the information for life was all present at the beginning, and life diversifies by losing pieces of that information. That concept is quite compatible with intelligent design. Not so much with Darwinism. Finally, Thornton and colleagues latest work points to strong limits on the sort of neutral evolution that their own work envisions. The steps needed for the scenario proposed by Finnegan et al (2012) are few and simple: 1) a gene duplication; 2) a point mutation; 3) a second point mutation. No event is deleterious. Each event spreads in the population by neutral drift. Notice that the two point mutations do not have to happen together. They are independent, and can happen in either order. Nonetheless, this scenario is apparently exceedingly rare. It seems to have happened a total of one (that is, 1) time in the billion years since the divergence of fungi from other eukaryotes. It happened only once in the fungi, and a total of zero times in the other eukaryotic branches of life. If the scenario were in fact as easy to achieve in nature as it is to describe in writing, we should expect it to have happened many times independently in fungi and also to have happened in all other branches of eukaryotes.
http://behe.uncommondescent.com/2012/01/a-blind-man-carrying-a-legless-man-can-safely-cross-the-street/

bornagain77
The fact remains that bacteria do not go extinct, and rivers do flow uphill through evaporation and rainfall. All your contorted reasoning is trumped by simple observation of reality. Petrushka
Sanford’s pro-ID thesis supported by PNAS paper, read it and weep, literally - September 2010
Excerpt: Unfortunately, it has become increasingly clear that most of the mutation load is associated with mutations with very small effects distributed at unpredictable locations over the entire genome, rendering the prospects for long-term management of the human gene pool by genetic counseling highly unlikely for all but perhaps a few hundred key loci underlying debilitating monogenic genetic disorders (such as those focused on in the present study).
https://uncommondescent.com/darwinism/sanfords-pro-id-thesis-supported-by-pnas-paper-read-it-and-weep-literally/

Unexpectedly small effects of mutations in bacteria bring new perspectives - November 2010
Excerpt: Most mutations in the genes of the Salmonella bacterium have a surprisingly small negative impact on bacterial fitness. And this is the case regardless whether they lead to changes in the bacterial proteins or not.,,, using extremely sensitive growth measurements, doctoral candidate Peter Lind showed that most mutations reduced the rate of growth of bacteria by only 0.500 percent. No mutations completely disabled the function of the proteins, and very few had no impact at all. Even more surprising was the fact that mutations that do not change the protein sequence had negative effects similar to those of mutations that led to substitution of amino acids. A possible explanation is that most mutations may have their negative effect by altering mRNA structure, not proteins, as is commonly assumed.
http://www.physorg.com/news/2010-11-unexpectedly-small-effects-mutations-bacteria.html
To get around the problem of slightly detrimental mutations that are below the power of Natural Selection to remove from the genome, Darwinists have tried to relabel the slightly detrimental mutations as 'neutral', but the fact is that if a mutation is 'just sitting there', not doing anything, then it is in reality placing an energetic burden on the cell and is not truly 'neutral' but is in fact slightly detrimental. Dr. Berlinski comments on the ad hoc 'neutral theory' of evolution here:
Majestic Ascent: Berlinski on Darwin on Trial - David Berlinski - November 2011 Excerpt: The publication in 1983 of Motoo Kimura's The Neutral Theory of Molecular Evolution consolidated ideas that Kimura had introduced in the late 1960s. On the molecular level, evolution is entirely stochastic, and if it proceeds at all, it proceeds by drift along a leaves-and-current model. Kimura's theories left the emergence of complex biological structures an enigma, but they played an important role in the local economy of belief. They allowed biologists to affirm that they welcomed responsible criticism. "A critique of neo-Darwinism," the Dutch biologist Gert Korthof boasted, "can be incorporated into neo-Darwinism if there is evidence and a good theory, which contributes to the progress of science." By this standard, if the Archangel Gabriel were to accept personal responsibility for the Cambrian explosion, his views would be widely described as neo-Darwinian. http://www.evolutionnews.org/2011/11/berlinski_on_darwin_on_trial053171.html
As well, the slow accumulation of 'slightly detrimental mutations' in humans, that is, 'slightly detrimental mutations' which are far below the power of natural selection to remove from our genomes, is revealed by the following facts:
“When first cousins marry, their children have a reduction of life expectancy of nearly 10 years. Why is this? It is because inbreeding exposes the genetic mistakes within the genome (slightly detrimental recessive mutations) that have not yet had time to “come to the surface”. Inbreeding is like a sneak preview, or foreshadowing, of where we are going to be genetically as a whole as a species in the future. The reduced life expectancy of inbred children reflects the overall aging of the genome that has accumulated thus far, and reveals the hidden reservoir of genetic damage that have been accumulating in our genomes." Sanford; Genetic Entropy; page 147

Children of incest - Journal of Pediatrics
Abstract: Twenty-nine children of brother-sister or father-daughter matings were studied. Twenty-one were ascertained because of the history of incest, eight because of signs or symptoms in the child. In the first group of 21 children, 12 had abnormalities, which were severe in nine (43%). In one of these the disorder was autosomal recessive. All eight of the group referred with signs or symptoms had abnormalities, three from recessive disorders. The high empiric risk for severe problems in the children of such close consanguineous matings should be borne in mind, as most of these infants are relinquished for adoption.
http://www.jpeds.com/article/S0022-3476%2882%2980347-8/abstract
Real-world computer simulations are here:
Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load
Excerpt: We apply a biologically realistic forward-time population genetics program to study human mutation accumulation under a wide range of circumstances.,, Our numerical simulations consistently show that deleterious mutations accumulate linearly across a large portion of the relevant parameter space.
http://bioinformatics.cau.edu.cn/lecture/chinaproof.pdf

MENDEL’S ACCOUNTANT: J. SANFORD, J. BAUMGARDNER, W. BREWER, P. GIBSON, AND W. REMINE
http://mendelsaccount.sourceforge.net/

The GS (genetic selection) Principle - David L. Abel - 2009
Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.”
http://www.bioscience.org/2009/v14/af/3426/fulltext.htm

Experimental Evolution in Fruit Flies - October 2010
Excerpt: "This research really upends the dominant paradigm about how species evolve",,, as stated in regards to the 35 year experimental failure to fixate a single beneficial mutation within fruit flies.
http://www.arn.org/blogs/index.php/literature/2010/10/07/experimental_evolution_in_fruit_flies
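A toy forward-time sketch of the linear-accumulation dynamic the first excerpt above describes (population size, mutation rate, and selection scheme are assumptions chosen for illustration; this is not the cited Mendel's Accountant code):

import math
import random

MU = 1.0      # assumed mean new deleterious mutations per offspring
COST = 0.995  # assumed multiplicative fitness cost per mutation carried
POP = 200

def poisson(lam):
    # Knuth's algorithm, to keep the sketch dependency-free.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

loads = [0] * POP  # deleterious mutation count carried by each individual
for gen in range(1, 501):
    # Each offspring inherits a random parent's load plus new mutations.
    kids = [random.choice(loads) + poisson(MU) for _ in range(2 * POP)]
    # Viability selection: survival probability COST ** load.
    survivors = [k for k in kids if random.random() < COST ** k]
    loads = random.choices(survivors or kids, k=POP)
    if gen % 100 == 0:
        # Under these assumed parameters the mean load climbs steadily,
        # because selection is too weak to purge small-effect mutations.
        print(gen, sum(loads) / POP)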
Further notes on mutations:
"I have seen estimates of the incidence of the ratio of deleterious-to-beneficial mutations which range from one in one thousand up to one in one million. The best estimates seem to be one in one million (Gerrish and Lenski, 1998). The actual rate of beneficial mutations is so extremely low as to thwart any actual measurement (Bataillon, 2000, Elena et al, 1998). Therefore, I cannot ...accurately represent how rare such beneficial mutations really are." (J.C. Sanford; Genetic Entropy page 24) - 2005 Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? (Thomas Bataillon) Abstract......It is argued that, although most if not all mutations detected in mutation accumulation experiments are deleterious, the question of the rate of favourable mutations (and their effects) is still a matter for debate. Distribution of fitness effects caused by random insertion mutations in Escherichia coli Excerpt: At least 80% of the mutations had a significant negative effect on fitness, whereas none of the mutations had a significant positive effect. “But in all the reading I’ve done in the life-sciences literature, I’ve never found a mutation that added information… All point mutations that have been studied on the molecular level turn out to reduce the genetic information and not increase it.” Lee Spetner - Ph.D. Physics - MIT - Not By Chance John Sanford writes in “Genetic Entropy & the Mystery of the Genome”: “Bergman (2004) has studied the topic of beneficial mutations. Among other things, he did a simple literature search via Biological Abstracts and Medline. He found 453,732 ‘mutation’ hits, but among these only 186 mentioned the word ‘beneficial’ (about 4 in 10,000). When those 186 references were reviewed, almost all the presumed ‘beneficial mutations’ were only beneficial in a very narrow sense–but each mutation consistently involved loss of function changes–hence loss of information. While it is almost universally accepted that beneficial (information creating) mutations must occur, this belief seems to be based upon uncritical acceptance of RM/NS, rather than upon any actual evidence. I do not doubt there are beneficial mutations as evidenced by rapid adaptation yet I contest the fact that they build meaningful information in the genome instead of degrade preexisting information in the genome.” (pp. 26-27) “Mutations, in summary, tend to induce sickness, death, or deficiencies. No evidence in the vast literature of heredity change shows unambiguous evidence that random mutation itself, even with geographical isolation of populations leads to speciation.” Lynn Margulis - Acquiring Genomes [2003], p. 29. “But there is no evidence that DNA mutations can provide the sorts of variation needed for evolution… There is no evidence for beneficial mutations at the level of macroevolution, but there is also no evidence at the level of what is commonly regarded as microevolution.” Jonathan Wells (PhD. - Molecular Biology) "Of carefully studied mutations, most have been found to be harmful to organisms, and most of the remainder seem to have neither positive nor negative effect. Mutations that are actually beneficial are extraordinarily rare and involve insignificant changes. Mutations seem to be much more degenerative than constructive…" Kurt Wise, paleontologist (2002, p.163) "The neo-Darwinians would like us to believe that large evolutionary changes can result from a series of small events if there are enough of them. 
But if these events all lose information they can’t be the steps in the kind of evolution the neo-Darwin theory is supposed to explain, no matter how many mutations there are. Whoever thinks macroevolution can be made by mutations that lose information is like the merchant who lost a little money on every sale but thought he could make it up on volume." Lee Spetner (Ph.D. Physics - MIT - Not By Chance) "The opportune appearance of mutations permitting animals and plants to meet their needs seems hard to believe. Yet the Darwinian theory is even more demanding: a single plant, a single animal would require thousands and thousands of lucky, appropriate events. Thus, miracles would become the rule: events with an infinitesimal probability could not fail to occur,,, There is no law against day dreaming, but science must not indulge in it." Pierre P. Grasse - past President of the French Academie des Sciences Mutations: evolution’s engine becomes evolution’s end! - Article Highlighting The Technical Points Of Genetic Entropy Excerpt: recent discoveries show that mutations interfere with all molecular machinery. Life’s error correction, avoidance and repair mechanisms themselves suffer the same damage and decay. The consequence is that all multicellular life on earth is undergoing inexorable genome decay. Darwin Was Wrong: A Study in Probabilities "To propose and argue that mutations even in tandem with 'natural selection' are the root-causes for 6,000,000 viable, enormously complex species, is to mock logic, deny the weight of evidence, and reject the fundamentals of mathematical probability." Cohen, I.L. (1984) - No Beneficial Mutations - Not By Chance - Evolution: Theory In Crisis - Lee Spetner - Michael Denton - video http://www.metacafe.com/watch/4036816 “Mutations are rare phenomena, and a simultaneous change of even two amino acid residues in one protein is totally unlikely. One could think, for instance, that by constantly changing amino acids one by one, it will eventually be possible to change the entire sequence substantially… These minor changes, however, are bound to eventually result in a situation in which the enzyme has ceased to perform its previous function but has not yet begun its ‘new duties’. It is at this point it will be destroyed - along with the organism carrying it.” Maxim D. Frank-Kamenetski, Unraveling DNA, 1997, p. 72. (Professor at Brown U. Center for Advanced Biotechnology and Biomedical Engineering) ...Advantageous anatomical mutations are never observed. The four-winged fruit fly is a case in point: The second set of wings lacks flight muscles, so the useless appendages interfere with flying and mating, and the mutant fly cannot survive long outside the laboratory. Similar mutations in other genes also produce various anatomical deformations, but they are harmful, too. In 1963, Harvard evolutionary biologist Ernst Mayr wrote that the resulting mutants “are such evident freaks that these monsters can be designated only as ‘hopeless.’ They are so utterly unbalanced that they would not have the slightest chance of escaping elimination through natural selection." - Jonathan Wells "A Dutch zoologist, J.J. Duyvene de Wit, clearly demonstrated that the process of speciation (such as the appearance of many varieties of dogs and cats) is inevitably bound up with genetic depletion as a result of natural selection. When this scientifically established fact is applied to the question of whether man could have evolved from ape-like animals,'.. 
the transformist concept of progressive evolution is pierced in its very vitals.' The reason for this, J.J. Duyvene de Wit went on to explain, is that the whole process of evolution from animal to man " ' . . would have to run against the gradient of genetic depletion. That is to say, . . man )should possess] a smaller gene-potential than his animal ancestors! [I] Here, the impressive absurdity becomes clear in which the transformist doctrine [the theory of evolution] entangles itself when, in flat contradiction to the factual scientific evidence, it dogmatically asserts that man has evolved from the animal kingdom!" —Op. cit., pp. 129-130. [Italics his; quotations from *J.J. Duyvene de Wit, A New Critique of the Transformist Principle in Evolutionary Biology (1965), p. 56,57.]
bornagain77
you state; 'If genetic entropy were actually a problem, the fastest replicators — viruses, bacteria — would go extinct. Doesn’t happen.' Pretty simplistic reasoning pet, given the multiple layers of DNA repair mechanisms now being found in the cell (a VERY anti-Darwinian discovery!), unfortunately for you many more lines of 'real world' evidence completely support GE:
Genetic Entropy - Dr. John Sanford - Evolution vs. Reality - video http://vimeo.com/35088933

Mutations Prove Creation with Dr. Jerry Bergman, Part 1 - video http://www.youtube.com/watch?v=i9ue1P50L48 (Part 2: http://www.youtube.com/watch?v=Pvjsy_p6eSs)
The evidence for the detrimental nature of mutations in humans is overwhelming: scientists have already catalogued over 100,000 mutational disorders.
Inside the Human Genome: A Case for Non-Intelligent Design, p. 57, by John C. Avise Excerpt: "Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease-causing mutations identified to date in Homo sapiens."
I went to the mutation database website cited by John Avise and found this stated in 2009:
HGMD®: Now celebrating our 100,000 mutation milestone!
I really question their use of the word "celebrating". (Of note: apparently someone with a sense of decency has now removed the word "celebrating".)
"Mutations" by Dr. Gary Parker Excerpt: human beings are now subject to over 3500 mutational disorders. (of note: this 3500 figure is cited from the late 1980's) http://www.answersingenesis.org/home/area/cfol/ch2-mutations.asp
The following study confirmed the detrimental mutation rate for humans, 100 to 300 per generation, that John Sanford estimated in his 2005 book Genetic Entropy:
Human mutation rate revealed: August 2009 Every time human DNA is passed from one generation to the next it accumulates 100–200 new mutations, according to a DNA-sequencing analysis of the Y chromosome. (Of note: this number is derived after "compensatory mutations") http://www.nature.com/news/2009/090827/full/news.2009.864.html
This more recent study found a slightly lower figure:
We Are All Mutants: First Direct Whole-Genome Measure of Human Mutation Predicts 60 New Mutations in Each of Us - June 2011 http://www.sciencedaily.com/releases/2011/06/110613012758.htm
This "slightly detrimental" mutation rate of 100 to 200, or even 60, per generation is far greater than the mutation rate even evolutionists agree an organism can sustain (a quick arithmetic sketch follows the citations below):
Beyond A "Speed Limit" On Mutations, Species Risk Extinction Excerpt: Shakhnovich's group found that for most organisms, including viruses and bacteria, an organism's rate of genome mutation must stay below 6 mutations per genome per generation to prevent the accumulation of too many potentially lethal changes in genetic material. http://www.sciencedaily.com/releases/2007/10/071001172753.htm

Contamination of the genome by very slightly deleterious mutations: why have we not died 100 times over? - Kondrashov, A.S. http://www.ingentaconnect.com/content/ap/jt/1995/00000175/00000004/art00167

The Frailty of the Darwinian Hypothesis Excerpt: The net effect of genetic drift in such (vertebrate) populations is "to encourage the fixation of mildly deleterious mutations and discourage the promotion of beneficial mutations." http://www.evolutionnews.org/2009/07/the_frailty_of_the_darwinian_h.html#more
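To make the arithmetic behind that comparison explicit, here is a minimal Python sketch. It is purely illustrative and not from any of the cited papers: it assumes a constant per-generation rate with no selection, and uses only the figures quoted above (roughly 60 to 200 new mutations per person per generation, against the roughly 6 mutations/genome/generation threshold attributed to the Shakhnovich study).

# Toy comparison of the quoted human mutation rates against the quoted
# "speed limit"; constant-rate, no-selection model, for illustration only.
QUOTED_RATES = {
    "Nature 2009 (Y-chromosome)": 150,      # midpoint of the 100-200 range
    "ScienceDaily 2011 (whole-genome)": 60,
}
SPEED_LIMIT = 6  # mutations per genome per generation (Shakhnovich study)

for source, rate in QUOTED_RATES.items():
    print(f"{source}: {rate}/generation, "
          f"{rate / SPEED_LIMIT:.0f}x the quoted speed limit")

# Accumulated load after N generations if nothing removes the mutations:
for n in (10, 100, 1000):
    print(f"after {n} generations: {60 * n} to {200 * n} accumulated mutations")

Whether selection can in fact remove that load is exactly the point in dispute between the two commenters; the sketch only restates the numbers each side is quoting.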
bornagain77
Petrushka, love you man, but you are completely wrong, as usual. You state that second law principles are not violated by "work". Perhaps they are not; I never said work violated them. But OOL and neo-Darwinian evolution certainly are a flagrant violation of the second law (see the entropy-balance note after the citations below):
"The laws of probability apply to open as well as closed systems." Granville Sewell - Professor Of Mathematics - University Of Texas El Paso Can “ANYTHING” Happen in an Open System? - Granville Sewell PhD. Math Excerpt: If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here (it would have been violated somewhere else!). http://www.math.utep.edu/Faculty/sewell/articles/appendixd.pdf Granville Sewell - Mathematics Dept. University of Texas El Paso (Papers and Videos) http://www.math.utep.edu/Faculty/sewell/ Peer-Reviewed Paper Investigating Origin of Information Endorses Irreducible Complexity and Intelligent Design - Casey Luskin - July 2010 Excerpt: It has often been asserted that the logical entropy of a non-isolated system could reduce, and thereby new information could occur at the expense of increasing entropy elsewhere, and without the involvement of intelligence. In this paper, we have sought to refute this claim on the basis that this is not a sufficient condition to achieve a rise in local order. One always needs a machine in place to make use of an influx of new energy and a new machine inevitably involves the systematic raising of free energies for such machines to work. Intelligence is a pre-requisite. http://www.evolutionnews.org/2010/07/peer-reviewed_paper_investigat036771.html Evolution Vs. Thermodynamics - Open System Refutation - Thomas Kindell - video http://www.metacafe.com/watch/4143014 "there are no known violations of the second law of thermodynamics. Ordinarily the second law is stated for isolated systems, but the second law applies equally well to open systems." John Ross, Chemical and Engineering News, 7 July 1980 "...the quantity of entropy generated locally cannot be negative irrespective of whether the system is isolated or not." Arnold Sommerfel, Thermodynamics And Statistical Mechanics, p.155 "Bertalanffy (1968) called the relation between irreversible thermodynamics and information theory one of the most fundamental unsolved problems in biology." Charles J. Smith - Biosystems, Vol.1, p259.
bornagain77
Work does not violate any second law principles. Conservation laws do not apply to systems that learn via feedback. As long as there is an energy gradient, work can be done without any violation of any second law principle. If genetic entropy were actually a problem, the fastest replicators -- viruses, bacteria -- would go extinct. Doesn't happen. Current laboratory experiments -- Lenski, Thornton -- support a functional space that can be connected. Petrushka
I didn't say "what did happen at the OOL." That's taken, it seems to me, as axiomatic, and then you can look at bacteria, metazoans, etc. Starbuck
Petrushka, why is there "no point" in asking what happened at the origin of life because "we'll never know," yet at the same time you claim we can know with certainty what subsequently happened right past the OOL? Why the double standard? And why are current laboratory experiments not good enough for you to refute Darwinism and any OOL scenario as completely ludicrous? Did you find a loophole around the Second Law (Sewell)? Did you find a loophole around COI (Dembski-Marks)? Did you find a loophole around the First Rule (Behe), or Genetic Entropy (Sanford)? Exactly why are you so biased, to the point of complete scientific blindness? bornagain77
I see no point in asking what did happen at OOL. We'll never know. We can, however, explore whether it is possible for replicators to arise through regular processes. Petrushka
I think there is one key difference between your perspective on front-loading and Mike Gene's: in my opinion, Mike Gene doesn't really concern himself with the question of how sophisticated molecular systems could be front-loaded. For him it's not a matter of what can or can't happen, but of what did happen. As such, I don't think he allows ID-type complexity arguments into his thinking on FLE, unless I'm mistaken. Starbuck
Biological complexity. Front-loading predicts that the last universal common ancestor (LUCA) was quite complex, complete with genes necessary for the origin and development of metazoan life forms.
That sounds like an impressive spec, but horizontal gene transfer makes it problematic to define what you mean by LUCA. And the first metazoans would have had most of the genes found in modern ones, since those genes were invented by microbes.
The front-loading hypothesis proposes that the universal optimal genetic code was present at the dawn of life: in other words, we won’t find precursors of sub-optimal genetic codes, because the genetic code was optimal from the start.
So is OOL research relevant or not, assuming it eventually supports a bootstrap scenario? Petrushka
