Uncommon Descent Serving The Intelligent Design Community

The First Gene: An information theory look at the origin of life

The First Gene: The Birth of Programming, Messaging and Formal Control

Here, edited by David Abel, is The First Gene: The Birth of Programming, Messaging and Formal Control:

“The First Gene: The Birth of Programming, Messaging and Formal Control” is a peer-reviewed anthology of papers that focuses, for the first time, entirely on the following difficult scientific questions:

* How did physics and chemistry write the first genetic instructions?
* How could a prebiotic (pre-life, inanimate) environment consisting of nothing but chance and necessity have programmed logic gates, decision nodes, configurable-switch settings, and prescriptive information using a symbolic system of codons (three nucleotides per unit/block of code)? The codon table is formal, not physical. It has also been shown to be conceptually ideal.
* How did primordial nature know how to write in redundancy codes that maximally protect information?
* How did mere physics encode and decode linear digital instructions that are not determined by physical interactions?

All known life is networked and cybernetic. “Cybernetics” is the study of various means of steering, organizing and controlling objects and events toward producing utility. The constraints of initial conditions and the physical laws themselves are blind and indifferent to functional success. Only controls, not constraints, steer events toward the goal of usefulness (e.g., becoming alive or staying alive). Life-origin science cannot advance until first answering these questions:

1. How does nonphysical programming arise out of physicality to then establish control over that physicality?
2. How did inanimate nature give rise to a formally-directed, linear, digital, symbol-based and cybernetic-rich life?
3. What are the necessary and sufficient conditions for turning physics and chemistry into formal controls, regulation, organization, engineering, and computational feats?

“The First Gene” directly addresses these questions.
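[Editorial illustration.] The "redundancy codes" the blurb refers to are visible in the standard codon table itself: 64 codons map to only 20 amino acids plus stop, so most amino acids have several synonymous codons. A minimal Python sketch of this (using the standard genetic code; the example sequence is made up):

```python
from itertools import product
from collections import Counter

# Standard genetic code (DNA alphabet): the 64 codons in TCAG order,
# mapped to one-letter amino acid codes ("*" marks a stop codon).
bases = "TCAG"
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {"".join(c): aa for c, aa in zip(product(bases, repeat=3), amino_acids)}

def translate(dna):
    """Translate a DNA string codon by codon (no frame or error checking)."""
    return "".join(codon_table[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3))

print(translate("ATGGCT"))                 # MA (Met-Ala)
# Redundancy: leucine ("L") is encoded by 6 synonymous codons.
print(Counter(codon_table.values())["L"])  # 6
```

Because third-position changes often yield a synonymous codon, many point mutations leave the encoded protein unchanged, which is the error-buffering property the blurb alludes to.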

As we write, it is #2 in biophysics, and the trolls haven’t even got there yet.

Here’s Casey Luskin’s review:

Materialists Beware: The First Gene Defends a Strictly Scientific, Non-Materialist Conception of Biological Origins:

The First Gene investigates a number of different types of information that we find in nature, including prescriptive information, semantic information, and Shannon information. Prescriptive information is what directs our choices, and it is a form of semantic information — which is a type of functional information. In contrast, Shannon information, according to Abel, shouldn’t even be called “information” because it’s really a measure of a reduction in certainty, and by itself cannot do anything to “prescribe or generate formal function.” (p. 11) Making arguments similar to those embodied in Dembski’s law of conservation of information, Abel argues that “Shannon uncertainty cannot progress to becoming [Functional Information] without smuggling in positive information from an external source.” (p. 12) The highest form of information, however, is prescriptive information:
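[Editorial illustration.] Abel's point that Shannon's measure concerns uncertainty rather than function can be made concrete: Shannon entropy scores a sequence only by its symbol statistics and is blind to whether the sequence does anything. A minimal Python sketch (the example strings are invented for illustration):

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol: H = sum over symbols of p * log2(1/p)."""
    counts = Counter(seq)
    n = len(seq)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# The measure depends only on symbol frequencies, not on meaning or function:
print(shannon_entropy("AAAA"))  # 0.0 -- a repeated symbol carries no uncertainty
print(shannon_entropy("ACGT"))  # 2.0 -- maximal for a four-symbol alphabet
```

A random string and a functional one drawn from the same alphabet with the same symbol frequencies get identical entropy scores, which is why Abel insists the measure cannot by itself "prescribe or generate formal function."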

Comments
What does it mean if you say in one sentence that you can’t do something, and in the next that it might be possible? Doesn’t that negate the first statement that it can’t be done?
It means that I accept the conventional wisdom that faster than light travel is impossible, time travel to the past is impossible, ESP doesn't exist, UFOs are not alien spaceships, and so forth, but I am not emotionally attached to these beliefs and I read about counterclaims with interest. I have not said protein design is impossible. I have said it is not possible with the known resources of the universe to predict the utility of coding sequences. But I could be wrong. Feel free to demonstrate that you can take an existing coding string, make a point mutation, and predict the change in utility. This is what I would like to call the Douglas Axe conundrum. Most changes to coding strings are detrimental to utility. Tiny changes can have catastrophic effects. My point is that this makes evolution unlikely, but it makes design without evolution impossible. All the commercial molecule designers use directed evolution. Petrushka
To explain the existence of recorded information, we need a mechanism to satisfy the observed physical consequences of recorded information
I read the post. You might note that my recent question quotes from it. But since the indenting on the forum can get confusing, let me repeat it. When you refer to “the rise of the recorded information in the genome” are you referring to each and every instance of information change or increase, or are you referring to the origin of the system, or are you referring to something like irreducible structures? Petrushka
Petrushka, I am going to take you at face value. If this post is not something you can understand, then frankly, you have no business making the claims on this forum that you've made. As for Dr Liddle's Skeptical Zone, we are still awaiting her response. The dilly-dallying of the commenters there exists not because they don't understand the argument, but because they have no rebuttal to it. Upright BiPed
Please humor me and try to answer a simple question. When you refer to "the rise of the recorded information in the genome" are you referring to each and every instance of information change or increase, or are you referring to the origin of the system, or are you referring to something like irreducible structures? Petrushka
A lot hinges on this contradictory statement:
One reason the analogy with computer code fails is the problem of translation. Just as you can’t really translate poetry from one human language to another, you can’t translate DNA to computer simulations and design completely novel structures with complete accuracy. It might be possible, but not with our current understanding of physics.
What does it mean if you say in one sentence that you can't do something, and in the next that it might be possible? Doesn't that negate the first statement that it can't be done? You're relying a lot on this "it can't be done" argument, and by your own words you don't believe it yourself. I've lost count of how many times you've repeated it, and now you've unraveled it. Doesn't that negate every single post in which you've said it's impossible? ScottAndrews2
You are correct to assume I don't follow your argument. Most importantly, I can't tell whether it applies to the origin of life or to evolution, or to some ongoing feature of life. I have read through the thread at Skeptical Zone several times in hopes that someone could clarify your position or offer an alternative explanation of it, but I didn't see anyone there who understood it. You have posted a lot of words, but I think you begin to make your point here:
Therefore, the search for an answer to the rise of the recorded information in the genome needs to focus on mechanisms that can give rise to a semiotic state, since that is the way we find it. We need a mechanism that can cause an arrangement of matter to serve as a physical representation.
That seems to be an origins question. Petrushka
Information is a label used to describe a process. The process is chemical. If you are into dictionary definitions, look up reification.
The definition of information that I operate under is the standard, dictionary sense (from the Latin verb informare: to give form to, to in-form). Sorry, but I really have no interest in now playing a game of definition derby, particularly since the dynamics of the phenomena are already understood and coherently described. Such games become evident when someone objects to the use of a term, but fails to demonstrate that the term was misused. Moreover, after reading your posts this morning, it becomes clear to me that you really have not quite conceptualized what is at issue. Perhaps the best response is to back out and allow you a chance to process the information. I can tell you that protein folds, computer simulations, and the differing rhythms of human speech have nothing whatsoever to do with it. later... Upright BiPed
One reason the analogy with computer code fails is the problem of translation. Just as you can't really translate poetry from one human language to another, you can't translate DNA to computer simulations and design completely novel structures with complete accuracy. It might be possible, but not with our current understanding of physics. Check gpuccio's link. In human language, deep translation fails because you can't simultaneously translate rhythm, connotation, denotation, melody and phrasing. I'm not enough of a physicist to know why chemistry can't be completely simulated. I just note that the people attempting it fall back to chemistry and directed evolution for industrial design. Petrushka
In any case, biologists (as demonstrated by Larry Moran himself) routinely view the information transfer in the genome as only being analogous to semiotic transfer.
Information is a label used to describe a process. The process is chemical. If you are into dictionary definitions, look up reification. Take a look at gpuccio's discussion and link regarding protein design. Chemistry does stuff that can't be abstracted and can't be modeled completely with our current knowledge of quantum mechanics. Chemistry is faster than computation, and our best efforts to predict chemistry are both painfully slow and inaccurate. I grant that DNA embodies a code, but it is a code that defies abstraction. Look at gpuccio's link. Our best efforts to design completely novel proteins using atom by atom simulation and quantum theory are both slow and inaccurate. We can't read the code except by running the chemistry. We can't design it except by running the chemistry, using cut and try. Petrushka
Aside from the big words you have restated what biologists have known since about 1968, that DNA embodies a code that is interpreted by the cellular machinery.
Being a follower of biology, surely you can point me to a paper where these particular physical observations are being advanced, or even discussed. A semiotic genome would virtually falsify materialism, so I bet you won't find much. In any case, biologists (as demonstrated by Larry Moran himself) routinely view the information transfer in the genome as only being analogous to semiotic transfer. He and they are wrong. Its semiotic state has physical entailments which can be coherently observed.
No one really knows if the system evolved or was poofed into existence. Reasonable people can believe it was poofed, and reasonable people can research the problem under the assumption it might have a natural origin.
You are correct, a research program to study the possible unguided origin of Life is perfectly acceptable, and by logical extension, so is a program to study a possible guided origin of Life. But what science cannot do (from the level of peer-reviewed publishing, up through the public domain) is ignore that the information transfer in the genome is semiotic; it very observably is. We know this is true, and we know why it is true. It's a system that demonstrates the same dynamic entailments as any other semiotic information system. Those entailments have been observed and understood, and those observations stand unrefuted.
I fail to see the source of your hostility toward me.
Please don't mistake directness for hostility, after all, this is a competition of ideas, and you have been no less strident in defending yours and attacking mine. To this very moment you haven't actually engaged the content of the argument being made, yet you are quick to demean ID with repeated little quips of "magic" and "poof". I have not reacted to any of them. But having said that, if you feel I have overstepped into hostility, then I do apologize. Rest assured, any hostility is directed at your argument. Upright BiPed
Aside from the big words you have restated what biologists have known since about 1968, that DNA embodies a code that is interpreted by the cellular machinery. No one really knows if the system evolved or was poofed into existence. Reasonable people can believe it was poofed, and reasonable people can research the problem under the assumption it might have a natural origin. A third, but not exhaustive, possibility is that the universe is rigged to make it happen without additional intervention. If this is the case, research will show that a natural origin is possible. I fail to see the source of your hostility toward me. Petrushka
Properties are immaterial or abstract by definition.
Magnets with like poles will repel each other regardless of an observer. That quality is a part of their material properties. The only immaterial thing connected to that reaction is if I observe it and record the information, but whether that happens or not, it would not alter the reaction itself. However, if I take those magnets and use them to spell my name across the refrigerator door, then they have taken on an immaterial quality that they themselves can never attain from their physical make-up. They signify something that they have absolutely no material connection to, and therefore a physical protocol is required in order for that immaterial relationship to be realized. It's a relationship that otherwise wouldn't exist. So let's not equivocate on definitions, it is a waste of time.
What I’m asking is the nature of the event in which molecules were arranged in a way that enabled them to reproduce and evolve.
Because we weren't there when it happened, and because we try to be careful not to make unsupported assumptions, we are left to a rational observation of the physical dynamics themselves in action. Without the slightest equivocation, the information storage and transfer system in the genome displays the same physical objects and physical dynamics as any other form of recorded information transfer ever observed. By satisfying each of the physical entailments of recorded information transfer, it demonstrates that its nature is semiotic. The coherent observation of its semiotic state must be the guide for any claims made with material integrity. Upright BiPed
Properties are immaterial or abstract by definition. What I'm asking is the nature of the event in which molecules were arranged in a way that enabled them to reproduce and evolve. When did this happen? Did it happen more than once? Was it updated repeatedly? I promise not to ask by whom. I just want to know where the natural formation of organic molecules left off and the designed molecules began. Petrushka
Petrushka, you have already been given the evidence of immaterial properties observed to exist in the translation system. Time to spin again. Upright BiPed
Quite frankly, the only opponent on this forum that has shown any guts (any intellectual stomach at all) has been Dr Elizabeth Liddle. Upright BiPed
I'm not aware of any properties of anything that are not instantiated. Perhaps if you would identify the event or events you are referring to, with a before and after snapshot of the relevant system. Petrushka
How does an immaterial property become instantiated into a physical object Petrushka?? Upright BiPed
…and yet you still haven’t acknowledged the [observable] evidence. You simply can't do it in earnest. Your ideology doesn't leave you the empirical sovereignty to do so. Upright BiPed
Protein assembly is chemistry. The code that specifies them evolves. The origin of the cellular machinery is a mystery. I suspect the origin is closer to that described by Michael Denton in Nature's Destiny than it is to intervention by unnamed demiurges, but I have no great emotional attachment to hypothetical causes. I do have an attachment to ways of researching mysteries, and I am more impressed by Szostak than by anyone in the ID movement. But I have no personal commitment to Szostak being correct. We might never solve the mystery. I would place a small bet it will remain a mystery for my lifetime. I have lots of conjectures without deep emotional ties. I suspect that strong AI is possible, but I see nothing on the horizon that will lead to it in my lifetime. I think faster than light travel is impossible, but I watch with curiosity the recent experiments. Petrushka
…and you still haven’t acknowledged the evidence. Upright BiPed
Petrushka, you have made dozens of remarks that proteins cannot be the result of design. Yet, proteins are assembled as the result of processing semiotic information. This is an observed fact. You now want to equivocate and play the ridiculous school-yard suggestion that "OH YES, the information processing system required for protein synthesis may show evidence of design, but I am not talking about that, I am only talking about the proteins themselves..." Very convincing. Upright BiPed
If your evidence doesn't dispute common descent, I see no reason to be interested in it. Petrushka
Where have I said it was? Link to a post of mine where I make claims about the origin of life. Petrushka
...and you still haven't acknowledged the evidence. Upright BiPed
The evidence falsifies the claim that Life is the result of unguided chemical origins. Upright BiPed
If it's not anti-evolution, it's not against me. What is it you think I am that the evidence is against? Petrushka
ID is not anti-evolution Petrushka, and it's not anti-common descent either. You already know this. Yet, at the same time, you haven't even acknowledged the evidence against you. Upright BiPed
Interesting claim. I was just reading a November article that describes 60 genes new to humans that have clear sequence similarities to non-coding sequences in chimps, indicating that new gene formation via mutation is not rare. The new genes appear to have low functionality, which is what you would expect if they haven't been refined over time. http://www.plosgenetics.org/article/info:doi/10.1371/journal.pgen.1002379 Petrushka
That's right Petrushka, your ideology has been blown up by modern molecular science. The best thing to do is not notice. Upright BiPed
I suppose we are deadlocked then. You accept that evolution happens, but don't think it can produce anything useful. I have not seen any design events nor any sightings of the designer. Nor has anyone demonstrated that it is feasible to design living things from scratch without using evolutionary processes. Petrushka
observed rates of mutation can provide the rate of population change necessary to account for the differences between species.
That would mean something if all that was required was a rate of change. My car can consistently drive 100mph. That does not make driving to the moon a problem of distance/rate. Rate of change means nothing. ScottAndrews2
Petrushka, That's not what I meant. You said:
Lenski and Thornton are committed to demonstrating that evolution can traverse landscapes that include neutral or slightly detrimental mutations.
We already know that evolution can traverse landscapes consisting of neutral and detrimental mutations. It's such an obvious statement that I don't know why they would have to commit themselves to demonstrating it. ScottAndrews2
Also because this kind of work provides a detailed account of the landscape and counters the argument that it cannot be traversed. And because Lenski's work in particular demonstrated that a small colony of bacteria can explore every possible point mutation in a couple of decades -- a confirmation that observed rates of mutation can provide the rate of population change necessary to account for the differences between species. Petrushka
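[Editorial illustration.] The arithmetic behind the "every possible point mutation" claim can be sketched with round numbers. Every figure below is an assumed, illustrative value, not measured data from Lenski's experiment:

```python
# Back-of-envelope sketch; all numbers are assumed round figures.
genome_size = 4.6e6                # assumed E. coli genome length (base pairs)
possible_point_mutations = genome_size * 3   # 3 alternative bases per site

mutation_rate = 1e-9               # assumed mutations per base per replication
population = 5e8                   # assumed cells replicating per generation
generations_per_year = 2400        # assumed (~6-7 generations per day)

mutations_per_year = (genome_size * mutation_rate) * population * generations_per_year
print(f"possible point mutations: {possible_point_mutations:.2e}")
print(f"new mutations per year:   {mutations_per_year:.2e}")
# Under these assumptions new mutations accrue far faster than the size of
# the single-point-mutation space, though coupon-collector effects mean
# actually hitting every one takes longer than the raw ratio suggests.
```

The point of the sketch is only that plausible rates put the whole single-mutation neighborhood within reach of a bacterial colony on laboratory timescales; the specific constants are assumptions.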
Because Behe's Edge argues that three step adaptations are too improbable to occur without design. Evolution News and Views specifically cited Thornton's reconstruction of a "gap" too improbable to occur in nature. Claims like that are tedious and time consuming to research. Lenski's work has taken decades. Petrushka
Back up:
Lenski and Thornton are committed to demonstrating that evolution can traverse landscapes that include neutral or slightly detrimental mutations.
Why are they committed to demonstrating what is observed with every successive generation of pretty much every living thing? ScottAndrews2
people like Lenski and Thornton are committed to demonstrating that evolution can traverse landscapes that include neutral or slightly detrimental mutations.
Research may as well come to a screeching halt if they are committed to the conclusion they are looking for.
What would bring research to a halt is if anyone took ID seriously.
I asked for an explanation for this reasoning, and you respond by saying it again. If all we're going to do is assert without explaining, I can do it too. Anyone taking ID seriously would not bring research to a screeching halt. Which research, by the way? Do you mean the collapse of science in general, or did you have something specific in mind? ScottAndrews2
I hope I don’t need to point out the problem with your island and ocean metaphor.
If you understand that it's metaphor then why are you attempting to apply it literally? Nonetheless...
Islands are connected to the mainland, and water — even 500 miles of it — has not been an absolute barrier to the passage of living things.
The only evidence you have that monkeys ever sailed oceans is that the monkeys are there and the evolutionary timeline requires it. A glaring contradiction between the theory and the evidence is impossible to believe. Breeding pairs of monkeys doing a reverse Castaway? If that's what it takes, it must have happened. Theory first, evidence second.
Prove me wrong. Tell me how to interpret coding sequences, or tell me how to design a protein domain sequence.
Let's follow this where it leads. I'll just make something up. Sending people to Mars and returning them safely is impossible. It's a fact. And the only way for you to dispute that fact is to design the spacecraft and show me your mission plan. And if you won't, you must concede that it is impossible. Now that I've laid out my impeccable logic I'll sit back and watch for the layoffs at NASA. Let me save you the trouble of moving the goalposts. I'll do it for you: This is different! At least the people at NASA are working on it! (It just moves quicker this way.) So it was impossible until they started working on it and then became possible? You're building quite a bit on the foundation that if we don't know how to do something, it's impossible. It's such a fantastic assertion that I can't believe I spent a few paragraphs refuting it. But there it is. Will you continue to argue that deliberately designing functional protein folds without searching every combination is impossible? ScottAndrews2
Research is not going to come to a screeching halt for the simple reason that people like Lenski and Thornton are committed to demonstrating that evolution can traverse landscapes that include neutral or slightly detrimental mutations. What would bring research to a halt is if anyone took ID seriously. How many ID proponents are actively looking for gap fillers? Petrushka
I hope I don't need to point out the problem with your island and ocean metaphor. Islands are connected to the mainland, and water -- even 500 miles of it -- has not been an absolute barrier to the passage of living things. Not knowing the exact history of the traversal does not imply that animals arrived by magic. And it is magic you are claiming. You cannot cite any instance of design nor can you describe how one would go about designing something as basic as a protein domain. The simple fact is that if design can't be incrementally modified by evolution, the only other option is magic. Prove me wrong. Tell me how to interpret coding sequences, or tell me how to design a protein domain sequence. Petrushka
I have a simple microwave and a more complex one. I have a bicycle and a motorcycle. I have a calculator and a computer. Each is a simplified model followed by a more complex version with more functionality. Those are not incremental changes, and there is no reason to think that you could ever get one from the other if you could reproduce them with variation and apply a fitness function. You might actually optimize your bicycle. But a series of undirected, incremental changes will never invent a combustion engine or an ignition. There's that line again between improvement and invention. Many, many years ago we observed (I will oversimplify) that A exists and B exists, and someone proposed that B evolved from A. We then began debating it. We still are. You're simply pointing out that A exists as if it's some exciting new discovery. We already knew this. It's not new information. It sheds no light on anything. ScottAndrews2
The genes have functions, just simpler functions. In this case, simpler versions of vision. Petrushka
If incremental evolution of vision is impossible, why are there incremental versions of vision in existing things?
I'm using the word "incremental" in the evolutionary sense - single genetic increments. I didn't think I needed to point that out since we're talking about evolution. This thing you linked to, what was the incremental step before it? What was the one after it? What comes three steps after it? Four? What in the world does the link you posted have to do with incremental change? I ask you to show me a traversable landscape and you point to a mountain 500 miles away in the middle of the ocean. ??? ScottAndrews2
We find genes for vision in organisms that don't have vision today, and never have had. We find genes for nerves in organisms that don't have nerves today, and never have had. Answer: Visual systems and nervous systems of organisms with vision and nerves are so easily constructed that they don't even need to be selected for in order to appear. Poof. Now if we could just figure out how the observed immaterial formalities of information storage and translation (which make it all possible) came into being. Nahhhh, that's not important. Upright BiPed
Great- how do we test the claim that "evolutiondidit"? Joe
If incremental evolution of vision is impossible, why are there incremental versions of vision in existing things? http://www.nsf.gov/news/news_summ.jsp?cntn_id=110443 How much do you want to bet that we will not continue finding incremental versions of vision? Petrushka
Following in Darwin's footsteps, you place the onus on me to state that something is impossible. Then you'll want to know why it's impossible. That's not how it works. It works like this: I'll give you an example, the evolution of complex vision. I happen to think that its evolution in increments of variation and selection is impossible. I'm not saying that in a very scientific manner. It just seems as preposterous as anything I've ever heard. The dog is more likely to eat your homework. But it's just an opinion. Your job, or someone's, is to demonstrate that it is possible. That's admittedly difficult. I can't reasonably expect you to evolve an eye or anything else. At the same time, the extrapolation just doesn't cut it. People, for example, get taller, shorter, wider, narrower, hairier, balder, lighter, darker, etc., etc. I can't extrapolate from that to think that they ever weren't human or ever won't be. In fact, the limited range of variation within that and other species somewhat confirms the notion that variation only goes so far. I would settle for a plausible, hypothetical pathway that goes one selected, beneficial increment at a time from a sightless creature to one with complex vision. Or a pathway to some other similar complex novel function. Maybe even a substantial fragment of a pathway. Isn't that reasonable? Isn't at least one piece of one hypothetical pathway the barest minimum for asserting that such pathways exist, are common, and that all living things have traversed countless numbers of them? Conversely, without even one what is the basis for accepting the explanation? Otherwise what you have is a hypothesis without a hypothesis. I wouldn't even rule out something truly impressive from a GA as evidence. But I've seen the lists, and there's way too much intelligent input, processing, and goal-setting involved. 
I don't expect to see any such things because they violate the non-scientific common-sense principle that stuff just doesn't start doing stuff and doing it better, climbing a million-rung ladder of increased and improved function, all by itself. Reproduction with variation can't circumvent that. If you or anyone else ever proves me wrong, I'll be surprised, disbelieving, skeptical, pick it to pieces because it's probably fabricated, and then I'll go sit in a corner somewhere in a fetal position throwing ashes on myself. I promise. ScottAndrews2
You moved the smoke around, then simply took it for granted once again. This sleight-of-hand is evidenced by the fact that you refuse to acknowledge the observations against you. Your dismissive "we may never know" has a great history in the debate as well. It's a check valve that magically appears whenever a materialist gets cornered by the physical evidence. You are fooling no one but yourself. Upright BiPed
You’re switching from the capabilities of evolutionary mechanisms to evidence of descent which cannot be reduced to those mechanisms.
That's what I'm asking for. An example of descent that is not possible via known mechanisms of variation. Petrushka
The system that makes evolution possible is certainly not taken for granted. It is the central subject of study in biology. I think what you mean is that the origin of the system is unknown and may remain unknown. Petrushka
Petrushka, it has been shown over and over again that you simply take the system (which makes evolution possible) for granted, then argue over evolution being causally adequate to explain what is found in biology. How does anything you said in your post at 27.1.2.1.6 change any of that? It doesn't. It is merely smoke and mirrors, and meaningless to the issue. Upright BiPed
In other words, Petrushka simply takes the entire phenomenon for granted, and then pretends it doesn’t matter.
Well yes, evolution is a description of how living populations change over time. It is not a final cause. Intelligence is not a final cause either. It does not explain its own existence. Does it matter? Not for describing population change over time. Petrushka
Modern cyphers are an example of a class of problems that yield only to brute force.
Cyphers and encryption are intelligent, purposeful solutions. You don't use one unless you have a message, a purpose to communicate it, and foresight to determine that it should remain confidential. Then the design of the encryption/decryption process to meet that need is purely intelligent. As for breaking one, I'm sure that highly-paid, very intelligent cryptographers are thrilled to hear of their work being called "brute force." When you apply that term to sophisticated software running on designed computers it loses its meaning. Not to mention, what prompts someone to attempt decryption on a sea of seemingly random characters? What makes them think there is a message in there at all? Sounds like design and design detection. How do you look at something that is deliberate, purposeful, and intelligent at every last step and see nothing but evolution? You've got evolution-colored glasses on. Take them off and look around.
Give me an example of something in the plant or animal kingdom that appears to have no cousins. Or a structure that does not appear to be incrementally different from ancestral structures.
I'm incrementally different from my father, which supports your case how? You're switching from the capabilities of evolutionary mechanisms to evidence of descent which cannot be reduced to those mechanisms. In the examples you cite, can you elaborate on how evolution produced them - the incremental genetic changes and why each was selected? That's exactly what you need to cite them as evidence of darwinian evolution, and I'm pretty sure you don't have it. ScottAndrews2
You’re still arguing that designing functional proteins is impossible without searching every possible combination.
I'm taking the word of Douglas Axe, who has published on this very subject. But not his word alone. There are hard problems that don't have formulas for solutions. Modern cyphers are an example of a class of problems that yield only to brute force. There are many people looking for a shortcut for protein folding. Such a shortcut would be a prerequisite for biological design, assuming you are not going to allow some form of evolution. I'm not saying design is another word for cause. I'm saying that evolution designs. Pretty much the way many R&D programs design, by systematically trying and testing possibilities. Give me an example of something in the plant or animal kingdom that appears to have no cousins. Or a structure that does not appear to be incrementally different from ancestral structures. Petrushka
Well Timbo, the way to the design inference is THROUGH materialism, ie the premise that living organisms and the universe are reducible to matter and energy. Therefore the PROPER way to refute ID is to produce positive evidence for said materialism. And the proper way to criticize ID is to show how said materialism does it. Joe
Petrushka, Please get off the dope. Intelligent Design is NOT anti-evolution. Also there isn't anything in the observed processes of evolution that can be extrapolated to infer universal common descent. That means either there are or have been some unobserved processes or universal common descent is bogus. Joe
We agree that living things are designed. 2. I assert that the designer is evolution as described by mainstream biology.
Redefining "design" to include "not design" accomplishes nothing. So every effect is "designed" by its cause. Now design is another word for cause. Ripples in a puddle are designed by raindrops. You can call it whatever you want, and I'm not going to play dictionary games, but all you've done is add your own bizarre definition for the word in the hope that it will catch on. It won't. You are not talking about design. You're still arguing that designing functional proteins is impossible without searching every possible combination. By that reasoning, every post we type is impossible because we must search every possible combination of words. And yet I find myself matching words to a target, the thought I wish to express, without an iterative process that begins with a single word and evolves in functional, incremental steps, each compared against a target that doesn't exist. And it only took a minute. It's impossible. How did I do that? And what's with origami? Testing every imaginable crease in a piece of paper to reach a result that resembles an animal would take longer than a life span. How do they do that? You're creating a false choice between evolution and a random search, and it's a bad one since neither is capable. ScottAndrews2
The question being discussed is not the origin of the system but whether it works.
Okay I get it now...you want to know the designer's hair color and favorite boy band before you'll acknowledge the design inference, but when it comes to the system itself, 'how' and 'why' it works is discounted as being irrelevant to the conversation. Check. You are an outstanding empiricist. Upright BiPed
Give me an example of a specific structure, new since the Cambrian, that requires some special explanation. Petrushka
It is not my fault that ID proponents refuse to speculate on the attributes of the designer.
So if I speculated that would be better? Why? Speculation + $.99 gets a you small fries and the theory of evolution. ScottAndrews2
tell me exactly what is required for evolution to work that has not been observed or experimentally confirmed.
Now you're asking me to formulate your hypothesis for you. This isn't my idea. Why don't you tell me how this intelligent evolution designs biological innovations. An example would be especially helpful. To call evolution "intelligent" is just another way of begging the question. I have no reason to think that it is intelligent or does what an intelligent agent can. ScottAndrews2
The question being discussed is not the origin of the system but whether it works. Petrushka
One issue at a time.
lol Yeah, let us not talk about the physical evidence that allows the whole thing to operate. Upright BiPed
I don't know where I said that I cannot support my position until ID supports its position. I thought this was an ID site. I don't subscribe to ID, so what's my position got to do with it? Timbo
How I (and every other human) designs something is not the question. Taking the gratin example, I think of the end goal, work out how to achieve it, and implement the steps. But as Petrushka keeps pointing out, that approach is not possible in biology because it is not physically possible to maintain the database of combinations of what works and what doesn't. But in any case, as I said, how I design something is not the question. Unless you think the designer creates life by assembling ingredients and mixing them together? Timbo
One issue at a time. That is why I stipulated that evolution can account for the changes in populations since the Cambrian. Petrushka
2. I assert that the designer is evolution as described by mainstream biology.
Where has mainstream biology "observed and experimentally confirmed" the rise of the required formalities as they are demonstrated to exist in the storage and transfer of genetic information? Upright BiPed
That’s false. It isn’t that we refuse to. It is that it is irrelevant to the design inference.
Well let's examine that line of reasoning: 1. We agree that living things are designed. 2. I assert that the designer is evolution as described by mainstream biology. 3. You assert that the designer is some other entity. 4. I assert that evolution has been observed and has been tested experimentally and is capable of making the changes in populations necessary to account at least for the history of life since the Cambrian. 5. You assert that your designer ... (well what do you assert about your designer?) See the problem. Two claims about the origin of design, only one of which has any testable attributes. When you claim evolution is insufficient you are making a claim about evolution as a designer. I accept your claim as relevant and in need of discussion. I claim you have no alternative candidate and have not put a team on the field. I will make an additional claim, and that is that when you look at the claims of dFSCI in coding sequences, it is impossible for any finite designer to produce such coding sequences without using some version or form of evolution. Petrushka
It is not my fault that ID proponents refuse to speculate on the attributes of the designer.
ID proponents do not formally speculate on the designer because the physical evidence does not allow them to do so. In other words, they do not speculate past the point of having evidence to back up those speculations. You in turn fault them for following proper empirical practices, yet you ignore the evidence that is there for all to see.
Evolution is a kind of intelligent design. That is true. It is a system that learns from feedback. That is a kind of intelligence.
According to you, the float valve in my toilet is intelligent.
Read Shapiro or Koonin (books recommended right here on UD) and tell me exactly what is required for evolution to work that has not been observed or experimentally confirmed.
What has not been confirmed? The establishment of the formalities required for the storage and transfer of genetic information, arising by purely unguided material processes. Upright BiPed
Petrushka:
It is not my fault that ID proponents refuse to speculate on the attributes of the designer.
That's false. It isn't that we refuse to. It is that it is irrelevant to the design inference. The way to "know" the designer is through the design.
Evolution is a kind of intelligent design.
Well Intelligent Design evolution and front-loaded evolution are, but blind watchmaker evolution just breaks things. Joe
This is getting old. Invisible? No attributes? How can anything have no attributes? You’re really stretching it to make it sound absurd.
It is not my fault that ID proponents refuse to speculate on the attributes of the designer. Evolution is a kind of intelligent design. That is true. It is a system that learns from feedback. That is a kind of intelligence. Read Shapiro or Koonin (books recommended right here on UD) and tell me exactly what is required for evolution to work that has not been observed or experimentally confirmed. Petrushka
Petrushka, This is getting old. Invisible? No attributes? How can anything have no attributes? You're really stretching it to make it sound absurd. I don't need to do that. I can point out seafaring monkeys and it sounds stupid with no embellishment. Besides, your theory is identical to ID except without the I or the D. How is that better? No one cares how many mechanisms or processes you rattle off if you can't actually apply any of them or show that they do what you say they do. Take away all of the fluff with no concrete implementation and there's nothing left at all, not even an invisible designer with no attributes. Once upon a time something happened and then something else happened. Poof! ScottAndrews2
This can change the perception of science in the heads of millions of people.
Yes, one certainly struggles uphill to promote the perception that an invisible agent having no attributes and no observed instances of action needs to be taken seriously. Can you name any science other than biology that invokes invisible intelligent agents as causes? Any incident in the history of science in which such a hypothesis was confirmed? Petrushka
So what is Petrushka simply taking for granted?
Matter cannot learn by itself...
Any system that incorporates reproduction, fecundity, heritable variation and selection learns.
Reproduction and Fecundity requires functional organization to already exist. Heritable variation requires the establishment of formalities in order to transfer information from parent to daughter. Selection is simply the end result of these requirements being in place. - - - - - - - - - In other words, Petrushka simply takes the entire phenomena for granted, and then pretends it doesn't matter. Upright BiPed
Joe, I think I understand what Petrushka is saying. E.g. it is possible to create composite materials that 'remember' their state. This property of matter is actually exploited in space exploration. As far as I know one such use is spacecraft aerials unfolding under certain conditions almost to their previous twisted shape. However this does not remove the principal hurdle for materialism, the origin of control. It is intelligence that can exploit those properties of matter. And we cannot take it out of the equation. Eugene S
Both are explanatory. The comparison with darwinism is what? Again, uncertainty, contradiction, and revision are often attributes of valid theories. You seem to think they are the criteria. The reason GR and quantum theory aren't footnotes is because they both hold up under the weight of scrutiny and are subject to falsification. Darwinism is not. The transitions aren't gradual as predicted? Scratch that, transitions are punctuated. Darwin's own lenient standard for falsification, the inability to even imagine variations in incremental steps, is met 1,000 times over? Let's just put that on the back burner and come back to it later. Forget about traversing fitness landscapes - let's talk literal landscapes. Monkeys can't traverse the landscape because there's an ocean in the middle of it? No problem - Kong Tiki and the Mrs. just catch a floating branch to the next continent. Over and over and over. And then we're supposed to believe that if they can fudge this, we can trust them when it comes to the abstract concept of fitness landscapes? Once Curious George gets his own sub-theory, anything goes. That you can and do always make up something new to keep the theory afloat is not a strength. It demonstrates a commitment to the theory first and reality second. Confirming evidence is good while contradictory evidence disappears into this week's latest sub-theory. Would you buy a Yugo that goes to the shop every other week and reason that's a good thing because even a Toyota breaks down once in a while? If you did, one might reasonably conclude that owning a Yugo was most important and reliable transportation was second. ScottAndrews2
Your examples are invalid in the absence of intelligence as a means that enables learning. Matter cannot learn by itself (without human intervention).
Any system that incorporates reproduction, fecundity, heritable variation and selection learns. As far as I know everyone here accepts microevolution, so the principle that populations learn is not in dispute. ID proponents seem to believe that a system that can learn one thing is somehow blocked from learning another. Behe's Edge, whatever that means. Petrushka
Strings- GR and Q are OK wrt strings... Joe
You are again alluding to something like vitalism. No, you missed again. All I am saying is that physics and chemistry are simply not enough (mathematically, if you like) to explain the phenomenon of life simply because life is about fine tuned control. Physics and chemistry provide the material substrate while life remains irreducible to either or both, sort of like NP-complete problems are (suspected to be) polynomially irreducible to problems in the complexity class P. Hence the impotence of chance/necessity as explanatory means of the origin of control. Your examples are invalid in the absence of intelligence as a means that enables learning. Matter cannot learn by itself (without human intervention). Of course, I am not speaking about violations of the 2nd law. But such a process as life emerging spontaneously is extremely highly unlikely. There is no evidence whatsoever that can support the counter-intuitive assertion that matter without intelligence is capable of being the origin of control. Eugene S
Actually General Relativity and Quantum theory are absolutely contradictory at the particle level. Does that mean astronomers should hold off on describing the orbits of planets? Petrushka
What's interesting, Timbo, is that your argument requires some pretty big leaps. First, because I'm convinced that design can be detected apart from its implementation or process, that I have no interest in that implementation. Is that really logical? That I don't quit my job and go back to school for biology seems to reinforce it. Here's the smokescreen, and I don't even think you realize the diversion you're creating. Read back and notice that while I'm attempting to evaluate the evidence for design and for darwinism, you are only attempting to evaluate me. (Not that I've never done it.) Perhaps without even meaning to you're trying to change the subject from a comparative look at the evidence to me and what I do or don't want to know. But that doesn't matter. What if you're right and all I want to know is whether life was designed and I don't care one bit how? How would that make a difference or change the underlying evidence. It's valid to examine the other person when they show signs of bias, perhaps asserting logic and applying it selectively. But this tangent of telling me what does or doesn't interest me leads nowhere. ScottAndrews2
Science requires revisions and reexamination of evidence. But it doesn't follow that revising and altering something is what makes it good science. Physics confirms numerous predictions while leaving unanswered questions and maybe a few contradictions. Darwinism opens its gaping maw to swallow every contradiction until confirming the theory is more important than explaining anything. No longer the theory of X, it has become the vague, ineffectual theory of 'not Y.' Physics and darwinism both get revised. But physics flies you to the moon while darwinism exists only to confirm darwinism. ScottAndrews2
That's what I was asking for. ScottAndrews2
Geez my baseball is a material object- do you think it can learn? Joe
Petrushka, Your continued equivocation is nauseating. The point of ID is that "evolution" is directed- ie it has a goal/ target. That said there still isn't any evidence that the immune system evolved via accumulations of random variations- there isn't any way to even test that premise. So when you say "evolution" you need to be more specific as there are several types- for example there is blind watchmaker evolution, intelligent design evolution and front-loaded evolution. So enough with your equivocating and it is time you buy a vowel. Thanks. Joe
Inanimate matter cannot spontaneously generate meaning. Inanimate matter can be used as a substrate to carry semantics by something/somebody imparting meaning to it.
I'm not sure what you mean by inanimate matter. Is this a reference to vitalism? Matter itself has proved too complex to understand completely, so I'm not sure why "materialism" is considered a disparaging word. But material objects can certainly learn. Computerized robots can explore an environment and learn to traverse it. Learning doesn't violate any laws of entropy. Nor does evolution, which is a form of learning. Petrushka
We have a wonderful model of intelligent selection in the immune system.
You and I have somewhat different definitions of intelligent selection. I think of intelligent selection as synonymous with selective breeding -- something people have been doing for thousands of years with plants and animals. The breeder has no control over what variations occur, just which ones get to pass on their genes. It's interesting that you discuss the immune system. Shapiro devotes a lot of time to it. I'm sure that everyone at UD took advantage of the free book offer when it was announced here. What more could anyone in the ID movement want than a free major book by a mainstream scientist who has struggled for years under accusations of being ID friendly. Shapiro doesn't come out and say the immune system itself is designed. Nor does he assert that the variants produced by it are targeted in the sense that they anticipate which variant will be useful. He just notes that in time of need the production of variants is ramped up, increasing the likelihood of finding something useful. He asserts that evolution itself works this way, and that stress triggers increased rates of mutation production -- particularly production of large genomic mutations, such as duplications and transpositions. He doesn't claim that any of these anticipate specific need, but that increasing their frequency increases the probability that something useful will turn up. He, along with most mainstream biologists, notes that most of this kind of evolution happens in microbes. Which is why most protein domains have originated in microbes. Petrushka
That conclusion, with all of its unanswered questions, is preferable to a theory that endlessly adds new mechanisms, never applies them in any specific manner to what it proposes to explain...
Sort of like physics. Actually sort of like all sciences. Particularly like gravity, which has had all kinds of do-dads added on over the centuries without explaining anything about its ultimate source. Despite Einstein's efforts, the equations still don't work at all scales, indicating the theory is incomplete or flawed. So do we conclude Intelligent Falling? Do we conclude, as Newton was tempted to do, that demiurges put the planets into their orbits and keep them there? I'm really curious if you think this is how science should work? Petrushka
Petrushka, I think it was you who corrected me earlier on when we were talking about cryptography by saying that specificity depended on meaning. That was a very good point, for which I am grateful. So meaning should come first. What brings it about in neo-darwinism? IOW, why should I care about GAs producing 10 letter words when they are designed to produce those given the language, the alphabet and the semantics? Yes, they do produce 10-letter words, so what? The most important thing is the meaning and that is taken for granted. Inanimate matter cannot spontaneously generate meaning. Inanimate matter can be used as a substrate to carry semantics by something/somebody imparting meaning to it. In no experiments up to date, did they demonstrate that organisation per se (whereby complex structure would appear along with control) emerged spontaneously. Eugene S
Something funny on the subject of ID inference and possible practical implications. I was driving yesterday with one of my sons in the car. To our left on the grass, we saw an overturned ad poster. It was fine but quite windy so I immediately hypothesised that it might have been the wind that overturned it. My son said, yes but there was another identical one standing upright next to it, so, he said, it might as well have been done on purpose. I retorted by further supposing that it still might have been the wind, because the one that remained upright was probably in a more stable position than the first initially (maybe the patch of ground had a bit less of a slope). Then my son said that it was exactly the opposite and the one which was standing upright was in fact put on a visible slope whereas the other one was lying in a level place. I gave it up and said, a nice example of inference to intelligent agency. Of course, in this case we were far from exhausting all possibilities for natural cause, but I think the owner of a place may be better off installing a cctv camera, regardless of who did it and how they did it. Eugene S
Timbo:
I can’t say life wasn’t designed until a model of how that process might have worked is put up.
Evos say the stupidest things. Timbo sez he cannot support his position until ID supports its position. Timbo doesn't just have an empty plate, it appears it also has an empty head... Joe
Timbo, It appears that you are talking to yourself as your position is the empty plate that cannot muster a testable hypothesis. And obviously that bothers you... Joe
If I may cut in, it is not a boring question at all, especially in view of the "only right" naturalistic materialism. This can change the perception of science in the heads of millions of people. That I cannot describe as boring. It is terribly important. Another reason why I don't think it's boring is that, given two conflicting explanations (spontaneous origin vs. purposeful design), finding out which one's right is actually very interesting in its own right as well as in terms of its possible ramifications (philosophy of science e.g.). Eugene S
Scott,
I’d love to hear how or why detecting design would bring research to a screeching halt. It sounds clever until you scratch your head and realizes that it has no basis in reality and in fact nullifies several other fields of science.
Could you develop this a bit. Eugene S
Timbo,
I’m not attempting to reframe any questions or to refute ID. At the moment there is nothing to refute. I can’t say life wasn’t designed until a model of how that process might have worked is put up.
How do you model design? Tell me something you've designed, and then model the mental activity of designing it. If you can't then you're making this up as you go along. If you can then you've just written the hypothesis. ScottAndrews2
I'm not attempting to reframe any questions or to refute ID. At the moment there is nothing to refute. I can't say life wasn't designed until a model of how that process might have worked is put up. You put down of darwinism ironically still contains far more scope for investigation and discussion than ID does. Problem is, you have an empty plate and no apparent hunger. Timbo
You would be correct if I were claiming to offer a mechanical narrative of the manufacture of life. Having failed to refute ID, you keep attempting to reframe the question. Refute what it does state if you can rather than shifting the question. Exactly what gold standard does either abiogenesis or darwinism hold up in the hypothesis department anyway? Last I checked the hypothesis was 'a little of something like this and a little of something like that and something got selected and something drifted (we're not sure exactly which) and some other mechanism we haven't thought of yet or all of them together or any combination of the above.' Test that if you can. But, now that you mention it, empty plate + hunger > heaping crap platter. ScottAndrews2
So you are not curious enough to come up with a hypothesis? You've got an empty plate, not a 1oz steak. Timbo
Timbo, It is a bit strange that you talk about hypotheses when your position cannot even muster one. Joe
Very well. My opinion is that matching attributes of a thing of unknown origin to the observed signature of design in countless artifacts of known origin is not necessarily the same endeavor as determining the mechanism of manufacture. You can invalidate a conclusion by showing its logical fallacies or contradictory evidence. That it leaves unanswered questions does not invalidate it. You're skipping past the evidence leading to conclusion to the unanswered questions. ID is not reverse engineering. Rather, it tells you what can be reverse engineered. That conclusion, with all of its unanswered questions, is preferable to a theory that endlessly adds new mechanisms, never applies them in any specific manner to what it proposes to explain, and is also preposterous and without any precedent. 1oz steak + hunger > heaping crap platter. Not trying to be rude, just trying to clearly isolate and illustrate the point. Plus, once you eat the tiny steak you can still look for some more nutrition. It's not the end. The #2 option is just more filling. ScottAndrews2
I'm interested in your opinion. Timbo
I pity the frustrated soul who wrote the FAQ that no one reads. ScottAndrews2
An explanation would be nice, let alone a more detailed one. Science works by making a hypothesis and then modelling that. Can you even come up with a hypothesis that can then be modelled and tested? Timbo
Petrushka,
The question of whether evolution is sufficient is purely a question of the attributes of the landscape.
I'll go with that.
So it’s between an explanation that is sufficient depending on the landscape (and the landscape is amenable to being studied), and an explanation that is nothing at all and has no prospects for research.
Again, this bizarre reasoning that the potential for future research is better than accuracy. I'd want to know the truth even if that meant no prospects for research. I'd love to hear how or why detecting design would bring research to a screeching halt. It sounds clever until you scratch your head and realize that it has no basis in reality and in fact nullifies several other fields of science. Looking for a natural explanation if its existence is not a historical reality does provide infinite potential for research. It's like looking for the wreck of a ship that never existed. As long as the funding holds up the search never ends. Except for the funding part, explain again how that's better? ScottAndrews2
Timbo,
In my view, whether something is designed or not is ultimately a rather boring question. I need to know how and why. Don’t you?
I won't hold my breath for "Intelligent Design - The Motion Picture" either. A more detailed explanation would be nice. I'd like that too. But an accurate observation with no explanation is better than a really bad explanation. Sort of like knowing that arsenic is bad for you without knowing why is better than ten inaccurate reasons why it's good for you. Let me boil that down a little more: accurate is better than inaccurate. More is not always better. I'd rather eat a one-ounce steak than a heaping platter of crap, even if it leaves me hungry. How do you not see that? ScottAndrews2
Please, share with us the details of the largest landscape traversed. Otherwise all the abstract terminology in the world amounts to nothing. That is what it is, by the way, it's purely abstract, floating ideas with no concrete implementation. ScottAndrews2
ScottAndrews2, accepting that your first sentence is correct, do you actually have an alternative explanation? Because this is a site about an alternative explanation I thought, but for some reason most of the posts are actually engaged in what you complain about: that is attacking another explanation. You yourself put a lot of time into attacking evolution. You do not believe the evolution that is observed acting today is capable of producing the biological diversity we see in the world. You think that evolution can only produce "insignificant" changes (I think that is the word you have used, but forgive me if I am wrong). But I don't believe you have ever laid out your explanation. How do you think the significant changes are carried out? Saying they were "designed" doesn't explain how. If I said to my friend, "how did you make that beautiful gratin?" and he said "I designed it", I would be none the wiser. If he said that he took some potatoes and par-boiled them, then sliced them very thin, and placed them in a layer with just the sides overlapping, then placed some knobs of butter on them and then another layer of potatoes and then some cream and then baked it at 450 for 10 minutes then turned it down to 350 for 40 minutes, I would have useful information. In my view, whether something is designed or not is ultimately a rather boring question. I need to know how and why. Don't you? As I understand Petrushka, it is impossible to design a protein that works without evolving it. This is because there are so many different combinations that are possible and no way of knowing which ones are good beforehand. So I would just like to know if you have any thoughts on how the designer might have done it? Or do you believe the designer is omniscient? Timbo
OK, the characteristics of the landscape determine whether evolution can traverse it. That's a given. The argument is not about whether evolution works, but about whether the landscape supports incremental change. We've known that since around 1940, when the landscape metaphor was first described. All the other arguments about probability are rubbish. Petrushka
Umm your GA has a goal if "It is driven by the landscape of pronounceability,". If it was NOT so driven it would never produce any 10 letter words. Joe
My program demonstrates the effects of connectability in landscapes, because it runs in several different languages with exactly the same code. The only difference is in how easy it is to move around with small steps. Some languages have spelling rules that make words and pronounceable syllables close together, and other languages are sparse. So the program models smooth and rugged landscapes. Petrushka
Cannot be built incrementally via blind and undirected chemical processes. If you cannot grasp that then you do not belong here. Joe
Having coded GAs that do something at least one ID denizen said was impossible -- produce 10 letter words approximately a billion times faster than a random search -- I have some basis for arguing that your central point is false. The question of whether evolution is sufficient is purely a question of the attributes of the landscape, and that landscape is being explored by researchers like Lenski and Thornton. So it's between an explanation that is sufficient depending on the landscape (and the landscape is amenable to being studied), and an explanation that is nothing at all and has no prospects for research. Your best guy in this debate openly admits that directed evolution may be required in order to design. That says something. I'll point out that my program has no target and will just as frequently create pronounceable non-words as it will produce dictionary words. It is driven by the landscape of pronounceability, not a dictionary. It does not produce the same output from the same starting point in multiple trials. Petrushka
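[Editor's note: Petrushka's actual program is not shown in the thread, but the mechanism described above -- no target word, selection driven only by a "pronounceability" landscape -- can be sketched. Everything below is an assumed stand-in: the scoring rule (rewarding consonant/vowel alternation), the single-letter mutation scheme, and the function names are illustrative, not the original code.]

```python
import random

VOWELS = set("aeiou")

def pronounceability(word):
    """Score a string by how often adjacent letters alternate between
    consonant and vowel. A stand-in fitness function (an assumption --
    the thread does not show the real scoring rules). Higher is better.
    """
    return sum(
        1 for a, b in zip(word, word[1:])
        if (a in VOWELS) != (b in VOWELS)
    )

def evolve(length=10, generations=2000, seed=0):
    """Hill-climb toward pronounceable strings with no target word.

    Each step mutates one random letter; the mutant is kept only if its
    score does not drop, so the walk is steered by the landscape alone.
    """
    rng = random.Random(seed)
    letters = "abcdefghijklmnopqrstuvwxyz"
    current = "".join(rng.choice(letters) for _ in range(length))
    for _ in range(generations):
        pos = rng.randrange(length)
        mutant = current[:pos] + rng.choice(letters) + current[pos + 1:]
        if pronounceability(mutant) >= pronounceability(current):
            current = mutant
    return current
```

Note that no dictionary or target string appears anywhere: a run typically ends on a pronounceable non-word, which matches the claim that the landscape, not a target, does the steering.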
In what reality is an alternate explanation even required in order to dismiss a poor, inadequate one? How odd. Even without the faintest clue of how the functions of proteins could be encoded in DNA, the observation that function is in fact encoded in symbolic language cannot be poofed away. So we don't know how to encode it? Okay, we don't know. From that rock-solid evidence of intelligent agency, the leap to A) we don't know how to encode it, so it must be impossible, and then B) having illogically ruled out intelligent agency, spontaneous self-organization must be the answer, requires stalwart allegiance to an ideology. This is especially noteworthy because while you rule out one because the mechanism can't be explained, you're perfectly content to wait for a check in the mail from the other. When someone asserts principles of logic and then applies them selectively, even if that means discarding highly relevant evidence, it gives away that the underlying motivation is anything but an intent to follow the evidence where it leads. ScottAndrews2
There must not be a replication function. Replicators must replicate, really replicate. If they replicate better, because they have created better functional information with which to replicate, they will. No need for "points" or anything measured.
That is utterly confusing. I described an agent with a neural network that allows it to interact with the environment, and with a system that allows it to replicate at a rate that is variable and dependent on how it interacts with the environment. Now you tell me that it can't include a function - a part of its coding - that allows it to replicate, that it must 'really' replicate? Earlier you said this:
The replicators can be programmed any way that is considered appropriate, and the rules of random variations in them can be set as appropriate.
Just so long as it isn't programmed to replicate!!! I described something that was within the constraints you imposed; now you object. Why is that? Now you said this about an operating system:
It is designed, but not to demonstrate anything about NS. That’s enough.
Yes - it is designed to manage access to system resources and prevent - by design - software from copying itself and consuming resources in an unrestricted way. An OS is designed, in part, to prevent software from doing the things you want to test for. I guess this is why you are so keen to argue for it as a good environment, eh! ;) I said this, and you replied:
We are talking about how to model biology and an OS has very little in common with the natural environments we actually observe – it is not a good analog for the thing we are trying to model
It is impossible to model biology. That is out of the question. Modeling biology would mean modeling protein space and biological reproduction, metabolism, and so on. It’s simply impossible.
You are getting very confused I think! We are talking about environments, the conversation was about whether a computer operating system was a good analog of real world environments. We were not discussing creating a model of a replicator that matched biological replicators exactly, as you yourself suggested here:
The replicators can be programmed any way that is considered appropriate, and the rules of random variations in them can be set as appropriate.
What matters here is that we use an environment that is a good analog of the environment that real replicators exist in - we are testing the idea that replication with variety in nature can generate fsci because that variety leads to different replication rates. Just picking an environment that was designed for something totally different is not enough because you may be picking an environment that excludes features that are found in real environments, and which contribute to the generation of fsci. It would be like testing an hypothesis about the development of wings by picking an environment where aerodynamics and gravity have no analog. Wings would not develop, but that would not prove that wings cannot develop in nature, just that you picked an environment in which wings could not develop, or indeed one where the idea of 'flight' was meaningless.
Why? Nothing can be biased by design if those who designed it were not aware of the purpose (in this case the development of new functions). That’s why the environment must be blind to the experiment. Exactly my point.
I agree that it can be biased by intent, but a 'blind' environment can still be biased by design - like a well-designed OS, it can be designed to prevent unrestricted copying and consumption of memory and compute cycles. OS designers are not blind to replication; computer viruses are a big problem. Look at it this way: you want to test something about rabbits breeding, so you pick a 'blind' environment. After dropping the rabbits into the middle of the Pacific and observing them drown, you conclude that rabbits cannot breed.
In general, all forms of blinding serve to prevent cognitive bias in the experimenters from altering the results.
Blinding is used to prevent accidental communication of experimental intent between an experimenter and a subject. I don't think you can apply this concept to the way computer models are constructed - you are blinding yourself to aspects of the system you are actually trying to understand. In this instance, one thing that needs to be understood is whether there are any properties of the environment that biological replicators exist in that contribute to the generation of fsci. Picking a test environment that lacks these features (because you are blind to them) and concluding that natural selection cannot generate fsci would be to pull the wool over your own eyes - just as dropping rabbits into the Pacific does not demonstrate that rabbits cannot breed. GCUGreyArea
Since gpuccio claims that protein domain coding sequences are irreducible and cannot be built incrementally, I eagerly await the demonstration of an alternative method. Petrushka
I eagerly await any design advocate who proposes a method of biological design that does not include variation, fecundity and selection. Particularly since this is how pharmaceutical engineering is done. An approach that could produce de novo protein domains from first principles rather than using evolution would be most welcome. Petrushka
Oh my, then I am happy to await GP's response - for that is an argument in which you are sure to prevail. ;) Upright BiPed
Firstly, instead of trying to toss incoherent insults at me
I wasn't addressing you. If I had been addressing you I would have posted under your post, and it would have been indented under your post. Petrushka
Petrushka: I suppose the Designer likes His games played on a level field. Why not? Or there is more than one designer. There are many reasonable possibilities. gpuccio
Petrushka,
In the software industry your design theory would be known as vaporware.
Firstly, instead of trying to toss incoherent insults at me, it is the evidence you need to address. This is Empiricism 101. When the evidence itself indicates that your objections are based upon formalities which you simply take for granted, then it is the evidence you must address - otherwise, your objections simply fail. Do you understand? Secondly, I did not give you a "design theory". I made the evidence-based argument that the transfer of recorded information from the genome is semiotic, just like every other observed transfer of recorded information. That transfer has specific physical dynamics. Those dynamics are coherently understood, and that understanding creates a short list of four physical requirements (entailments) which are observed in such transfer - without exception. Each of those entailments is satisfied in the observation of genetic information transfer, which consequently confirms its semiotic state. I have asked you to attack those physical dynamics in earnest, but you have chosen to simply continue taking them for granted, and you maintain your objections as if such evidence didn't matter. In fact, you said as much explicitly. In the presence of such a disregard for observable evidence, I am not sure what else can be said. You seem to be operating under the belief that being non-responsive to physical evidence is a valid form of empirical methodology. It isn't. Upright BiPed
I suppose I'm most curious why the Designer gave disease-causing organisms the ability to evolve around and even subvert the immune system.
Perhaps that is due to generations of random effects on our immune system. Joe
I suppose I'm most curious why the Designer gave disease causing organisms the ability to evolve around and even subvert the immune system. If anything, AIDS is a marvel of design. I suppose the Designer likes His games played on a level field. Petrushka
Hi, UB, You are right: the modest potentials of natural selection are, however, the consequence and result of the existing replication process, which implies a lot of already existing information: not too much in the case of computer viruses, a huge amount for biological replicators. And yet, even so, that limited power of NS cannot generate any significant new functional information, least of all true dFSCI. If my proposed simulation were carried out, I am really sure that no results would be observed: the computer replicators would remain simple computer replicators, and would develop no new functions, certainly no complex ones. I am sure of that. Our Darwinist interlocutors, IMO, are as sure as I am of the same thing. That's why they try in all possible ways to dismiss my model for testing the powers of NS, and seek refuge in their ad hoc, self-serving, useless GAs, whose results, although extremely modest, are nothing other than wonderful examples of intelligent design. gpuccio
Petrushka: We have a wonderful model of intelligent selection in the immune system. Here the designer has implemented a very efficient algorithm to model high affinity antibodies from low affinity ones after the primary immune response, using somatic hypermutation limited to the complementarity regions (targeted RV), and then intelligent selection based on measurement of the affinity of the modified antibodies for the epitope presented in the antigen presenting cells and expansion of the more efficient clones. In that way, highly specific molecules are created by a process of targeted RV and intelligent selection, exactly a bottom up protein engineering. That amazing process is incorporated in the highly intelligent, obviously designed structure of the immune system, and no previous knowledge of the antigen is necessary. That's how an intelligent algorithm can model a molecule on a previously unknown molecule. gpuccio
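The affinity-maturation scheme gpuccio describes (targeted hypermutation plus selection of the best binders) can be captured in a toy sketch. Everything below is invented for illustration: antibodies are bit strings, the "antigen" is a random bit pattern, "affinity" is a simple match count, and mutation is confined to a made-up "complementarity region":

```python
import random

rng = random.Random(42)

# A made-up "antigen": the target a matured antibody must bind.
ANTIGEN = [rng.randint(0, 1) for _ in range(20)]
REGION = list(range(5, 15))  # only these positions hypermutate (targeted RV)

def affinity(ab):
    # Affinity = number of positions where the antibody matches the antigen.
    return sum(a == b for a, b in zip(ab, ANTIGEN))

def mature(pop_size=30, rounds=25):
    # Start from random, low-affinity antibodies (no prior knowledge
    # of the antigen is built into the starting population).
    pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    for _ in range(rounds):
        clones = []
        for ab in pop:
            c = ab[:]
            c[rng.choice(REGION)] ^= 1  # somatic hypermutation, region only
            clones.append(c)
        # Selection step: measure affinity and expand the best clones.
        pop = sorted(pop + clones, key=affinity, reverse=True)[:pop_size]
    return max(affinity(ab) for ab in pop)
```

Whether this kind of loop counts as "intelligent selection" (because affinity is explicitly measured) or as a model of natural selection is exactly the point in dispute between the two commenters; the sketch only shows the mechanics being argued about.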
In the software industry your design theory would be known as vaporware. Perhaps you can illuminate your theory of intelligent selection. How, for example, would the Designer optimize the host/parasite relationship in Leishmaniasis? Or malaria? Petrushka
Hi GP, As is normal in this conversation, you are correct. When Petrushka says "The thing about natural selection is that it reduces all these dimensions to a single variable, and that is differential reproductive success", he/she admits by proxy a willingness to believe that the onset of the (observed) formalities required for life processes was itself a result of life processes. This deformity in logic is not tied to physical evidence, but to ideology instead. (ie: natural selection and "differential reproductive success" are the end result of a system of formalities instantiated (and observed) in the storage and translation of genetic information, which must exist prior to the existence of either natural selection or differential reproductive success). In other words, something comes from nothing. This is the materialist's "poof" anointed into respectability by Charles Darwin, and so well armoured and protected by today's academy - directly in the face of physical evidence to the contrary. Upright BiPed
Petrushka: I will be clear. I don't mean impossible in principle. I mean impossible at present. Clear? Regarding your other points, repeated for the nth time, I have answered many times. Design can use both top down strategies and bottom up strategies. Including intelligent selection. I certainly cannot say, at present, what role each of these strategies had in the design of biological information. That point is certainly open to empirical investigation. You say: The thing about natural selection is that it reduces all these dimensions to a single variable, and that is differential reproductive success. I could not agree more. That's why the theory is wrong and essentially stupid. gpuccio
it is obviously impossible, therefore, to include a modeling of the protein space in a GA. Maybe some time we will be able to do that, but certainly not now.
I think you need to get clear on whether modelling is impossible or merely very hard. It has always been possible in principle to compute hidden lines in 3D graphical renderings, just very hard. It has always been rather simple to compute individual points. What is not simple is computing the utility of coding and regulatory sequences. Computing protein folding is very hard, but that doesn't even get you to first base in knowing about utility. Toss in regulatory networks and ecological variables and things get exponentially more complex. The thing about natural selection is that it reduces all these dimensions to a single variable, and that is differential reproductive success. I'd like to see a theory of design that doesn't incorporate fecundity and selection. Something like the theory of ray tracing that enables 3D rendering (even if you don't have the computational power to do it in real time). Let's see a thought experiment in which you establish rules for steering sequences toward utility. Without selection. Petrushka
Petrushka: Your usual nonsense. I have clearly stated that it is possible to engineer proteins top down, even if at present it is a very hard task to accomplish with our computational resources. This is a fact. It is obviously impossible, therefore, to include a modeling of the protein space in a GA. Maybe some time we will be able to do that, but certainly not now. By the way, modeling the protein space for a GA seems a huge task compared to just engineering new functional proteins. My point was very simple: existing GAs never model biology. It's impossible, at the present state of our knowledge and resources, even to begin to think of modeling biology to verify the powers of RV + NS. On the contrary, a pure experiment to test the powers of RV + NS in a non-biological, computer-based system can be done. As I have proposed. gpuccio
It's doubly amusing that when I discussed the "impossibility" of design without employing evolution, I was lectured about how many computational problems were once considered impossible but are now routine. Petrushka
It is impossible to model biology. That is out of the question. Modeling biology would mean modeling protein space and biological reproduction, metabolism, and so on. It’s simply impossible.
Odd. When I said that it is impossible to predict the utility of a coding or regulatory sequence without testing it in a living thing, I was jumped on, even by you. So what is your theory of design? How does the designer produce large chunks of dFSCI or whatever? Can you give an example of the process used by the designer (assuming the designer isn't omniscient)? Petrushka
GCUGreyArea: Thank you for your serious and thoughtful comments. A few brief answers. First, I am a bit confused as to why you would consider a computer operating system a ‘natural’ environment. It is a designed system that imposes some very deliberate constraints on software, for example limitations on resource access and consumption, and a good operating system would be designed to prevent or severely limit some aspects of a program, for example replication. It is designed, but not to demonstrate anything about NS. That's enough. We are talking about how to model biology and an OS has very little in common with the natural environments we actually observe – it is not a good analog for the thing we are trying to model It is impossible to model biology. That is out of the question. Modeling biology would mean modeling protein space and biological reproduction, metabolism, and so on. It's simply impossible. What we can model is the general principle that RV + NS can create complex functionalities in replicators. We take software replicators in a software environment, and in no way do we try to model biology. We test the assumption that replicators + RV + NS is a powerful engine for generating functional information (an assumption, IMO, completely wrong). there is a danger that it does exactly what you caution against, it is biased by design against the undirected development of new functions. Why? Nothing can be biased by design if those who designed it were not aware of the purpose (in this case the development of new functions). That's why the environment must be blind to the experiment. Exactly my point. The if implies that you have not done the experiment and so the way the statement is phrased implies that you have assumed your conclusions. The statement would be better if put as an hypothesis. Hey, I was just expressing my expectation. Just a human touch :) What do you mean by generic formal properties? 
The general formal laws that govern information, function, variation, randomness and replication, be they biological or software. if we are talking about biology then we would, at the very least, want sources of energy that can be utilized and which are distributed non uniformly in a space that also contains harmful stimuli and through which the agents can move if they have that ability. No, we are not talking about biology. As I said, it's absolutely impossible to model biology on a computer, at least at this level. We cannot model protein space (we know too little of it, and the computational resources would in any case have to be incredibly huge). And the same is true for the rest of biology. If that environment is uniform then we have designed in a low global maximum – why develop any complexity when everything is the same? The environment needs to reflect what we see in nature or it is not a model of nature. I have never said that the environment must be uniform. First of all, a computer environment is not uniform at all. And I quote myself: "If desired, the environment can change in time, to offer diversified contexts to NS. But it is essential that any variation in the environment must be “blind” to the replicators and the experiment, for the same reason as above (avoid the introduction of added information)." If we get rid of the fitness function and instead include with each agent a replication function where they accumulate accuracy points (an analog of energy) and they can then replicate if they have enough energy – would this qualify? There must not be a replication function. Replicators must replicate, really replicate. If they replicate better, because they have created better functional information with which to replicate, they will. No need for "points" or anything measured. I’m not sure ‘blind experiment’ means quite what you think it means in this context. 
In general, all forms of blinding serve to prevent cognitive bias in the experimenters from altering the results. Here, that's exactly what the blinding of the environment to the replicators is meant to achieve. gpuccio
I should add quickly, in the financial example I gave, the simulation was not designed for evolution, it was a 'black box' (in the form of a DLL) supplied by an investment bank who wanted to see if the GA would do any better than their existing trading software (it did, very effectively) GCUGreyArea
GP: Thanks for your reply, I'm afraid I don't have much time to respond in as much detail as I would like. First, I am a bit confused as to why you would consider a computer operating system a 'natural' environment. It is a designed system that imposes some very deliberate constraints on software, for example limitations on resource access and consumption, and a good operating system would be designed to prevent or severely limit some aspects of a program, for example replication. We are talking about how to model biology and an OS has very little in common with the natural environments we actually observe - it is not a good analog for the thing we are trying to model and there is a danger that it does exactly what you caution against, it is biased by design against the undirected development of new functions.
If a correct modeling of that is made, it will be clear that it can’t.
The if implies that you have not done the experiment and so the way the statement is phrased implies that you have assumed your conclusions. The statement would be better if put as an hypothesis.
You can choose an existing operating system, or somebody can design a specific one, but in that case the programing of the system must be blind, and must satisfy only generic formal properties that have nothing to do with the specific replicators that will be used, and with the hypothesis that is to be demonstrated.
What do you mean by generic formal properties? - if we are talking about biology then we would, at the very least, want sources of energy that can be utilized and which are distributed non uniformly in a space that also contains harmful stimuli and through which the agents can move if they have that ability.
The only observed things that have to be captured are those intrinsic to the basic hypothesis: that replicators, in an environment where they can replicate, complex enough and various enough, and subjected to RV in appropriate rates, can and will develop new dFSCI to exploit better the resources in the environment. Nothing else is needed.
If that environment is uniform then we have designed in a low global maximum - why develop any complexity when everything is the same? The environment needs to reflect what we see in nature or it is not a model of nature.
it is clear that each system has its rules and properties, but if they are unrelated to the replicators and to the experiment, they are no longer a bias, but only random constraints and conditions. It is the duty of the replicators/RV part to exploit those conditions, if that is possible.
Ok, I know of a project that used a GA to evolve predictors for financial markets. The simulation was designed to simulate financial markets and 'fitness' was defined as the accuracy of the prediction (they evolved neural networks to make the predictions). If we get rid of the fitness function and instead include with each agent a replication function where they accumulate accuracy points (an analog of energy) and they can then replicate if they have enough energy - would this qualify?
Blinded experiments are the rule in empirical sciences, exactly because cognitive bias is a terrible enemy.
I'm not sure 'blind experiment' means quite what you think it means in this context. GCUGreyArea
"So-called "evolutionary algorithms" (a Self-contradictory nonsense term), if they produce any formal function, are always artificially controlled. Optimization of genetic algorithms is always choice-contingent, and therefore formal rather than physical." Mung
"Adami rightly argues that information must always be about something. "Aboutness" is a common focus of attention in trying to elucidate what makes information intuitive. But aboutness is always abstract, conceptual and formal." "Jablonka rightly argues that Shannon information is insufficient to explain biology. She points to the required interaction between sender and receiver. Jablonka emphasizes both the function of bioinformation and its "aboutness," arguing that semantic information only exists in association with living or designed systems." When folks like Elizabeth Liddle and Allen MacNeill argue that information need not be about anything at all they just display how little they have actually thought upon or read about the subject. So where does 'aboutness' come from? Mung
But you are aware of minds associated with brains? Pray tell, what is this 'mind' you speak of and how is it associated with 'brain'? Please tell us what information is. Else your assertion that you're not aware of any instance of information that is not physically embodied is just rhetoric. And you can't think of any non-material objects that contribute to science? Is that because you can't think of any non-material objects? How about mathematics? Mung
No Kindle edition — aaarrrrgggghhhh!!!!!
"Because this book is also being made available in e-book format (e.g., for Kindles), many awkward internet links have been deliberately left in the text and reference lists." - p. xi Mung
I'm not aware of any minds that are not associated with brains, nor of any instance of information that is not physically embodied. Nor am I aware of any lasting contribution to science that involves non-material objects or causes. Petrushka
Of course not, because you think science deals with only the material. That is why you continue to show confusion over, for example, the concept of information, which is non-material. I'll take your statement and raise you one: Virtually everything of lasting contribution in applied science and engineering has arisen from mind and information, not from purely material and mechanical causes. Eric Anderson
You would discount the contributions of the "mind" to science? While certainly manifest in physical media (brain matter), it isn't settled that the "mind" is itself material. ciphertext
GCUGreyArea: Some clarifications. So if I wanted to test the hypothesis that geographical isolation can lead to speciation (in the sense of one population diverging into two that subsequently lose the ability to interbreed) then I must not specify an environment in which geographical isolation is possible? What I am discussing here is exclusively the model according to which RV + NS in a reproducing population can create new FSCI. If a correct modeling of that is made, it will be clear that it can't. Anyway, if you want to request generic formal properties for the environment (such as sufficient complexity that allows for sub-environments, or anything else that is in no way related to the replicators and the experiment), that can be passed as a request to the people who model the environment in an explicit way, so that those requirements can always be analyzed to verify that they are not adding information related to specific functions. Anyway, the programming of the environment must remain blind to the replicators, except for generic formal properties. I’m a little confused when you use the phrase ‘Natural informational environment’ – what does this mean? I mean: we can implement a system based on the concepts of RV and NS in a computer environment. But the computer environment must be a natural computer environment, not software programmed explicitly to demonstrate our point. Otherwise, we will program the environment to demonstrate our point, and the whole work will be biased. The point is: if RV and NS can generate dFSCI in a biological environment, they can do the same in electronic replicators in a computer environment, such as computer viruses in an operating system, provided we give enough reproductive resources and enough variation, because the logical and informational problems are similar. 
IOWs, if mere variation and NS based on intrinsic reproductive ability can generate dFSCI, a computer virus is a good model for the principle: it replicates, it can be subjected to RV, even at different rates, and it can develop functional code that can give advantages in the computer environment where it reproduces. So, an operating system like Windows would be a neutral computer environment. But it can be any other system, provided it is not built to test NS. Avida, and all other similar software, are systems built with a specific purpose, and they include a lot of added information in terms of choices, functions, and so on. They are not "blind environments". It sounds like you are saying that the environment must be picked at random, or maybe that it should be found … No, it just means that it is not prepared ad hoc for the experiment. You can choose an existing operating system, or somebody can design a specific one, but in that case the programming of the system must be blind, and must satisfy only generic formal properties that have nothing to do with the specific replicators that will be used, and with the hypothesis that is to be demonstrated. This is a little tricky when it comes to constructing a model because models are only valid when they capture observed things. The only observed things that have to be captured are those intrinsic to the basic hypothesis: that replicators, in an environment where they can replicate, complex enough and various enough, and subjected to RV at appropriate rates, can and will develop new dFSCI to better exploit the resources in the environment. Nothing else is needed. Fitness functions in GA’s used for biological hypothesis testing are created as models of the result of an organism existing in an environment. Exactly. That's why they are a different thing from NS. They are design. That's why a true model for NS must not include any fitness function. Fitness is a naturally occurring property. 
Fitness functions are a designed entity. A virus that replicates in a system is fit; there is no need for a fitness function to tell us that. If another virus is more fit, it will replicate more. Nobody has to measure anything for that to happen. That's why NS is called natural. Your phrase “the environment has no direct information about the functional result” is also a little odd – how do you formally classify direct as opposed to indirect? If the environment was not created for the replicators and the experiment, it cannot have "direct" information. Direct information is intentional, and it comes from conscious agents who have information and understanding of what they are doing. I was probably not clear in my terms here, so I will try to make my concepts clearer: let's say that no intentional information must be present in the environment, either in direct ways (such as in the Weasel model, where the result is already incorporated explicitly in the system) or in indirect ways (for instance, by ad hoc choices of parameters, or of the type of system in connection to the type of replicators). The information in the system can certainly exist in non-intentional forms: it is clear that each system has its rules and properties, but if they are unrelated to the replicators and to the experiment, they are no longer a bias, but only random constraints and conditions. It is the duty of the replicators/RV part to exploit those conditions, if that is possible. One way to read it is that modelling reproductive success is not a valid way of modelling reproductive success? That's the point. We are not modeling reproductive success. We are modeling RV and NS to see if they can generate reproductive success and complex new functions as a result of it. I will be clearer. We already know that a selectable trait, that is made to expand, will behave in a certain way. We are not interested in that. We know that selection works. The problem is: does natural selection work? 
IOWs, is reproductive advantage a strong enough mechanism to generate complex functions through RV and NS? To determine that, we must implement RV in some form of blind environment in some form of replicators, observe any reproductive advantage that happens as an output, and analyze whether that reproductive advantage implied the creation of new dFSCI. Is that a simulation? In a sense it is, because we are implementing in a computer system a mechanism that should be working in a biological system. But we are not trying in any way to simulate the biological environment (that would really be impossible). We are making an experiment to verify if a proposed logical algorithm can work. If it does not work in a computer system, even if we try different conditions (always respecting the basic properties I have outlined), there is no reason to think it will work in a biological system. This appears to be an argument that including any form of selection is invalid as a way of modelling selection … I must have misunderstood. No, it was me who again was not clear enough. I rewrite the sentence in a more complete way: "b) The replicators can be programmed any way that is considered appropriate, and the rules of random variations in them can be set as appropriate. Then, exclusively as a consequence of random variation and of spontaneous reproductive advantage in the system, the replicators must develop new complex functional information." The point is simple, after all. No form of selection must be included in the system. If RV generates new function that gives reproductive advantage, differential reproduction will just happen. No artificial form of selection must be included, because we are investigating natural selection. Surely then if the mere act of creating the model, and the fact that models have to be intelligently created, means any model of selection is actually intelligent selection, then it is impossible to model natural selection? No. 
We can certainly create a model which is blinded in some of its parts, to give the desired result. Blinded experiments are the rule in empirical sciences, exactly because cognitive bias is a terrible enemy. The model can be intelligently created to guarantee that any form of selection in it is natural, and not intelligent, selection. gpuccio
I'm not aware of any lasting contribution to science that requires a non-material cause. Certainly individual scientists have held a wide variety of religious beliefs, including beliefs in a global flood and in special creation. Petrushka
I think your argument is really against the philosophies of science being employed by the three (though you conflate two of the three) various theories. A purely "material" philosophy of science versus a philosophy of science which allows for non-material explanation (as was popular up to late 19th or early 20th century I believe). ciphertext
Natural selection is a result, an after-the-fact assessment. If you have differential reproduction due to heritable random variation, then you have natural selection. A mouse that has more offspring than other mice still gave birth to mice, and those mice will give rise to other MICE. OTOH, artificial selection is a real selection process and can do what NS cannot. However, NS can undo what AS has done. That is about all NS can do. Joseph
So if I wanted to test the hypothesis that geographical isolation can lead to speciation (in the sense of one population diverging into two that subsequently lose the ability to interbreed), then I must not specify an environment in which geographical isolation is possible? I'm a little confused when you use the phrase 'natural informational environment' - what does this mean? It sounds like you are saying that the environment must be picked at random, or maybe that it should be found ... This is a little tricky when it comes to constructing a model, because models are only valid when they capture observed things. Fitness functions in GAs used for biological hypothesis testing are created as models of the result of an organism existing in an environment. Your phrase "the environment has no direct information about the functional result" is also a little odd - how do you formally classify direct as opposed to indirect? One way to read it is that modelling reproductive success is not a valid way of modelling reproductive success?
b) The replicators can be programmed any way that is considered appropriate, and the rules of random variation in them can be set as appropriate. Then, exclusively as a consequence of random variation, the replicators must develop new complex functional information.
This appears to be an argument that including any form of selection is invalid as a way of modelling selection ... I must have misunderstood.
we must model NS, not intelligent selection.
Surely then if the mere act of creating the model, and the fact that models have to be intelligently created, means any model of selection is actually intelligent selection, then it is impossible to model natural selection? GCUGreyArea
Petrushka, What if it's not poof but something we have a genuine inability to grasp (uncomputability)?! E.g. mathematics is humble enough to recognise there are an infinite number of mathematical truths that cannot be proven. I think it is time biologists did the same, science will only benefit from it. Besides, I would not put my money on alchemy investigating something that did not exist. Eugene S
And by that "logic", archaeology is a scribe of the gaps argument and forensic science is a criminal of the gaps argument. Joseph
DrREC:
This falsifies the bulk of the claims of ID about information processes, specifically that language (abstract and arbitrary) and programming could never emerge from randomized inputs plus selection.
You are still confused, as ID does not state that. Not only that, all of the examples you posted were DESIGNED. Joseph
Again, there isn't any selecting going on. What part of that don't you understand? Differential reproduction due to heritable random variation occurs, but calling it "selection" is misleading, as Will Provine said in The Origin of Theoretical Population Genetics (University of Chicago Press, 1971; reissued 2001):
Natural selection does not act on anything, nor does it select (for or against), force, maximize, create, modify, shape, operate, drive, favor, maintain, push, or adjust. Natural selection does nothing….Having natural selection select is nifty because it excuses the necessity of talking about the actual causation of natural selection. Such talk was excusable for Charles Darwin, but inexcusable for evolutionists now. Creationists have discovered our empty “natural selection” language, and the “actions” of natural selection make huge, vulnerable targets. (pp. 199-200)
Joseph
ID always has been and always will be a god-of-the-gaps argument. What else is new? Petrushka
a) The environment is not set up to produce the functions necessary for reproduction.
In natural selection, fitness is simply a label applied to the observed fact that some replicators have more offspring than others. This is a statistical outcome that can be modeled by statistical methods.
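The claim that fitness is an after-the-fact statistical label can be sketched in a few lines of Python. This is a toy model of my own, not anything proposed in the thread; the replicator types, counts, and replication probabilities are all illustrative assumptions:

```python
import random

# Toy sketch: "fitness" as a post-hoc statistical label. Two replicator
# types differ only in an intrinsic replication probability; no fitness
# function is ever evaluated -- we simply count offspring afterwards.
def simulate(generations=20, seed=1):
    rng = random.Random(seed)
    pop = {"A": 50, "B": 50}            # initial counts (illustrative)
    repl_prob = {"A": 0.6, "B": 0.5}    # intrinsic, never "measured" by the model
    for _ in range(generations):
        new_pop = {}
        for kind, count in pop.items():
            # each individual leaves 1 or 2 descendants at random
            new_pop[kind] = sum(1 + (rng.random() < repl_prob[kind])
                                for _ in range(count))
        total = sum(new_pop.values())
        if total > 1000:                # resource cap: proportional thinning
            new_pop = {k: round(v * 1000 / total) for k, v in new_pop.items()}
        pop = new_pop
    return pop

final = simulate()
# Only after the run can we label A the "fitter" type, by comparing counts.
print(final)
```

Nothing in the loop "selects" anything; differential representation simply accumulates, which is the statistical outcome described above.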
the environment must not include any explicit purposeful measurement of anything. IOWs, it must not include any explicit “fitness function”.
Why not? A fitness function is simply a way of modelling the history of allele dissemination in a population over generations. It is simply a way of modeling the fact that in nature, alleles are differentially reproduced. Your distinction between natural selection and directed evolution is just silly. A man-made process like directed evolution is just an industrialized natural process. It can certainly produce specified results faster, but the results of natural evolution are not specified in advance. I think that's the place where ID and evolution part company. Petrushka
DrRec, But the rules of the game were programmed in, in any human-designed GA. What you evolutionists need to demonstrate is how the rules of the game themselves emerge, not the trivial resultant locally optimum characteristics. Remember, we are not talking about optimisation at all (which would be trivial). We are talking about functional/non-functional states. That is (with reservations) in essence discrete. Before something can be naturally selected, the something must be functional! And now about the above reservations. OK, I know of exaptation, but it has become so heavily used by evolutionists disregarding its obvious (and well-known) limitations that exaptation has turned into an "exaptation of the gaps" argument of the sort, 'I can use a broken mousetrap as a catapult'. Eugene S
To ID proponents: Evolutionary theory provides a robust framework for fruitful scientific research. I argue that Creationism, including "intelligent design," has thus far failed to provide an analogously successful framework for fruitful scientific research. Here's a citation from the professional scientific literature. Note that it's freely available online: Zhe Wang, Lihong Yuan, Stephen J. Rossiter, Xueguo Zuo, Binghua Ru, Hui Zhong, Naijian Han, Gareth Jones, Paul D. Jepson and Shuyi Zhang. 2009. Adaptive Evolution of 5′HoxD Genes in the Origin and Diversification of the Cetacean Flipper. Mol Biol Evol 26(3): 613-622 (http://mbe.oxfordjournals.org/content/26/3/613.full). The paper essentially asks this question: how did evolutionary processes result in the flippers of whales, specifically the skeletal elements? Based upon existing research into genetic causes of extra digits and digit fusion in human hands and mouse forelimbs, the researchers proposed specific genetic mechanisms, coupled with natural selection, for the observed whale flipper skeletal structure. While much of the language is highly technical, the paper's content is relatively straightforward once we understand the technical language. Additionally, there are straightforward references to natural selection, evolution, and other terms clearly identifying this research as an application of evolutionary theory. This contrasts with intelligent design, where the scientific research allegedly supporting Creationism, including intelligent design, does not clearly place that research within the Creationist/ID framework of ideas; that is, when the research isn't "pubjacked" in the first place. Evaluating such claims means checking definitions for technical terms; reading abstracts of the referenced material; looking up genetic sequences in the PubMed databases; checking the sequences for the claimed results; and so forth. 
Creationism, including intelligent design, apparently doesn't even attempt this level of scientific research. IMO there's a simple reason for this: the allegedly scientific debate about Creationism/ID isn't about science at all, but rather concerns science education in the United States public school system. Creation activists, including vocal ID proponents, spend considerable effort lobbying state legislatures and school boards to their views, with most other effort spent toward trying to convince the general public of their ideas, and effectively no effort to convince the scientific community through original research, such as Wang et al (2009). wateron1
But let’s go to the real problem. You are assuming that the sequences of functional proteins are gradually selectable. They are not. It is very simple. A functional sequence is not deconstructable into simpler steps, each of them naturally selectable. Therefore, you are assuming that the information in a protein sequence can be gradually selected, but it cannot, because it is not selectable until it is complete. In a sense, this is the most fundamental form of irreducible complexity.
It's good to see everyone making a stand, one that can be tested. From my point of view, such a case would also bar design, except by an omniscient designer. But I will put my money on researchers like Thornton, who are actually investigating the evolutionary history of sequences and attempting to reconstruct it. That seems more productive than simply asserting poof. Petrushka
GCUGreyArea: Just out of curiosity, can you describe how to model natural selection properly? Yes. I have discussed that many times. The essential point is that the formal properties that define NS are: a) The environment is not set up to produce the functions necessary for reproduction. IOWs, the environment has no direct information about the functional result (obviously, the environment can include indirect information, because it provides conditions and constraints). b) Any new information can be selected only if it provides a reproductive advantage in the existing replicators. Therefore, the only way to model NS, for instance in an informational environment, is the following: a) The environment must be a natural informational environment. IOWs, it must not be specifically programmed for the experiment. That is the only way to be sure that no added information is directly or indirectly input into the environment by the researchers. IOWs, the definition of the environment must be "blind" to the replicators and to the purposes of the experiment. b) The replicators can be programmed any way that is considered appropriate, and the rules of random variation in them can be set as appropriate. Then, exclusively as a consequence of random variation, the replicators must develop new complex functional information. Important points: the environment must not include any explicit purposeful measurement of anything. IOWs, it must not include any explicit "fitness function". Otherwise, we are modeling intelligent selection. For the same reason, no explicit reward must be present for any specific recognized property. The only reward must be the natural reward deriving from the higher replicative ability of the replicators in the given, blind environment. If desired, the environment can change in time, to offer diversified contexts to NS. 
But it is essential that any variation in the environment must be "blind" to the replicators and the experiment, for the same reason as above (to avoid the introduction of added information). The point is always the same: we must model NS, not intelligent selection. We know well that IS can produce results: it all depends on how much intelligent information is added, directly or indirectly, to the system. gpuccio
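The two requirements stated above (a "blind" environment fixed before the replicators exist, and no explicit fitness function or reward) can be approximated in an artificial-life style toy. The sketch below is my own construction under those assumptions, not a model anyone in the thread proposed; every name and parameter is illustrative, and whether its genome-to-energy coupling counts as "indirect information" is exactly the kind of point disputed here:

```python
import random

# Toy artificial-life sketch: no code ranks, scores, or rewards individuals.
# A replicator harvests energy where its bits happen to match a random
# environment string fixed before any replicator exists; it copies itself
# (with mutation) once enough energy accumulates; overcrowding kills at random.
ENV_LEN, THRESHOLD, CAP = 32, 16, 100   # illustrative parameters

def run(steps=300, seed=3):
    rng = random.Random(seed)
    env = [rng.randint(0, 1) for _ in range(ENV_LEN)]   # "blind" environment
    pop = [[rng.randint(0, 1) for _ in range(ENV_LEN)] for _ in range(20)]
    energy = [0] * len(pop)
    for _ in range(steps):
        for i, genome in enumerate(pop):
            pos = rng.randrange(ENV_LEN)                # forage at a random site
            if genome[pos] == env[pos]:
                energy[i] += 1
        for i in range(len(pop)):
            if energy[i] >= THRESHOLD:                  # replicate, paying a cost
                energy[i] = 0
                child = [b ^ (rng.random() < 0.02) for b in pop[i]]
                pop.append(child)
                energy.append(0)
        while len(pop) > CAP:                           # random death, no ranking
            j = rng.randrange(len(pop))
            pop.pop(j)
            energy.pop(j)
    # report the mean number of environment-matching bits per genome
    return sum(sum(g[k] == env[k] for k in range(ENV_LEN)) for g in pop) / len(pop)
```

Better-matching genomes replicate sooner and therefore tend to spread, even though selection is never implemented as a step; how far the mean match rises, and whether such a setup truly adds no information, is left to the reader.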
GPuccio, "Therefore, you are assuming that the information in a protein sequence can be gradually selected, but it cannot, because it is not selectable until it is complete. In a sense, this is the most fundamental form of irreducible complexity." Could not agree more. In fact, it is a well-established mathematical concept stemming from algorithmic information theory. It has analogies elsewhere (so-called maximal sets in set theory, resonance in physics, &c). Evolutionists are playing unfairly here (obviously, those who are well informed). Eugene S
First of all, GAs are not a model of NS.
Just out of curiosity, can you describe how to model natural selection properly? GCUGreyArea
DrREC: I think you already know what the problem is, but I will sum it up just the same. First of all, GAs are not a model of NS. That is a very simple point that no darwinist seems to grasp. But it is true. GAs are an example of design. But I don't really want to discuss that again (I have done that many times, in detail, and I believe there is no hope of having my point even considered). But let's go to the real problem. You are assuming that the sequences of functional proteins are gradually selectable. They are not. It is very simple. A functional sequence is not deconstructable into simpler steps, each of them naturally selectable. Therefore, you are assuming that the information in a protein sequence can be gradually selected, but it cannot, because it is not selectable until it is complete. In a sense, this is the most fundamental form of irreducible complexity. So, NS cannot help in the building of basic protein sequences (I am speaking of fundamental protein domains here, as you should know). Therefore, "precursors" to the functional proteins are not part of what you call the "genetic diversity", because they are not functional. Or, if they are, they are randomly represented and never selected, and we all know how basic probabilistic considerations make it impossible to find functional proteins that way. The same, obviously, applies to "random pools of RNA" and to abiogenesis. Do you really believe those kinds of things? Well, I suppose everyone is free to choose his own faith. gpuccio
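The "basic probabilistic considerations" appealed to above usually come down to the sheer size of sequence space. As a purely illustrative calculation (the 100-residue length is my own example, not a figure from the thread):

```python
import math

# Sequence space of a 100-residue protein over the 20-amino-acid alphabet:
# 20**100 sequences, i.e. about 10**130.
log10_space = 100 * math.log10(20)
print(f"20^100 is about 10^{log10_space:.0f}")  # 20^100 is about 10^130
```

Whether any given fraction of that space is functional, and whether selectable intermediates exist, is the actual point in dispute; the arithmetic only sets the scale.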
But the inputs were randomized. This falsifies the bulk of the claims of ID about information processes, specifically that language (abstract and arbitrary) and programming could never emerge from randomized inputs plus selection. A goal wasn't programmed in. DrREC
Selection is fairly easy to demonstrate in the lab, or quantify in the wild. Are you disputing selection (differential survival/reproduction) can occur? DrREC
What "selection"? Natural selection isn't selection at all. Joseph
Of course those things can happen- they were DESIGNED to happen. They sure as heck didn't happen just because. Joseph
"It has no information about the functional sequences of proteins." Right. Neither do some types of artificial selection: a genetic algorithm applies a fitness test, not a matching to an expected outcome. "It can only select a functional sequence if and when it is already there." Right again. We observe genetic diversity in nature, so what's the problem? Applied to abiogenesis, random pools of RNA can be made on minerals. "But it has no way to reach that sequence, or to favour its appearance." Wrong: the mechanism is the differential survival of specimens bearing increasingly "fit" sequences. DrREC
DrREC: Why is selection an insufficient source? Natural selection is not a source at all. It has no information about the functional sequences of proteins. It can only select a functional sequence if and when it is already there. But it has no way to reach that sequence, or to favour its appearance. gpuccio
“Shannon uncertainty cannot progress to becoming [Functional Information] without smuggling in positive information from an external source.” Why is selection an insufficient source? DrREC
Donald E. Johnson seems to be abusing the concept of Shannon Channel Capacity. Applying a telecommunications calculation for the amount of information that can be transferred in a specified bandwidth in the presence of noise to evolution seems really tenuous. Logically, the first code only has to be as complex as the modern code if it contains ALL the information of the modern code-all codons, amino acids, and specificity. Using genetic algorithms, the evolution of genetic codes from simpler or random precursors has been demonstrated: http://www.biomedcentral.com/1471-2105/12/56#B26 More globally, languages, programs, etc. can be evolved. One really needs to consider what has been empirically demonstrated before taking information theory too far. "A Genetic Algorithm is described that can evolve complete programs. Using a variable length, linear, binary genome to govern the mapping of a Backus Naur Form grammar definition to a program, expressions and programs of arbitrary complexity may be evolved." http://sclab.yonsei.ac.kr/courses/09EC/papers/eurogp98.pdf "We analyze a general model of multi-agent communication in which all agents communicate simultaneously to a message board. A genetic algorithm is used to evolve multi-agent languages for the predator agents in a version of the predator-prey pursuit problem. We show that the resulting behavior of the communicating multi-agent system is equivalent to that of a Mealy finite state machine whose states are determined by the agents' usage of the evolved language. Simulations show that the evolution of a communication language improves the performance of the predators." http://www.mitpressjournals.org/doi/abs/10.1162/106454600568861 These things shouldn't happen, right? DrREC
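For readers unfamiliar with the telecommunications calculation being referred to, this is the Shannon-Hartley channel capacity. The numbers below are a standard textbook illustration, not anything from Johnson's book:

```python
import math

# Shannon-Hartley: the maximum error-free bit rate C over a channel of
# bandwidth B hertz with signal-to-noise power ratio S/N.
def channel_capacity(bandwidth_hz, snr):
    """C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr)

# A 3 kHz telephone line with an SNR of 1000 (30 dB):
print(round(channel_capacity(3000.0, 1000.0)))  # 29902 (bits per second)
```

Whether this bandwidth-and-noise quantity transfers meaningfully to genetic sequences is precisely what is disputed in this sub-thread.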
Starbuck, it is NOT gpuccio who is failing to grasp the true, utterly trivial, implications of the paper! The extrapolation you have made is completely unwarranted! bornagain77
It is rather a “rescue” experiment. They start from an existing functional domain. They keep 29 conserved AAs which are essential to function. They randomly mutate the other 28 positions to form three sets of “randomized SH3 libraries”, with three different sets of amino acids. As a consequence of those random mutations, in almost all cases the function is lost. Finally, they look for the rare functional sequences in those libraries, that is, those sequences where the random mutations in the 28 substituted positions are compatible with the original function. That is the structure of the experiment. They don’t “obtain a functional protein”. They have a functional protein at the start, and in rare cases the random mutations are compatible with the original function.
Your grasp of this paper is still very weak. But there are some "moments of clarity" that tell me you get the gist of the paper, while not really sharing the authors' enthusiasm:
modern proteins might be able to be simplified by a set of putative primitive amino acids more easily than by a set of putative new amino acids.
This work is, of course, not an ideal setting to illustrate the idea, but it is definitely in the right direction. It is enough to observe RESIDUAL functional activity after the replacement of new amino acids by the old ones. After all, evolution has been quite a struggle, and it is hard to guess which of the new residues can be replaced without severe damage to the extant function. Look at other papers along similar lines. Starbuck
”I gave an example of a landscape which can be traversed incrementally. The travelling salesman problem. It requires no oracle, simply a comparison among current population members as to which is the shortest route.”
With the TSP, optimality would need to be computed for each potential solution -- with better ones being adopted and worse ones being disregarded. This requires a fitness assessment. So whether you call it an oracle or a comparison, fitness needs to be determined for each generated solution, and a selection needs to be made by comparing the optimality of the current solution with the optimality of the proposed one. material.infantacy
”It depends on the landscape. For many landscapes, such as the travelling salesman problem, genetic algorithms are the only known method for traversing toward a good solution.”
This is incorrect, I believe. I don’t see how GAs would outperform many heuristic and approximation algorithms, nor the ones listed here.
”Such a landscape has no known shortcuts, no formulaic method of solution. It’s incremental steps or random search.”
There are optimal (exact) solution algorithms to the TSP (see above link). The claim that GAs would reliably outperform those listed just doesn’t seem reasonable. Since the TSP time complexity of brute force permutation traversal runs in O(n!), I’m not even sure you could outperform that with a GA, since you would have no way of verifying the optimal solution without traversing the n! permutations. If there is a GA which reliably outperforms other suboptimal (good) solutions such as heuristic methods, approximation, etc., with lower complexity/specificity, I’d be interested in seeing it. material.infantacy
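The contrast drawn above between exact and heuristic TSP methods can be made concrete. The sketch below (city coordinates are my own toy data) runs an O(n!) brute-force search next to the classic nearest-neighbour heuristic; the heuristic is fast, but it can never beat the exact optimum:

```python
import math
from itertools import permutations

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(cities, order):
    # cyclic tour: return to the starting city at the end
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force(cities):
    # exact O(n!) search; fixing city 0 first is harmless for a cyclic tour
    n = len(cities)
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length(cities, (0,) + p))
    return (0,) + best

def nearest_neighbour(cities):
    # greedy heuristic: always visit the closest unvisited city next
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

cities = [(0, 0), (1, 0), (2, 1), (0, 2), (3, 3), (1, 3)]  # illustrative data
exact = tour_length(cities, brute_force(cities))
approx = tour_length(cities, nearest_neighbour(cities))
print(approx >= exact)  # True: a heuristic tour is never shorter than the optimum
```

At six cities both finish instantly; the point is the asymmetry in guarantees, not the speed at this size.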
Starbuck: gpuccio I never said anything about obtaining a protein from scratch, that doesn’t even make sense. How could you possibly interpret anything I said in this thread in that manner? The bottom line is that the experiment shows that one can obtain a functional protein faster from a limited set of amino acids, from a randomized domain, than one could from a randomized domain with the full set of amino acids. Again, do you have a response to this? You have yet to do so. Axe randomized a large reading frame and the function was lost, so I would think this is an important result. I’m off tomorrow but will be back Monday to see if you actually are able to confront the argument in the paper which the authors themselves say supports the hypothesis of a gradual evolution of the genetic code. Well, I will try to respond in detail. A complete analysis would be quite long, so I will try to state the main points, and then if you want we can go into further details. First of all, I am not interested in what your intentions were when you stated what you stated. I am interested only in clarity. So, I will just restate a fundamental point that is very important, and that you seem not to grasp correctly: the paper is not a paper where new proteins are obtained from random libraries. That must be very clear. It is rather a “rescue” experiment. They start from an existing functional domain. They keep 29 conserved AAs which are essential to function. They randomly mutate the other 28 positions to form three sets of “randomized SH3 libraries”, with three different sets of amino acids. As a consequence of those random mutations, in almost all cases the function is lost. Finally, they look for the rare functional sequences in those libraries, that is, those sequences where the random mutations in the 28 substituted positions are compatible with the original function. That is the structure of the experiment. They don’t “obtain a functional protein”. 
They have a functional protein at the start, and in rare cases the random mutations are compatible with the original function. That must be very clear. Now, let’s go to the analysis of the paper. I will state the most obvious flaws, synthetically: 1) The premise. The paper elaborates on a model, never well defined, according to which some more primitive AAs would have appeared before others in the genetic code. But it is not really clear what the purpose of the paper is. I quote from the introduction: “Previously, Babajide et al. demonstrated in silico that native-like folded structures of several tested proteins are maintained with a restricted alphabet mainly containing primitive amino acids (Ala, Gly, Leu and Asp) but were not maintained with a set of nonprimitive amino acids (Gln, Leu and Arg) [15]. To test this hypothesis experimentally, we sought to compare the function and structure of tested proteins with different subsets of amino acids for the first time.” Now, it is not clear at all what the hypothesis is that should be “tested experimentally”. For simplicity, I will assume that the hypothesis is to compare the behavior of primitive versus non-primitive AAs in maintaining folded structures. But it seems that that has already been demonstrated, so I don’t fully understand the premise of the paper. Moreover, one thing is to compare a set of primitive AAs with a set of non-primitive ones; another thing is to compare a set of primitive AAs with the whole set of 20 AAs, which strangely becomes at some point the issue in the paper. What is the premise for that? Is the hypothesis that an alphabet of only primitive AAs is more efficient than the present, complete alphabet? And why? I believe that the model is that the new AAs appeared later for evolutionary reasons. So, the “evolved” alphabet should be more efficient than the primitive one. So, I believe that the whole premise of the paper is confused, badly expressed, misleading, and vague. 
But let’s go on, and let’s see the procedure and the results. 2) The procedure. Here we find the most serious biases. The first, and main, bias I have already described. They do not work with a whole protein domain. They work with the least functional subset of a small protein domain. Now, why that? If the problem is to understand whether primitive AAs are better at maintaining function (folding, active site), then the natural choice is to experiment with a whole sequence, or at least with the most functional part of it. It’s the most functional, conserved subset of positions that has the biggest part of the functional information. The least functional subset is by definition less restricted. What we want to know is whether the primitive AAs fare better in maintaining function. Moreover, this bizarre choice has serious consequences for the meaning of the paper. The mutated sequence is made only of 28 positions. That is scarcely a representative part of the proteome. Even the whole domain, obviously, would not be representative of it, but at least it would contain a functional domain, while that cannot be said of the sequence of 28 AAs which have no relevant part in the formation of the domain function. I will come back to that later. Another important bias is that the two subsets used for mutations are not made respectively of primitive and non-primitive AAs. They are defined by the first nucleotide. One set is made of 12 AAs, some of them “primitive”, and the second is made of 10 AAs, some of them “non-primitive” (there is an overlap of two AAs between the two groups). “Primitive” AAs are more represented in the first group, but that’s all. So, using two scarcely separated sets that do not really correspond to primitive and non-primitive AAs as though they were representative of the behaviour of primitive and non-primitive AAs is a serious cause of possible bias. 
IOWs, in each group there are both primitive and non-primitive AAs (for instance, Met and Lys in the RNN subset, and Ser and Pro in the YNN subset). The behavior of the two subsets can in no way be equated to the behavior of primitive and non-primitive AAs. All that can easily be seen in fig. 1 of the paper. But there is more. I have checked the two subsets of the sequence, let’s call them the “conserved” and the “randomized”, the first made of 29 AAs and the second of 28. And their composition in the wildtype, in terms of RNN and YNN AAs, is completely different. Here are the details: The “conserved” sequence of 29 AAs is made of 15 RNN AAs and 11 YNN. The ratio of YNN AAs (the “non-primitive”) to RNN AAs is therefore 73.33% (I am excluding from the count the two AAs that are present in both groups). The “randomized” sequence of 28 AAs is made (in the wildtype) of 16 RNN AAs and 6 YNN. The ratio of YNN to RNN AAs is therefore 37.50%. IOWs, the “non-primitive” AAs are less represented in the initial sequence of the wildtype that will be randomized later, while they are represented twice as much in the “conserved” sequence. That, alone, can well explain why the RNN subset works better in the randomization: those AAs are already prevalent in the original non-conserved sequence. But not in the conserved sequence. It is perfectly obvious that the YNN library cannot easily “reconstitute” a sequence that is made, for two thirds, of AAs from the RNN subset. 3) The results: Well, the main result in the paper is that the YNN library fails while the RNN library succeeds. I have already shown why that is rather obvious. And even if there were not the bias in the initial sequence, that simple result would only show that in some way the RNN subset is more “basic” for protein structure than the YNN subset. As the two subsets do not correspond to “primitive” and “non-primitive”, it is not possible to extend that simple observation to the concept of “primitivity”. 
And yet the authors boldly state that: “Our result experimentally supports the Babajide’s hypothesis [15], for the first time, that modern proteins might be able to be simplified by a set of putative primitive amino acids more easily than by a set of putative new amino acids.” But that is simply not true. Then the authors present another result: “Further, interestingly, the functional SH3 sequences were enriched from the SH3(RNN)28 library slightly earlier than from the SH3(NNN)28 library, while the function and structure of selected SH3(RNN)28 proteins with the primitive alphabet were comparable with those of SH3 domains with the 20 alphabet. The results imply that the protein sequence variety with a limited set of primitive amino acids includes a larger number of functional sequences than that with the current 20 amino acid alphabet.” First of all, that is a very small effect. And we have already observed that it can easily be explained by the unbalanced representation of the two subsets of AAs in the original wildtype sequence. And anyway, the meaning of that finding remains of small relevance and obscure interpretation. The authors, obviously, are very cautious in discussing it. But you are not cautious at all. You wrote that I should “confront the argument in the paper which the authors themselves say supports the hypothesis of a gradual evolution of the genetic code”. I have not found such a statement in the discussion (if I have missed it, please let me know). So, I will discuss it as “your statement”. And it is really an unwarranted statement (I am being very generous here). 
So you are saying that a small effect on the number of functional sequences in this context, observed for a small, scarcely functional sequence of 28 positions of a small protein domain, a sequence that is moreover unbalanced from the beginning with respect to the sets of AAs tested, and regarding two sets of AAs that do not really correspond to primitive and non-primitive AAs but are only differently rich in them, can be generalized to the whole proteome, and that it supports very fundamental choices in general conceptual models that are in no way proven by anything else? Is that what you are saying? And from that you also derive support for “the gradual evolution of the genetic code”? For heaven’s sake, I have never seen such a giant epistemological leap in all my scientific career! So, let’s sum up: the paper is mostly irrelevant, and somewhat biased in its premises, procedure, and epistemological analysis. And, obviously, in no way does it “support the hypothesis of a gradual evolution of the genetic code”. Ah, I forgot your strange statement: “Axe randomized a large reading frame and the function was lost, so I would think this is an important result” But what do you mean here? In this experiment, the function is lost too. It is maintained in rare sequences only because the 29 most important positions were kept. That is so obvious that even a child would understand it. gpuccio
Petrushka, You have made this bizarre argument dozens of times.
The claim was made that a designer steers change based on utility. That implies a near-omniscient knowledge of both the biochemical consequences of each and every sequence, and the utility of each possible sequence. If you have an alternative way of knowing how to steer change toward utility, feel free to provide an example.
That is not how anyone designs anything, ever. You somehow typed your post without an awareness of every imaginable configuration of characters or words. I write software and never take into account the gazillions of potential arrangements of commands, variables, objects, classes, and methods. Intelligence is exactly what negates that need. You're navigating from a leap of faith to a bad conclusion. First you assume that biological configurations were achieved by an algorithmic search. (This is baseless.) With that assumption as a basis, you assert that an intelligent agent could only design biological configurations by that same method. Countless functional arrangements produced by intelligent reasoning start at the end and reason backwards, start with components and imagine possibilities, or work both ends and the middle at once, often in collaboration. I can write a program built from components that don't exist as someone else creates them, or even before they do. These examples are real. All of them are real. Your examples of functional results achieved by algorithms are all either imaginary, limited in functionality, or so dependent on intelligent reasoning that they aren't examples at all. One set is real, the other is imaginary. Why would we discard what we observe and base our understanding of biology or anything else on an imagination? The intelligence required to even grasp the problem, understand the components and the end result, and even partially imagine what you are partially imagining should tell you something. ScottAndrews2
I gave an example of a landscape which can be traversed incrementally: the travelling salesman problem. It requires no oracle, simply a comparison among current population members as to which is the shortest route. There is no need to know the correct answer. In fact, with a sufficiently large number of stops, it is impossible to know what the correct answer is. And yes, I assume that genomes constitute a landscape traversable by incremental steps. This is an empirical question being addressed by people like Lenski and Thornton. It's not a question that can be decided by thinking about it; you have to do the research. Variation and selection is a GA. It is the prototype from which all such computer programs are derived. Given variation and fecundity, it is automatic. It requires no programming. Petrushka
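Petrushka's travelling-salesman example can be made concrete with a toy genetic algorithm. This is a minimal illustrative sketch, not code from any cited work; the city coordinates, population size, and operators are arbitrary choices. The only "fitness" signal is a comparison of route lengths within the current population, with no oracle supplying the true optimum.

```python
# Toy genetic algorithm for the travelling salesman problem.
# Cities are random points; fitness is just relative route length.
import random
import math

random.seed(1)
CITIES = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]

def route_length(route):
    """Total closed-tour distance for a permutation of city indices."""
    return sum(math.dist(CITIES[route[i]], CITIES[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def crossover(a, b):
    """Order crossover: copy a slice from parent a, fill the rest from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = a[i:j]
    child += [c for c in b if c not in child]
    return child

def mutate(route, rate=0.2):
    """Swap two cities with probability `rate`."""
    route = route[:]
    if random.random() < rate:
        i, j = random.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]
    return route

def evolve(pop_size=60, generations=300):
    pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=route_length)           # shorter routes rank higher
        survivors = pop[:pop_size // 2]      # truncation selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=route_length)

best = evolve()
random_route = random.sample(range(len(CITIES)), len(CITIES))
```

Runs like this reliably shorten the tour well below a typical random route, though, as other commenters in this thread point out, the algorithm, its variation operators, and its fitness comparison are all supplied by a programmer.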
Starbuck As to:
supports the hypothesis of a gradual evolution of the genetic code.
And yet:
Shannon Information – Channel Capacity – Perry Marshall – video http://www.metacafe.com/watch/5457552/ “Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible” Donald E. Johnson – Bioinformatics: The Information in Life
Perhaps in your imagination you can envision no difficulty in changing a code, maybe the following will help:
Venter vs. Dawkins on the Tree of Life – and Another Dawkins Whopper – March 2011 Excerpt:,,, But first, let’s look at the reason Dawkins gives for why the code must be universal: “The reason is interesting. Any mutation in the genetic code itself (as opposed to mutations in the genes that it encodes) would have an instantly catastrophic effect, not just in one place but throughout the whole organism. If any word in the 64-word dictionary changed its meaning, so that it came to specify a different amino acid, just about every protein in the body would instantaneously change, probably in many places along its length. Unlike an ordinary mutation…this would spell disaster.” (2009, p. 409-10) OK. Keep Dawkins’ claim of universality in mind, along with his argument for why the code must be universal, and then go here (linked site listing 23 variants of the genetic code).
Now Starbuck, to realistically hold that this 'hypothesis of a gradual evolution of the genetic code' is even plausible in the 'real world', in order to overturn a principle that the entire telecommunications industry is based upon (Shannon Channel Capacity), you are going to have to present far more substantial evidence than what you have. Perhaps actually evolving a superior code from a simpler code instead of just working within the already existing code??? Then you would have my full attention rather than have me questioning your impartial objectivity. bornagain77
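For reference, the "Shannon channel capacity" invoked in the quote above is the Shannon–Hartley theorem, C = B log2(1 + S/N). The snippet below simply evaluates that standard formula with hypothetical telephone-line numbers; it illustrates the formula itself, not the disputed argument about genetic codes.

```python
# Shannon-Hartley theorem: maximum error-free bit rate of a noisy channel.
import math

def channel_capacity(bandwidth_hz, signal_power, noise_power):
    """Capacity in bits per second for an additive-white-noise channel."""
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# Hypothetical 3 kHz telephone-grade channel at a 30 dB signal-to-noise ratio.
snr_linear = 10 ** (30 / 10)          # 30 dB -> a power ratio of 1000
capacity = channel_capacity(3000, snr_linear, 1)
# capacity comes out near 29,900 bits per second, close to the classic
# analog-modem limit.
```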
F/N 2: P, kindly lose the strawman about "reducing" biochemistry to computer programs. That is outright disrespectful. The issue is that there is an ADDITIONAL, purposeful structure on top of the chemistry and thermodynamics: the creation and use of algorithm-implementing molecular machinery that USES the chemistry and physics of the atoms and energetics involved, but goes beyond them to a new domain. Foundations are important, but they are not the main focus when we build a house. Nor is the foundation a sufficient explanation for the house. It is a necessary (and expensive) undergirding support. It constrains, but it does not deliver final functionality; that comes from somewhere else. Nor do we explain the core structures of the house by a tornado passing through a hardware shop. Some very elaborate, energy-expensive procedures are involved in the molecular biology of the cell --
for both metabolic processes and the vNSR self replication facility [which last is a lot more complex than a house or a Jumbo jet . . . cf. the difference in difficulties across the past 60 years in how we can make Jumbo jets, but only simulate or sketch full bore vNSRs . . . or, latterly, get a unit that makes parts for us to assemble by hand (the cell makes parts and auto self-assembles) -- think of a self-assembling jumbo jet] --
. . . even the case of the ATP energy battery molecule that enables all of this to happen is a case in point. Something is going out of its way to do something that would not normally happen in any reasonable time, and is using this to manufacture clusters of macromolecules based on stored information, macromolecules that work together in complex, integrated ways to give us the metabolic and self-replicating behaviour of the cell. And the chicken-and-egg dilemma -- the fact that the crucial enzymes and ATP energy batteries for the process are themselves products of precisely this process -- points strongly to a system of artifices set up to give rise to systems for a purpose. That sounds a lot like an architect, one who is using the peculiar properties and abundances of the four most abundant atoms in our cosmos, which is fine tuned to deliver those atoms: think about water, the aqueous medium, and the carbon chemistry that underlie these biomolecules on the phys-chem side. That highlights Hoyle's remark on monkeying with the physics of the cosmos, and the overall pattern suggests strongly that we are seeing the architecture of a cosmos that was purposefully built to be inhabited. Multiply by the digital -- yes, digital -- code, the algorithms, and the clever use of the molecules in a key-lock-fit based design. (Remember too that the AA code turns out to be pretty much optimal, one in a million of possibilities, i.e. in the far, far skirt.) There is plain and strong evidence of design everywhere we look, and we know separately that the root of the tendency not to see it is a philosophical prejudice imported into science. So, it is time to tear off the a priori materialism straitjacket and blinkers. GEM of TKI kairosfocus
F/N: I see the strawman of Paley's watch in the field -- predictably, and in the teeth of repeated corrections across time -- has been duly set up and knocked over. I again point Petrushka to the case in Chapter II of Paley's work (on the table since 1806, but routinely passed over in the haste to suggest that mere blind reproduction and variability can work magic): the timekeeping watch with the ADDITIONAL capacity of self-replication. Paley, for good reason [cf. the linked], shows that such additionality INCREASES the reasons to infer to design of the core functional capacity. You have to put together a considerable -- indeed, challenging -- quantum of core integrated function to get a metabolising entity that has the additional capacity of symbol- and algorithm-based von Neumann self-replication [vNSR]. Which embeds considerable FSCI; i.e. we are right back at the challenge that P tried to dodge by suggesting that the magic of replication within an island of specific function (and relatively minor adaptation as a part of that) removes the challenge of arriving at the shores of such islands of specific function in vast config spaces too large for blind trial and error on the gamut of our solar system or observed cosmos. Of course, the very protein domains P tried to brush aside earlier are strong evidence of the islands of function [per Axe, 1 in 10^70 of a space is practically infinitesimal]. And the examples of the FSCI we see all around us (including in this thread), which P tried to divert from, show just how these are produced without having to store huge config spaces somewhere: the insightful knowledge, skill, imagination and purpose of intelligent agents -- the very case of the intelligent supervisory controller that P earlier thought was just an unnecessary addition to a pretty diagram, the Smith Model of MIMO cybernetic systems. 
(Of course, intelligent behaviour can be canned in an expert system type program or the like, which is of course yet another case in point of the known source of FSCI.) I repeat, GA's are intelligently designed, use very tightly controlled chance variation, depend on an intelligently developed fitness function environment for the island they move around on, and thus start within target zones and move around within same. Think about how it took engineering intervention after 5,000 years of rose breeding to get us close to a genuinely blue rose. (Still too lavender for my taste.) That hurdle for a very minor variation should tell us something about the plasticity challenge faced by variable genomes. GEM of TKI PS: MI, good job. kairosfocus
"There are landscapes for which the only good known method of traversal is via genetic algorithms. Landscapes for which there are no shortcuts via formulas."
I missed the part where you explain how "evolution", which produces variations that are blind with respect to outcome, traverses sequence spaces on fitness landscapes "for which there are no shortcuts via formulas" by way of genetic algorithms, which are known to be the product of intelligent design. There is nothing purely random about GAs -- which can potentially cull vast sequence spaces via search constraints to produce results within a specified realm of function -- an observable result of intelligent design. If you're asserting that a wide swath of integrated functions exists on a landscape that is traversable via random processes, given the known probabilistic resources of the universe, you've yet to demonstrate that such landscapes exist, or such functional sequences, except by begging the question. And if your only escape is to invoke GAs, then those would also need to be explicable in terms of variation and selection. Arriving at a mechanism which searches for a function is no less sophisticated than the function being searched for. material.infantacy
Speaking of the finitely traversable — I suppose you claim that it’s impossible for intelligence to accomplish that which “evolution” accomplishes.
There are landscapes for which the only good known method of traversal is via genetic algorithms. Landscapes for which there are no shortcuts via formulas. Douglas Axe has said the protein landscape has no shortcuts. He may or may not be correct. Feel free to describe a formula for deriving the utility of coding sequences without cut and try. Petrushka
This is, of course, an untenable position. For if evolution can find functional sequences based on trial and error, it means that the functions are discoverable by trial and error. It follows that, not only could intelligence do the same, but it could accomplish the same in far less time, as agents are not limited to random trials which are blind with respect to outcome.
It depends on the landscape. For many landscapes, such as the travelling salesman problem, genetic algorithms are the only known method for traversing toward a good solution. You might say that such a landscape is discoverable by trial and error, but you would also have to say that such a landscape is traversable incrementally. Such a landscape has no known shortcuts, no formulaic method of solution. It's incremental steps or random search, and the two are not the same. The protein landscape may or may not be traversable by incremental steps. We know that current proteins occupy islands, but that is irrelevant. What would be relevant is whether their history involves island hopping, and we do not have the history. But we can observe that regulatory and developmental networks are amenable to incremental change. Humans have been doing directed evolution for thousands of years. We also know that very little evolution since the Cambrian requires the invention of new proteins. Most of what the layman thinks of as evolution -- the divergence of vertebrates -- is "microevolution." The terms knowledge and database are a distinction without a difference: a finite being must store the knowledge in some physical medium, whether it be a brain or a computer database, and the physical limits apply to either. So yes, it is impossible for a finite being, one limited by the resources of the physical universe, to anticipate the effects of sequence changes and steer change toward utility. This comports with numerous observations of genomic variation, which support the claim that variation is blind with respect to utility. Petrushka
"The claim was made that a designer steers change based on utility. that Implies a near omniscient knowledge of both the biochemical consequences of each and every sequence, and the utility of each possible sequence."
But if a designer has near omniscient knowledge of biochemical outcomes for various functional protein sequences, why would he need a database of those same sequences? It's one or the other. Either the designer needs to store an arbitrary set of functional sequences in a database somewhere, or he can determine desired sequences with respect to function, by possessing requisite knowledge of physics and chemistry. Would you claim that it's impossible for a human civilization to ever obtain collective knowledge of physics and chemistry enough to make determinations (or even estimations) of protein functions based on the polypeptide sequences? I take your position to be roughly thus: It's impossible for a designer to design biological systems, therefore nature did it. This is, of course, an untenable position. For if evolution can find functional sequences based on trial and error, it means that the functions are discoverable by trial and error. It follows that, not only could intelligence do the same, but it could accomplish the same in far less time, as agents are not limited to random trials which are blind with respect to outcome. material.infantacy
gpuccio I never said anything about obtaining a protein from scratch; that doesn't even make sense. How could you possibly interpret anything I said in this thread in that manner? The bottom line is that the experiment shows that one can obtain a functional protein faster from a randomized domain with a limited set of amino acids than one could from a randomized domain with the full set of amino acids. Again, do you have a response to this? You have yet to give one. Axe randomized a large reading frame and the function was lost, so I would think this is an important result. I'm off tomorrow but will be back Monday to see if you actually are able to confront the argument in the paper which the authors themselves say supports the hypothesis of a gradual evolution of the genetic code. Starbuck
But you must admit that it's a far more likely scenario that protein domains were poofed into existence fully formed from the head of Zeus. Pay no attention to that drift behind the curtain. The fact that one can easily observe small changes, but no one has ever witnessed an obvious design event makes no difference. Petrushka
Does this imply that an agent would need to traverse all of “protein space” in order to determine the 10^500 functional sequences in the first place?
The claim was made that a designer steers change based on utility. That implies a near-omniscient knowledge of both the biochemical consequences of each and every sequence, and the utility of each possible sequence. If you have an alternative way of knowing how to steer change toward utility, feel free to provide an example. Petrushka
Starbuck: I don't want to nitpick your wording at all. Your original post was just one single phrase, and there is nothing to nitpick in it. It contained two wrong statements, and nothing else. I was just trying to explain what the paper really said, and to correct the wrong implications you were deriving from it. You cannot deny that, reading your phrase, anyone could think that people were obtaining proteins from scratch using only primitive amino acids. That is not true. Period. I am not ignoring the differences observed in the paper. I just say that it is very difficult to draw conclusions from them, and that the authors' conclusions, even if cautious, are scarcely warranted. The fact that the random substitutions were effected only in the least functionally relevant parts of a very short domain makes it impossible to draw general conclusions from those observations. The parts randomized are probably not involved directly in the basic folding and active site. The two sets of substitutions only in part correspond to the concept of "primitive" or "non-primitive" amino acids; they can be interpreted in many other ways. The big difference was between the two sets of partial substitutions, and it can be due to many effects, even related to single amino acids in this particular sequence (don't forget that only 28 sites were mutated). The difference between the behaviour of the "primitive" set and the "complete" set, instead, was minimal. So, the paper is interesting (and, above all, it shows clearly how rare functional sequences are, even when the strongly functional part is already fixed and the sequence is very short!). But it is certainly very "stretched" in its conclusions. And those conclusions are quite different from what your original post suggested. As you can see, I do have all the responses you want. My original point was only to correct your wrong post (which you have not admitted was wrong). 
But I have no problem in discussing all the aspects of the paper. I am happy that you are "sympathetic" with the idea that the first cells were designed. For me, I am sure of that. I am also sure that "a naturalistic explanation is impossible" (if, by "naturalistic", you mean a "non-design" explanation). And frankly, I don't understand (or I prefer not to understand) your strange reference to "hidden agendas". What hidden agendas? I have none. I have my ideas, I express them very clearly here, and I take full responsibility for them. I don't understand what you mean by "agenda", and I can't see what is "hidden". Can you please explain? gpuccio
The hypothesis is that large and complex proteins were built from the bottom up, i.e. from smaller folding units, over eons of evolutionary history. A nice study that speaks to this issue was reported by Lang et al. Science 2000, 289, 1546. The authors provide structural evidence that the (beta/alpha)8-barrel proteins, a widespread and complex fold, arose by a process of gene duplication and fusion. Also, one has to distinguish between obtaining a specific folded structure and any possible structure. Studies of random-sequence polymers indicate that folded structures are actually surprisingly common. See, for example, Davidson & Sauer Proc. Natl. Acad. Sci. USA 1994, 91, 2146. This issue is also discussed in the attached review. Starbuck
correction: no matter when one should be naive enough to proclaim ‘this is how Great God is’!; bornagain77
Well, in reality it is found that infinite knowledge (omniscience) and infinite power (omnipotence) are required to cause the collapse of a single photon from its quantum wave state to its particle state:
Quantum Theory’s ‘Wavefunction’ Found to Be Real Physical Entity https://uncommondescent.com/cosmology/%E2%80%9Cseismic%E2%80%9D-new-paper-on-quantum-mechanics/comment-page-1/#comment-409673
Considering that the quantum waves of photons are collapsing to each unique point of 'central observation' in the universe, that pretty much settles the case for me that an omniscient, omnipotent God created, and is sustaining, the universe. But even creating and sustaining the universe, as awesome as that is for us to consider, fails to capture the essence of just how Great our God truly is (as if our finite minds could ever truly grasp that infinite greatness). In fact it is impossible for us to measure God's Greatness, for His Greatness is infinitely unlimited, and thus, just when one thinks they may have a handle on just how Great God is, His greatness, of logical necessity, will infinitely exceed even that measure, no matter when one should be naive enough to proclaim 'this is how Great God is'!;
Hillsong Live - Greatness of Our God - Music Videos http://www.godtube.com/watch/?v=7GDKGLNX
bornagain77
gpuccio, you can nitpick my wording all you want, but the fact remains that the paper shows that functional proteins were obtained faster with a limited set of primitive amino acids than were obtained with a randomized library with all 20 amino acids. Why are you ignoring this important result? What is your response? Do you have none? I'm sympathetic with the idea that the first cells were designed I'm NOT sympathetic with the idea of falsely claiming that a naturalistic explanation is impossible because of hidden agendas. Starbuck
No, it's more of a canceling effect Joseph
"a hyperskeptical person might be pardoned for thinking it requires omniscience."
To whom are you referring? The claim of requiring omniscience appears to be yours. ID proponents claim that biological systems engineering requires intelligence, not omniscience. It would be absurd to require omniscience for something which is finitely traversable. Speaking of the finitely traversable -- I suppose you claim that it's impossible for intelligence to accomplish that which "evolution" accomplishes. That is, while evolution can discover functional protein sequences by trial and error, such would be impossible for an intelligent agent. Is that the case? material.infantacy
as to: a hyperskeptical person might be pardoned for thinking it requires omniscience. color me hyperskeptical! bornagain77
That should read: You are correct. Emergence is a layered phenomenon. One cannot predict the behavior of molecules from the attributes of elementary particles. One cannot predict the behavior of complex mechanisms from the attributes of molecules. Etc. Petrushka
"...but the designer must have certain minimum capabilities. among them would be a database of sequences and their attributes."
I don't want to misunderstand your claim. Are you saying that if there are 10^500 functional sequences in "protein space" that these sequences would need to be stored in a database somewhere? Does this imply that an agent would need to traverse all of "protein space" in order to determine the 10^500 functional sequences in the first place? material.infantacy
Yet Chemistry alone does not explain everything that is happening in the cell (emergent chemistry or no emergent chemistry):
You are correct. Emergence is a layered phenomenon. One cannot predict the behavior of molecules from the attributes of elementary particles. One cannot predict the behavior of complex mechanisms from the attributes of molecules. Etc. The designer has a hefty task predicting the utility of a minor coding-sequence change, considering not just biochemistry, but also the needs of cells, whole organisms, and populations competing against each other in a changing physical environment. A hyperskeptical person might be pardoned for thinking it requires omniscience. Petrushka
Kindly explain to me what this thread is made of if not instances of FSCI.
What this thread is not made of is living, reproducing organisms. Kindly explain why you speak against reductionism, but wish -- when convenient -- to reduce biochemistry to computer programs. Computer programs can easily model population genetics, the diffusion of alleles in a population, but they cannot yet model biochemistry with enough precision to predict the emergent properties of sequences and molecules. That's one of the features of emergence. Do you guys ever read the stuff that News is promoting? I can live with not knowing the identity of the designer, but the designer must have certain minimum capabilities. Among them would be a database of sequences and their attributes. Since the number of possible sequences exceeds the number of particles in the universe, I am skeptical that such a database exists. It would be a good avenue of research for an ID program to demonstrate the feasibility of anticipating the utility of a designed sequence. Petrushka
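As an aside on Petrushka's remark that computer programs can easily model the diffusion of alleles in a population: a minimal Wright-Fisher sketch of neutral drift looks like the following. The population size, starting frequency, and seed are arbitrary illustrative choices, not figures from any study discussed here.

```python
# Minimal Wright-Fisher model of neutral genetic drift for one allele.
import random

def wright_fisher(pop_size=100, freq=0.5, generations=1000, seed=42):
    """Track a neutral allele's frequency until it fixes (1.0) or is lost (0.0)."""
    rng = random.Random(seed)
    for _ in range(generations):
        if freq in (0.0, 1.0):          # absorbed: allele fixed or lost
            return freq
        # Each of the pop_size gene copies in the next generation is an
        # independent draw from the current allele frequency.
        count = sum(rng.random() < freq for _ in range(pop_size))
        freq = count / pop_size
    return freq

final_freq = wright_fisher()
```

With no selection term at all, repeated runs wander randomly and eventually fix or lose the allele, which is exactly the "blind with respect to utility" behaviour being debated in this thread.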
...how would you store the 10^500 profiles for each sequence, assuming that’s a realistic number of permutations?
Where do you store the profiles for the meaningful permutations of a 100-word paragraph given a 10,000-word vocabulary? They exist in a sequence space of (10^4)^(10^2), or 10^400. Even the fourth root of that number, 10^100 (an estimate of the meaningful permutations), far exceeds the count of elementary particles in the universe. The profiles would only need to be stored if they were arbitrary. Rather, function is determined by a knowledge of cause and effect, given a requisite knowledge of physics and chemistry. The notion that functional sequences need to be "stored" somewhere appears to come from an assumption that the sequences are arbitrary, and that the entire sequence space would need to be traversed before any knowledge of function could be acquired. material.infantacy
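material.infantacy's combinatorics can be checked directly in logarithms (the 10,000-word vocabulary and 100-word paragraph are the comment's own illustrative figures):

```python
# Size of the sequence space for a 100-word paragraph drawn from a
# 10,000-word vocabulary, computed in log10 to avoid huge integers.
import math

vocabulary = 10_000     # words available
length = 100            # words per paragraph
log10_space = length * math.log10(vocabulary)   # log10 of 10000 ** 100
# log10_space is 400: the space holds 10^400 orderings, and even its
# fourth root, 10^100, dwarfs the roughly 10^80 particles usually
# estimated for the observable universe.
```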
Petrushka: Kindly explain to me what this thread is made of if not instances of FSCI. Tell me what is responsible for it. Then tell me something else that has ever been observed originating at least 500 bits of FSCI without the involvement of a designer, and/or incremental minor variations of an already set up functioning complex design. Then, tell me why I should not conclude that you are in denial in the teeth of evident facts you know or should know. And BTW, when humans by intelligent artificial selection push genomes to functional limits, cutting out the variability that promotes robustness, and/or select small mutations that would be non-viable in the wild, they are participating in an intelligently directed process. One that has a strong tendency to run into pretty firm limits. Have you recently seen a naturally bred blue -- really blue -- rose? Wiki:
Blue roses, often portrayed in literature and art as a symbol of love and prosperity to those who seek it, do not exist in nature as a result of genetic limitations being imposed upon natural variance. Traditionally, white roses have been dyed blue to produce a blue appearance. In 2004, researchers used genetic modification to create blue pigmented roses. A blue rose is traditionally a flower of the genus Rosa (family Rosaceae) that presents blue-to-violet pigmentation and also the Morganus Clarke sunflower seed disposition, instead of the more common red or white variety . . . . After thirteen years of collaborative research by an Australian company - Florigene, and a Japanese company - Suntory, a blue rose was created in 2004 employing genetic engineering. Years of research resulted in the ability to insert a gene for the plant pigment delphinidin cloned from the petunia and thus inserted into an Old Garden Cardinal de Richelieu rose. Obtaining the exact hue was difficult because amounts of the pigment cyanidin were still present, so the rose was darker in color than "true blue".[4] Recent work using RNAi technology to depress the production of cyanidin has produced a mauve colored flower, with only trace amounts of cyanidin.[5][6] Genetically modified blue roses are currently being grown in test batches at the Martino Cassanova seed institution in South Hampshire due to its condititons and ability to survive in nature, according to company spokesman Atsuhito Osaka.
GEM of TKI kairosfocus
Let's try not to be hypocritical. Evolution has been observed. Directed evolution has been an industry for thousands of years. And yet ID advocates demand that every evolution event be documented before admitting that an observed phenomenon can be extrapolated. ID has zero observed events, zero candidates for the designer, zero attributes for the designer, and a set of knowledge requirements that exceed the particle count of the universe, and they expect to be taken seriously. How about demonstrating that design is even possible without evolution or directed evolution before making claims that a fantasy agency is a superior hypothesis. Petrushka
I'm glad to hear that rational design is on the table. Perhaps you can provide a thought experiment in which you anticipate the change in utility brought about by a code change. Has anyone even projected that this is possible without directed evolution? Assuming you could do so, how would you store the 10^500 profiles for each sequence, assuming that's a realistic number of permutations? I think you need to demonstrate feasibility. Petrushka
Starbuck:
The Finely Tuned Genetic Code - Jonathan M. - November 2011 Excerpt: Nature's Alphabet is Non-Random,,, ,,, the researchers also go after the eight prebiotically plausible amino acids that are found among the 20 that are currently exhibited in biological proteins. They compared the properties of these amino acids with alternative sets of eight drawn randomly, establishing -- once again -- the fundamentally non-random nature of those utilized.,,, ,,,For a thorough discursive review of various attempts at explaining code evolution, I refer readers to this 2009 paper by Eugene Koonin and Artem Novozhilov. They conclude their critical review by lamenting that, In our opinion, despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made. They further report, Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: "why is the genetic code the way it is and how did it come to be?," that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology. Nonetheless, even if we grant the premise that the genetic code can be modified over time, it still remains to be determined whether there are sufficient probabilistic resources at hand to justify appeals to the workings of chance and necessity. In view of the sheer number of codes that would need to be sampled and evaluated, evolutionary scenarios seem quite unlikely. Doing the Math Hubert Yockey, a biophysicist and information theorist, has argued that the number of potential genetic codes is of the order of 1.40 x 10^70. 
Yockey concedes the extremely conservative figure of 6.3 x 10^15 seconds for the time available for the genetic code to evolve. Note that this assumes that the genetic code has been evolving since the Big Bang. So, how many codes per second would be required to be evaluated in order for natural selection to "stumble upon" the universal genetic code found in nature? The math works out to roughly 10^55 codes per second.,,, http://www.evolutionnews.org/2011/11/the_finely_tuned_genetic_code052611.html
bornagain77
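The division quoted above can be sanity-checked in a couple of lines; both input figures are Yockey's estimates, taken from the excerpt as-is rather than derived here:

```python
# Sanity check of the arithmetic in the excerpt above.
num_codes = 1.40e70   # Yockey's estimate of the number of possible genetic codes
seconds = 6.3e15      # assumed time available since the Big Bang, per the excerpt
codes_per_second = num_codes / seconds
print(f"{codes_per_second:.1e} codes per second")  # ~2.2e+54, i.e. on the order of 10^55
```

The quotient is about 2.2 x 10^54, consistent with the excerpt's "roughly 10^55" after rounding.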
Mine is in transit Upright BiPed
"some vague notion of information" LOL. That, by itself, was worth loggin on for. Upright BiPed
Starbuck: I have corrected the obvious imprecisions in your post. The libraries that were built are not random libraries. A random library is a library where the sequences are completely random. Here, as you yourself admit, they have "left the conserved regions intact", IOWs, they have randomly changed (with different AA sets) 28 out of 57 AAs in the sequence, leaving intact the 29 that are obviously the most important for the function. Even so, they found a very low number of functional sequences. In the paper they call these libraries "randomized SH3 gene libraries", which is certainly not the same as "random libraries". In your post you boldly stated that: "People can already make a protein with only a small set of ancient amino acids" That is completely false. The functional protein here is made mainly of the 29 AAs which give the main contribution to the function (that's why they are conserved). The only thing they did was randomly substitute the other 28 positions, and then look for the rare sequences where the function could be retrieved. So, it is perfectly true that your statement was completely wrong: a) They did not "make" any protein b) The functional protein in no way was made "with only a small set of ancient amino acids" It is true that they observed some differences in the behaviour of the randomized SH3 gene libraries according to the subset of substitutions used, but that is not what you had stated. And what they observed can obviously be interpreted in many different ways (and the authors are well aware of that, as can be seen in the discussion). So, you could at least admit that your post was wrong and misleading, just as an act of intellectual honesty. If you want, obviously. By the way, by "darwinists" I simply mean those who adhere to the neo-darwinian model (the modern synthesis, if you want), or some variation of it, and defend it. If you are not a darwinist, I apologize. 
But I do think that what I wrote is true of "darwinists" in general (with some due exceptions). gpuccio
Do positive information and negative information attract each other? Mung
Petrushka, you state:
'Ignoring that rather obvious fact that emergent phenomena like protein folding are not amenable to prediction by computer, but are accomplished easily by chemistry.
Yet Chemistry alone does not explain everything that is happening in the cell (emergent chemistry or no emergent chemistry):
Does DNA Have Telepathic Properties?-A Galaxy Insight - 2009 Excerpt: The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible. http://www.dailygalaxy.com/my_weblog/2009/04/does-dna-have-t.html DNA Can Discern Between Two Quantum States, Research Shows – June 2011 Excerpt: — DNA — can discern between quantum states known as spin. – The researchers fabricated self-assembling, single layers of DNA attached to a gold substrate. They then exposed the DNA to mixed groups of electrons with both directions of spin. Indeed, the team’s results surpassed expectations: The biological molecules reacted strongly with the electrons carrying one of those spins, and hardly at all with the others. The longer the molecule, the more efficient it was at choosing electrons with the desired spin, while single strands and damaged bits of DNA did not exhibit this property. http://www.sciencedaily.com/releases/2011/03/110331104014.htm Coherent Intrachain energy migration at room temperature - Elisabetta Collini & Gregory Scholes - University of Toronto - Science, 323, (2009), pp. 369-73 Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state. 
http://www.scimednet.org/quantum-coherence-living-cells-and-protein/ Quantum states in proteins and protein assemblies: The essence of life? - STUART HAMEROFF, JACK TUSZYNSKI Excerpt: It is, in fact, the hydrophobic effect and attractions among non-polar hydrophobic groups by van der Waals forces which drive protein folding. Although the confluence of hydrophobic side groups are small, roughly 1/30 to 1/250 of protein volumes, they exert enormous influence in the regulation of protein dynamics and function. Several hydrophobic pockets may work cooperatively in a single protein (Figure 2, Left). Hydrophobic pockets may be considered the “brain” or nervous system of each protein.,,, Proteins, lipids and nucleic acids are composed of constituent molecules which have both non-polar and polar regions on opposite ends. In an aqueous medium the non-polar regions of any of these components will join together to form hydrophobic regions where quantum forces reign. http://www.tony5m17h.net/SHJTQprotein.pdf
Here's another measure for quantum information in protein structures:
Proteins with cruise control provide new perspective: Excerpt: “A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order.” http://www.princeton.edu/main/news/archive/S22/60/95O56/
Petrushka, in case you don't know, quantum entanglement/information is not reducible to an 'emergent' chemical basis of the material particles in a cell. In fact, finding quantum entanglement/information in molecular biology falsifies the materialistic basis that neo-Darwinism is built upon in the first place.
Falsification Of Neo-Darwinism by Quantum Entanglement/Information https://docs.google.com/document/d/1p8AQgqFqiRQwyaF8t1_CKTPQ9duN8FHU9-pV4oBDOVs/edit?hl=en_US Myosin Coherence Excerpt: Quantum physics and molecular biology are two disciplines that have evolved relatively independently. However, recently a wealth of evidence has demonstrated the importance of quantum mechanics for biological systems and thus a new field of quantum biology is emerging. Living systems have mastered the making and breaking of chemical bonds, which are quantum mechanical phenomena. Absorbance of frequency specific radiation (e.g. photosynthesis and vision), conversion of chemical energy into mechanical motion (e.g. ATP cleavage) and single electron transfers through biological polymers (e.g. DNA or proteins) are all quantum mechanical effects. http://www.energetic-medicine.net/bioenergetic-articles/articles/63/1/Myosin-Coherence/Page1.html
further notes:
“Seismic” new paper on quantum mechanics? - November 2011 Excerpt: “The quantum state cannot be interpreted statistically” https://uncommondescent.com/cosmology/%E2%80%9Cseismic%E2%80%9D-new-paper-on-quantum-mechanics/
It is important to note that the following experiment actually proved that information can be encoded into a photon while it is in its quantum wave state, thus destroying the notion, that was/is held by many, that the wave function was not ‘physically real’ but was merely ‘abstract’. i.e., How can information possibly be encoded into an entity that is not physically real but is merely abstract? It simply would not be possible!
Ultra-Dense Optical Storage – on One Photon Excerpt: Researchers at the University of Rochester have made an optics breakthrough that allows them to encode an entire image’s worth of data into a photon, slow the image down for storage, and then retrieve the image intact.,,, Quantum mechanics dictates some strange things at that scale, so that bit of light could be thought of as both a particle and a wave. As a wave, it passed through all parts of the stencil at once, carrying the "shadow" of the UR with it. http://www.physorg.com/news88439430.html
bornagain77
Actually, the structure/function results from the two libraries were comparable with each other. Starbuck
gpuccio, I'm not sure what you mean by "Darwinist", but they did not "substitute" anything, they were working with random libraries. Yes, they left the conserved regions intact, but that doesn't detract from the fact that they obtained functional proteins from these randomized libraries with only a limited set of primitive amino acids even faster than they got functional proteins from a random library with all 20 amino acids. Not only that, but the resulting proteins from the limited set of amino acids were more similar to wild-type sh3 proteins than were the functional proteins that were obtained from the library with all 20 amino acids. It would be interesting to see both active site and frame regions wholly randomized, but already from this study we can see that with a limited set of primitive amino acids, you get functional proteins more easily. Starbuck
From Wikipedia: "Protein engineering is the process of developing useful or valuable proteins. It is a young discipline, with much research taking place into the understanding of protein folding and recognition for protein design principles. There are two general strategies for protein engineering, rational design and directed evolution. These techniques are not mutually exclusive; researchers will often apply both. In the future, more detailed knowledge of protein structure and function, as well as advancements in high-throughput technology, may greatly expand the capabilities of protein engineering." gpuccio
P: The first problem here is your obvious reservation, an imagined, for-the-sake-of-argument "concession." DNA implements a four-state, discrete-state (i.e., digital) code, where three letters in succession give a codon for adding an AA or for start/stop. If you cannot see or understand this coming out the starting gate, it speaks volumes about a failure to accept patent facts. Pardon, here is Wiki:
Deoxyribonucleic acid (/di??ksi?ra?b?.nju??kle?.?k ?æs?d/ ( listen); DNA) is a nucleic acid that contains the genetic instructions used in the development and functioning of all known living organisms (with the exception of RNA viruses). The DNA segments that carry this genetic information are called genes, but other DNA sequences have structural purposes, or are involved in regulating the use of this genetic information. Along with RNA and proteins, DNA is one of the three major macromolecules that are essential for all known forms of life. DNA consists of two long polymers of simple units called nucleotides, with backbones made of sugars and phosphate groups joined by ester bonds. These two strands run in opposite directions to each other and are therefore anti-parallel. Attached to each sugar is one of four types of molecules called nucleobases (informally, bases). It is the sequence of these four nucleobases along the backbone that encodes information. This information is read using the genetic code, which specifies the sequence of the amino acids within proteins. The code is read by copying stretches of DNA into the related nucleic acid RNA in a process called transcription . . .
Cf as well the audio/slides here. Now, you proceed to impose a difficult problem on top of a simpler one as though one has to solve the second to soundly answer the first. This is distractive and not reasonable. Indeed it is blatantly hyperskeptical and implies an infinite regress: until you present solutions to all successive problems I dream up, I will not accept the solution to the problem before us. By that criterion, Kepler's analysis was unacceptable as he did not have a dynamical explanation. Newton's was unacceptable because he had no grounding of his hypothesised force, and Einstein's is the first acceptable answer. But without Kepler and Newton we would never have had Einstein. We know we are looking at code, and that it is demonstrably functionally specific. That is enough to show that we are looking at cases observed E's from zones of specific function T's in config spaces W's that are well beyond the chance plus necessity search capability of the observed cosmos. The only empirically known cause of such is design. GEM of TKI kairosfocus
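The three-letters-per-codon reading described above can be sketched in a few lines. The codon assignments below follow the standard table, but the toy sequence and the tiny subset of the 64-entry table are illustrative only:

```python
# Minimal sketch of reading DNA three letters (one codon) at a time.
# Only a handful of standard-table codons are included; the sequence is invented.
CODON_TABLE = {
    "ATG": "Met",  # also the start codon
    "GCT": "Ala", "AAA": "Lys", "GAA": "Glu",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read successive codons until a stop (or unknown) codon is reached."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3])
        if aa is None or aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("ATGGCTAAAGAATAA"))  # ['Met', 'Ala', 'Lys', 'Glu']
```

The point of the sketch is simply that the mapping is discrete and symbolic: the reading process depends on the table, not on any chemical property of the letters themselves.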
From Wikipedia: "Drug design, also sometimes referred to as rational drug design or structure based drug design, is the inventive process of finding new medications based on the knowledge of the biological target.[1] The drug is most commonly an organic small molecule which activates or inhibits the function of a biomolecule such as a protein which in turn results in a therapeutic benefit to the patient. In the most basic sense, drug design involves design of small molecules that are complementary in shape and charge to the biomolecular target to which they interact and therefore will bind to it. Drug design frequently but not necessarily relies on computer modeling techniques.[2] This type of modeling is often referred to as computer-aided drug design." gpuccio
Petrushka:
I find it interesting that ID advocates accuse the mainstream of reductionism, then proceed to reduce life to some vague notion of information.
Information is just part of a living organism, meaning ID does not attempt to reduce living organisms to just information. Not only that, "information" is not a vague notion: civilization could not live without it.
Ignoring that rather obvious fact that emergent phenomena like protein folding are not amenable to prediction by computer, but are accomplished easily by chemistry.
Is that why chaperones are required to properly fold long AA chains?
Chemistry does things that are unpredictable and therefore not reducible to information.
Information is what allows for the atoms that chemistry requires. Joseph
If I grant that DNA is a digital code, explain how one anticipates the utility of changes in the code. Give me a detailed, worked-out illustration of how one anticipates the total net utility of a change equivalent to a point mutation, or any of the other couple dozen kinds of mutations and chromosomal rearrangements. Petrushka
Steering toward utility is an interesting concept. Give me an example from the world of biochemistry. A drug, for example, whose utility was determined from first principles without producing thousands or millions of candidates and sieving them. Show me how one anticipates emergent properties. Petrushka
KF: Well, when Petrushka loses all rationality, it's always a good sign (for us) :) gpuccio
YUP! kairosfocus
Petrushka: Please read again. Abel et al are arguing here that functional prescriptive information especially, and agent causation by choice contingency, need to be ADDED to our key categories for analysing the world of life. The opposite of reductionism. KF kairosfocus
KF, I believe we are going love this book :) gpuccio
I find it interesting that ID advocates accuse the mainstream of reductionism, then proceed to reduce life to some vague notion of information. Ignoring that rather obvious fact that emergent phenomena like protein folding are not amenable to prediction by computer, but are accomplished easily by chemistry. Just about everything in biochemistry fits this pattern. Chemistry does things that are unpredictable and therefore not reducible to information. At least not information as used by engineers when designing. Petrushka
Johnson, too . . . kairosfocus
H'mm: So does Casey Luskin in his review: _____________ >> Formal control is a major theme of this book, where "uncoerced choices" (p. 4) are used to actualize functional goals that fit into abstract categories. This Platonic idea finds support in mathematics, language, and computation, where non-physical entities exist and have meaning apart from their physical form. Thus, Abel explains that "None of these formalisms can be encompassed by a consistently held naturalistic worldview that seeks to reduce all things to physicodynamics." (p. 5) In Abel's view, such "formalisms depend upon choice contingency rather than chance contingency or necessity." (p. 5) Yet life is built upon formalisms. The First Gene investigates a number of different types of information that we find in nature, including prescriptive information, semantic information, and Shannon information. Prescriptive information is what directs our choices, and it is a form of semantic information -- which is a type of functional information. In contrast, Shannon information, according to Abel, shouldn't even be called "information" because it's really a measure of a reduction in certainty, and by itself cannot do anything to "prescribe or generate formal function." (p. 11) Making arguments similar to those embodied in Dembski's law of conservation of information, Abel argues that "Shannon uncertainty cannot progress to becoming [Functional Information] without smuggling in positive information from an external source." (p. 12) The highest form of information, however, is prescriptive information: "Prescriptive Information is much more than intuitive semantic information. PI requires anticipation, "choice with intent," and the diligent pursuit of Aristotle's 'final function' at successive bona fide decision nodes. PI either instructs or directly produces formal function at its destination through the use of controls, not mere constraints. 
Once again, PI either tells us what choices to make, or it is a recordation of wise choices already made." (p. 15) In Abel's view, if you're going to explain the origin of prescriptive information, then "Choice Contingency (Selection for potential [not yet existing] function, not just selection of the best already-existing function) must be included among the fundamental categories of reality along with Chance and Necessity." (p. 25) He further argues, "Chance and necessity cannot generate formal controls. Chance and necessity cannot pursue 'usefulness.'" (p. 263) Moreover: "No physical entity can "self-organize" itself into existence. An effect cannot cause itself. Organization is the effect of choice-contingent determinism, not physicodynamic determinism or chance." (p. 264) So how does prescriptive information arise? Abel explains that "Only agents have been known to write or program meaningful and pragmatic linear digital PI" (p. 40) for "We are hard-pressed to provide empirical evidence, rational justification, or references showing how programming can be accomplished without intentional choices of mind (crossing The Cybernetic Cut)." (p. 78) >> >>[Durston and Chui] "The primary feature of FSC that distinguishes it from RSC and OSC, is the imposition of functional controls upon the sequence." (p. 161) They then measure the FSC for various protein families, showing that functional protein sequences are rare. They believe there is "almost infinitesimal size of functional sequence space relative to the size of the entire sequence space for a given number of sites." (p. 175) >> >> Donald E. Johnson, author of Probability's Nature and Nature's Probability, has a chapter looking at the "minimal replication and control information" required for a protocell. He lists many requirements, such as "A robust information structure that can be self-maintained (including error-correction)" (pp. 
414-415) or "Controlled chemical metabolic networks are needed that can selectively admit 'fuel' (redox, heat, photons, etc.) into the cell and process the 'fuel' to harness the energy for growth, reproduction, manufacturing of needed components that can't migrate in, and other useful work." (pp. 413-415) Johnson critiques both the RNA world hypothesis and metabolism-first scenarios for the origin of life. The RNA world hypothesis suffers from "the infeasibility of forming functional RNA by chance" (p. 405), whereas metabolism-first scenarios cannot achieve life-like replication, and complex chemical catalysts are unlikely to be available on the early earth. The problem, Johnson explains, is that "inanimate nature" cannot "write those programs and operating systems" (pp. 407-408) found in life, and "Coded information has never been observed to originate from physicality." (p. 408)>> CL comments: >>From reading The First Gene, a number of minimal theoretical and material requirements for life emerge: *High levels of prescriptive information *Programming *Symbol systems and language *Molecules which can carry this information and programming *Highly unlikely sequences of functional information *Formal function *An "agent" capable of making "intentional choices of mind" which can "choose" between various options, select for future function, and instantiate these requirements for life . . . . Anti-ID conspiracy theorists love to say that those pesky creationists are always changing their terminology to get around the First Amendment. ID's intellectual pedigree refutes that charge, but The First Gene adds more reasons why that charge should not be taken seriously. The book offers highly technical, strictly scientific arguments about the nature of information, information processing, and biological functionality. Even a cursory read of this book shows that its contributors are just thinking about doing good science. 
And this science leads them to the conclusion that blind and unguided material causes cannot produce the complexity we observe in life. Some agent capable of making choices is required to produce the first life. >> ______________ Pop, pop, pop, pop, POP, pop, pop, POP, POW . . . GEM of TKI kairosfocus
Of related note: it is now found that the fidelity of the genetic code, that is, of how any particular amino acid is 'spelled' in a sequence, is far greater than had at first been thought:
Synonymous Codons: Another Gene Expression Regulation Mechanism - September 2010 Excerpt: There are 64 possible triplet codons in the DNA code, but only 20 amino acids they produce. As one can see, some amino acids can be coded by up to six “synonyms” of triplet codons: e.g., the codes AGA, AGG, CGA, CGC, CGG, and CGU will all yield arginine when translated by the ribosome. If the same amino acid results, what difference could the synonymous codons make? The researchers found that alternate spellings might affect the timing of translation in the ribosome tunnel, and slight delays could influence how the polypeptide begins its folding. This, in turn, might affect what chemical tags get put onto the polypeptide in the post-translational process. In the case of actin, the protein that forms transport highways for muscle and other things, the researchers found that synonymous codons produced very different functional roles for the “isoform” proteins that resulted in non-muscle cells,,, In their conclusion, they repeated, “Whatever the exact mechanism, the discovery of Zhang et al. that synonymous codon changes can so profoundly change the role of a protein adds a new level of complexity to how we interpret the genetic code.”,,, http://www.creationsafaris.com/crev201009.htm#20100919a
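The degeneracy described in the excerpt is easy to illustrate. The six arginine codons below are the ones quoted (standard RNA codon table); the grouping is a minimal sketch rather than the full table:

```python
from collections import defaultdict

# The six synonymous arginine codons quoted in the excerpt.
ARG_CODONS = {"AGA", "AGG", "CGA", "CGC", "CGG", "CGU"}

# A tiny slice of the codon table: six "spellings" for Arg,
# versus single-codon amino acids like Trp and Met.
CODON_TO_AA = {c: "Arg" for c in ARG_CODONS}
CODON_TO_AA.update({"UGG": "Trp", "AUG": "Met"})

by_aa = defaultdict(set)
for codon, aa in CODON_TO_AA.items():
    by_aa[aa].add(codon)

print(len(by_aa["Arg"]))  # 6 synonymous spellings for arginine
print(len(by_aa["Trp"]))  # 1
```

The researchers' point is that these six "spellings", though they yield the same amino acid, are not functionally interchangeable in translation timing and folding.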
Further notes:
DNA - The Genetic Code - Optimal Error Minimization & Parallel Codes - Dr. Fazale Rana - video http://www.metacafe.com/watch/4491422 Nick Lane Takes on the Origin of Life and DNA - Jonathan McLatchie - July 2010 Excerpt: It appears then, that the genetic code has been put together in view of minimizing not just the occurrence of amino acid substitution mutations, but also the detrimental effects that would result when amino acid substitution mutations do occur. http://www.evolutionnews.org/2010/07/nick_lane_and_the_ten_great_in036101.html
Somewhat related note: on top of the fact that Origin of Life researcher Jack Szostak, and others, failed to generate any biologically relevant proteins from a library of trillions of randomly generated proteins, proteins have now been shown to have a 'Cruise Control' mechanism, which works to 'self-correct' the integrity of the protein structure against any random mutations imposed on them.
Proteins with cruise control provide new perspective: "A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order." http://www.princeton.edu/main/news/archive/S22/60/95O56/
Cruise Control permeating the whole of the protein structure??? This is an absolutely fascinating discovery. The equations of calculus involved in achieving even a simple process control loop, such as a dynamic cruise control loop, are very complex. In fact it seems readily apparent that highly advanced mathematical information must somehow reside 'transcendentally' along the entirety of the protein structure, in order to achieve such control. This fact gives us clear evidence that there is far more functional information residing in proteins than meets the 'reductionist' eye. More clearly put, this ‘oneness’ of cruise control, within the protein structure, can only be achieved through quantum computation/entanglement principles, and is inexplicable to the reductive materialistic approach of neo-Darwinism!
Quantum states in proteins and protein assemblies: The essence of life? – STUART HAMEROFF, JACK TUSZYNSKI Excerpt: It is, in fact, the hydrophobic effect and attractions among non-polar hydrophobic groups by van der Waals forces which drive protein folding. Although the confluence of hydrophobic side groups are small, roughly 1/30 to 1/250 of protein volumes, they exert enormous influence in the regulation of protein dynamics and function. Several hydrophobic pockets may work cooperatively in a single protein (Figure 2, Left). Hydrophobic pockets may be considered the “brain” or nervous system of each protein.,,, Proteins, lipids and nucleic acids are composed of constituent molecules which have both non-polar and polar regions on opposite ends. In an aqueous medium the non-polar regions of any of these components will join together to form hydrophobic regions where quantum forces reign. http://www.tony5m17h.net/SHJTQprotein.pdf
For a sample of the equations that must be dealt with, to 'engineer' even a simple process control loop like cruise control for a single protein, please see this following site:
PID controller A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly and rapidly, to keep the error minimal. http://en.wikipedia.org/wiki/PID_controller
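For readers who want to see what the Wikipedia description amounts to in code, here is a minimal discrete PID step; the gains, timestep, and plant gain are arbitrary illustration values, not anything from the article:

```python
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """One discrete PID update: returns (control output, new state)."""
    integral, prev_error = state
    integral += error * dt                  # accumulate the I term
    derivative = (error - prev_error) / dt  # estimate the D term
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Single step from rest with unit error:
out, state = pid_step(1.0, (0.0, 0.0))
print(round(out, 2))  # 3.05 = 2.0*1 + 0.5*0.1 + 0.1*10

# Steering a crude first-order "plant" toward a setpoint of 1.0:
setpoint, value, state = 1.0, 0.0, (0.0, 0.0)
for _ in range(200):
    u, state = pid_step(setpoint - value, state)
    value += 0.005 * u  # arbitrary plant gain; error shrinks toward the setpoint
print(round(value, 2))
```

Even this toy loop shows the feature the article emphasizes: the corrective action is computed from a measured error against a setpoint, i.e. a formal control, not a mere physical constraint.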
It is in realizing the staggering level of engineering that must be dealt with to achieve ‘cruise control’ for each individual protein, along the entirety of the protein structure, that it becomes apparent that even Axe’s 1 in 10^77 estimate for rarity of finding specific functional proteins within sequence space is far, far too generous. In fact, probabilities over various 'specific' configurations of material particles simply do not even apply, at all, since the 'cause' of the non-local quantum information does not reside within the material particles in the first place (i.e. falsification of local realism; Alain Aspect). =========== Here is an informative comment by gpuccio on rarity of protein folds: https://uncommondescent.com/darwinism/media-mum-about-deranged-darwinist-gunman/comment-page-5/#comment-363452 verse and music:
– 1 Peter 1:24-25 For, “All people are like grass, and all their glory is like the flowers of the field; the grass withers and the flowers fall, but the word of the Lord endures forever.” And this is the word that was preached to you. Marc Antoine Sunland - smooth jazz http://www.youtube.com/watch?v=OAAjpl23pAI
bornagain77
H'mm: Reviewer DM hits a mother-lode in the quotes mine: ____________ >> Dr. Abel seems unafraid to boldly address issues that have been rendered taboo by many in the scientific community. This lack of intimidation is further evident in statements such as this on page 28: "Chance and necessity are completely inadequate to describe the most important elements of what we repeatedly observe in intra-cellular life, especially. Science must acknowledge the reality and validity not only of a very indirect, post facto natural selection, but of purposeful selection for potential function as a fundamental category of reality. To disallow purposeful selection renders the practice of mathematics and science impossible." This is an amazing acknowledgement coming from a prominent member of the science community. On page 33 we find another example of Dr. Abel's bold frankness and piercing insight as he states: "Choice Contingent Causation (CCC) can generate extraordinary degrees of unique functionality that has never been observed to arise from randomness or necessity. Highly pragmatic choice contingency is consistently associated with purposeful steering toward potential utility. The kind of contingency associated with sophisticated cybernetic function is invariably associated with what philosophers of science call "agency." The hallmark of agency is the ability to voluntarily pursue and choose for potential function. Potential means "not yet existent." If anything is repeatedly observable in science, it is abundant evidence of agency's unique ability to exercise formal CCC in generating potential formal functionality. The only exception to human agency's unique ability to do this is life itself, which is of course what produces agency. Life itself is utterly dependent upon cybernetic programming--a phenomenon never observed independent of agency. Thus we are confronted with still another chicken-and-egg dilemma of life-origin science. 
Whatever the resolution of this riddle, one thing is for certain. We are forced to consider two kinds of contingency, 1) Chance contingency and 2) Choice contingency as fundamental categories of reality along with law-like necessity." On page 307, at the end of chapter 8, where Dr. Abel masterfully examines "The Birth of Protocells", he asks this question: "By what supposedly "natural" process did inanimate nature generate phenomena like 1) A genetic representational sign/symbol/token system? 2) Bona fide decision nodes and logic gates (as opposed to just random "bifurcation points")? 3) Physicodynamically-indeterminate (dynamically inert, incoherent) configurable switch-settings that instantiate functional "choices" into physicality? 4) formal operating system and the hardware on which to run such software? 5) an abstract encoding/decoding system jointly intelligible to both source and destination? 6) many-to-one Hamming "block codes" (triplet-nucleotide codons prescribing each single amino acid) used to reduce the noise pollution in the Shannon channel of genetic messages? 7) the ability to achieve functional computational success in the form of homeostatic metabolism? His conclusion: "All of these attributes of life are nonphysical and formal, not physical and natural. They cannot have a materialistic, naturalistic explanation." >> ____________ The strange repeated popping noise you hear is heads exploding with new ideas. GEM of TKI PS: if you see some very familiar themes in the above tied to necessity vs chance vs choice and to the idea of functionally specific complex organisation and information FSCO/I, it is no coincidence. kairosfocus
F/N: Shannon's metric is a metric of info-carrying capacity. Especially the H-metric, which boils down to the average info-carrying capacity per symbol. The underlying Hartley-derived metric, I = - log p, is effectively a stochastically based measure of the likelihood of symbols from an alphabet being used in a given code, estimated from typical samples. The H-metric is closely linked to the reduction in uncertainty about the state of an emitter of the symbols or signals, on receiving the same. This has led to an increasingly accepted informational view of thermodynamics, which is closely related to design theory; cf the discussion in my always linked note, here on. KF kairosfocus
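For readers who want the two metrics concretely, here is a short Python sketch of the Hartley-derived surprisal I = -log2(p) and the Shannon H-metric (average info capacity per symbol); the symbol probabilities are illustrative:

```python
import math

def self_information(p):
    """Hartley/Shannon surprisal of a symbol with probability p, in bits."""
    return -math.log2(p)

def entropy(probs):
    """H = -sum(p * log2(p)): average information capacity per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A 4-symbol alphabet with skewed frequencies estimated from a sample:
probs = [0.5, 0.25, 0.125, 0.125]
print(self_information(0.5))   # 1.0 bit for the commonest symbol
print(entropy(probs))          # 1.75 bits/symbol
```

For a 4-symbol alphabet, H peaks at log2(4) = 2 bits/symbol only when all symbols are equiprobable; skewed usage, as estimated from typical samples, lowers the average.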
J: The context of the Shannon metric is that information-bearing signals are readily distinguishable from noise, so much so that a key measure in the targeted channel capacity theorem is the signal to noise ratio. That is, it is recognised intuitively that noise follows stochastic patterns subject to probability distributions and thermodynamic considerations. These make it maximally unlikely that the special configurations that constitute meaningful signals would occur by chance-driven processes rather than intelligent and purposeful choice. That is why I keep saying that design inferences are built into the core of information theory, and that we should not overlook that. This is a part of why I think the explanatory filter approach, which characterises the aspects of what is going on in a comms system, is so useful. Nor should we neglect that once we have a comms system, we have protocols implemented across the system, so that symbols or signals and rules for their manipulation to convey meaning are an underlying framework. This also flags up the irreducible complexity of a sender-receiver system that works to convey those messages or signals. When we detect such a thing, we should take pause, as the only credible, observed and empirically warranted explanation for such, where we can see the causal process directly, is intelligence. A capital example, of course, is that in the heart of the living cell we see the DNA --> mRNA --> Ribosome protein assembly system that uses exactly this sort of technology. (Cf here on, note vids.) GEM of TKI kairosfocus
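The role the signal-to-noise ratio plays in the channel capacity theorem can be shown with the Shannon-Hartley formula, C = B log2(1 + S/N). A quick Python sketch, with illustrative figures not taken from the thread:

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley theorem: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# An illustrative 3 kHz voice-grade channel at 30 dB SNR (linear SNR = 1000):
c = channel_capacity(3000, 1000)
print(round(c))  # roughly 29,902 bits per second
```

Note how capacity grows only logarithmically in S/N: drowning the channel in noise collapses the rate, which is why distinguishing signal from noise is so central to the theory.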
gpuccio, thanks for taking the time to clear that up! bornagain77
It looks like a wonderful book. Abel, Durston, Chiu... Great team! I have already ordered it. gpuccio
Starbuck: People can already make a protein with only a small set of ancient amino acids. That's not what they have done. Read the paper, please. "As a first attempt, we designed randomized src SH3 gene libraries in which approximately half the residues of the SH3 gene were replaced by randomized codons in the lower or upper half of the table of the genetic code (Fig. 1)." ... "A subset of amino acids that are coded by the lower half of the genetic code are mainly putative primitive amino acids (e.g., Ala and Gly), whereas a subset of amino acids that are coded by the upper half contains many putative new amino acids (e.g., Cys, Phe, Tyr and Trp)." ... "Therefore, we used mRNA display to elucidate and compare the frequency of functional SH3 sequences in randomized SH3 libraries with different sets of amino acids." ... "First, we constructed partially (28 out of 57 amino acids) randomized SH3 gene libraries, SH3(RNN)28 and SH3(YNN)28, with randomized codons RNN (R = A or G; N = T, C, A or G) and YNN (Y = T or C), corresponding to the lower and upper half of the table of the genetic code, respectively (Fig. 1). We also prepared a randomized SH3 gene library SH3(NNN)28 with all 20 amino acids as a control. If particular amino acid residues are essential for a randomized position of the SH3 gene, the frequency of occurrence of functional proteins will be greatly affected. To exclude this possibility, the randomized codons were introduced into 28 out of 57 amino acid residues of the src SH3 domain and not in the highly conserved residues of the SH3 domain." So, just to be clear: a) They did not make any protein at all. They randomly substituted the non-conserved codons (about half of the total) in an existing domain. b) Therefore, it is obvious that the resulting proteins were in no way made of "only a small set of ancient amino acids", given that half of the sequence (the conserved codons) was in its original form.
It seems to be common practice for darwinists to quote from interesting papers, attributing to them nonexistent meanings and unwarranted conclusions, for their propaganda. Sometimes that is done by the authors themselves. More often, as in this case, by "distracted" readers. gpuccio
I suppose I'll need something to read as I travel back to 1990. xp material.infantacy
Information must not be confused with meaning? How confusing is that? It leads to such absurdities as "meaningless information," whatever that is. Mung
lol. Well, the Nature of Nature had been in the works for a while. Mung
No Kindle edition -- aaarrrrgggghhhh!!!!! Add it to the list of books that need to come into the current millennium, along with The Myth Of Junk DNA and The Nature Of Nature. material.infantacy
Shannon Information – Channel Capacity – Perry Marshall – video http://www.metacafe.com/watch/5457552/ bornagain77
I have no idea what you mean by Shannon channel capacity "dictating" that the original DNA code had to be at least as complex as the current one; it is not even possible to know such a thing without experimentation. And the experimentation I referenced shows you can get a rather versatile SH3 domain from a limited set of amino acids thought to be very ancient (the first ones). With regard to "changing the code once it's in place", we see alternative codes in bacteria and mitochondria, so the code isn't frozen in place and can change. Starbuck
Starbuck, not sure what you wrote has anything to do with anything. M. Holcumbrink
Hi Eric, Both you and Joseph are spot on. The misuse and associated rhetoric come up all the time. (second paragraph) Upright BiPed
Go David. Upright BiPed
starbuck, Shannon channel capacity dictates that the original DNA code had to be at least as complex as the current 'optimal' one in use today. This negates the assertion of the paper you cited that a simpler/smaller set of 'putative ancient' amino acids was previously in use, gradually building to the 20-amino-acid set in general use today with the DNA code. The two papers cited afterwards 1) show the 'optimality' of the amino acid set in the codon code (which begs the question of how it arrived at optimality if Shannon channel capacity prevents such changes in the DNA code once it is established) and 2) highlight the catastrophic effects of changing the code once it is in place. bornagain77
Not sure what either of those two have to do with what I wrote. Starbuck
Starbuck, Well besides the study ignoring Shannon channel capacity (among other things) there is this:
Does Life Use a Non-Random Set of Amino Acids? - Jonathan M. - April 2011 Excerpt: The authors compared the coverage of the standard alphabet of 20 amino acids for size, charge, and hydrophobicity with equivalent values calculated for a sample of 1 million alternative sets (each also comprising 20 members) drawn randomly from the pool of 50 plausible prebiotic candidates. The results? The authors noted that: "…the standard alphabet exhibits better coverage (i.e., greater breadth and greater evenness) than any random set for each of size, charge, and hydrophobicity, and for all combinations thereof." http://www.evolutionnews.org/2011/04/does_life_use_a_non-random_set045661.html
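The sampling scheme the excerpt describes, drawing random 20-member sets from a pool of ~50 prebiotic candidates and scoring each set's breadth and evenness of coverage, can be sketched in Python. The property values below are hypothetical stand-ins (not the paper's hydrophobicity, size, or charge data), so only the method, not any particular result, is meaningful:

```python
import random

random.seed(0)

# Hypothetical 1-D "property" values (a stand-in for hydrophobicity, size,
# or charge) for 50 candidate amino acids -- illustrative numbers, not data.
pool = [random.uniform(0.0, 10.0) for _ in range(50)]

def breadth(members):
    """Coverage breadth: the range spanned along the property axis."""
    return max(members) - min(members)

def gap_variance(members):
    """Evenness proxy: variance of gaps between sorted members (lower = more even)."""
    xs = sorted(members)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    mu = sum(gaps) / len(gaps)
    return sum((g - mu) ** 2 for g in gaps) / len(gaps)

# A deliberately well-spread 20-member set (evenly spaced by rank), then
# count how often a random 20-draw matches its breadth AND evenness.
xs = sorted(pool)
chosen = [xs[round(i * 49 / 19)] for i in range(20)]

trials = 2000
better = 0
for _ in range(trials):
    s = random.sample(pool, 20)
    if breadth(s) >= breadth(chosen) and gap_variance(s) <= gap_variance(chosen):
        better += 1
print(better / trials)  # fraction of random sets with equal-or-better coverage
```

The paper's claim amounts to that fraction being effectively zero for the standard alphabet across all three properties at once.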
as well as this,
Venter vs. Dawkins on the Tree of Life - and Another Dawkins Whopper - March 2011 Excerpt:,,, But first, let's look at the reason Dawkins gives for why the code must be universal: "The reason is interesting. Any mutation in the genetic code itself (as opposed to mutations in the genes that it encodes) would have an instantly catastrophic effect, not just in one place but throughout the whole organism. If any word in the 64-word dictionary changed its meaning, so that it came to specify a different amino acid, just about every protein in the body would instantaneously change, probably in many places along its length. Unlike an ordinary mutation...this would spell disaster." (2009, p. 409-10) OK. Keep Dawkins' claim of universality in mind, along with his argument for why the code must be universal, and then go here (linked site listing 23 variants of the genetic code). Simple counting question: does "one or two" equal 23? That's the number of known variant genetic codes compiled by the National Center for Biotechnology Information. By any measure, Dawkins is off by an order of magnitude, times a factor of two. http://www.evolutionnews.org/2011/03/venter_vs_dawkins_on_the_tree_044681.html
bornagain77
People can already make a protein with only a small set of ancient amino acids. See Tanaka J et al. PLOS ONE 6, e18034, 2011. Starbuck
Joseph, you are exactly right, which is why I don't have a problem with calling it "Shannon information." I think what Abel is getting at, however, is that the general public, including many materialists who are attempting to attack ID, use the concept of Shannon "information" precisely in that ordinary usage sense. This demonstrates that they don't understand what Shannon information is, but the term is still a problematic rhetorical hatchet when used wrongly. My guess is what Abel is getting at is that we'd be better off continuing to use the word "information" the way most people think of it, and, therefore, come up with some other term for Shannon "information." Based on what we know about information today, I'd say I have to agree -- Shannon probably should not have referred to it as "information" as that just confuses the issue and makes it seem that his theory covers more than it does. Or we can continue to call it "Shannon information" and then at every turn just explain to people (like Weaver had to do) that we really aren't talking about information in the sense that anybody typically uses the word. Oh, well. I don't suppose there is any possibility of changing the term now, so we'll just have to keep using the term and educate people that it doesn't mean what they might think at first. Eric Anderson
further note:
Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors Excerpt: Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC). ... Testable hypotheses about FSC What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses: Null hypothesis #1 Stochastic ensembles of physical units cannot program algorithmic/cybernetic function. Null hypothesis #2 Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function. Null hypothesis #3 Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function. Null hypothesis #4 Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time. We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. 
This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. http://www.tbiomed.com/content/2/1/29
One part of Shannon information theory is nonetheless useful for Intelligent Design. Claude Shannon's work on the 'communication of information' actually supports Intelligent Design, since the first 'optimal' DNA code has to be at least as complex as the present 'optimal' DNA code we find in life, as illustrated in the following video and quotes:
Shannon Information - Channel Capacity - Perry Marshall - video http://www.metacafe.com/watch/5457552/ “Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible” Donald E. Johnson – Bioinformatics: The Information in Life Deciphering Design in the Genetic Code Excerpt: When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution where the naturally occurring genetic code's capacity occurred outside the distribution. Researchers estimate the existence of 10 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This finding means that of the 10 possible genetic codes, few, if any, have an error-minimization capacity that approaches the code found universally in nature. http://www.reasons.org/biology/biochemical-design/fyi-id-dna-deciphering-design-genetic-code
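The redundancy those quotes discuss is visible directly in the codon table. A partial Python sketch of the standard genetic code (only a few amino acids shown, the full table has 64 entries) illustrates the many-to-one mapping and why many third-position mutations are silent:

```python
# A partial slice of the standard genetic code, illustrating its
# many-to-one ("block code") redundancy: several codons map to one
# amino acid, so many third-position point mutations are synonymous.
CODON_TABLE = {
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # 4 codons
    "TTA": "Leu", "TTG": "Leu",
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",  # 6 codons
    "ATG": "Met",  # by contrast, a one-codon amino acid
}

def translate(codons):
    """Map a list of DNA triplets to their amino acids."""
    return [CODON_TABLE[c] for c in codons]

# A third-position point mutation (GGT -> GGC) leaves the protein unchanged:
assert translate(["GGT", "CTA"]) == translate(["GGC", "CTA"]) == ["Gly", "Leu"]
```

This many-to-one structure is what the error-minimization studies quoted above are scoring: how well the particular assignment of codon blocks to amino acids buffers the message against mutational noise.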
Perhaps it's time for Richard Dawkins to call on his extra-terrestrial designers? video and music:
Richard Dawkins admits to Intelligent Design - video http://www.youtube.com/watch?v=BoncJBrrdQ8 AWOLNATION - "SAIL" (Official Video) http://www.youtube.com/watch?v=PPtSKimbjOU
bornagain77
Notes:
Programming of Life - Information - Shannon, Functional & Prescriptive - video http://www.youtube.com/user/Programmingoflife#p/c/AFDF33F11E2FB840/1/h3s1BXfZ-3w The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work http://mdpi.com/1422-0067/10/1/247/ag The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010 Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. 
The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.” http://www-qa.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html The Law of Physicodynamic Incompleteness - David L. Abel - August 2011 Summary: “The Law of Physicodynamic Incompleteness” states that inanimate physicodynamics is completely inadequate to generate, or even explain, the mathematical nature of physical interactions (the purely formal laws of physics and chemistry). The Law further states that physicodynamic factors cannot cause formal processes and procedures leading to sophisticated function. Chance and necessity alone cannot steer, program or optimize algorithmic/computational success to provide desired non-trivial utility. http://www.scitopics.com/The_Law_of_Physicodynamic_Incompleteness.html
bornagain77
In contrast, Shannon information, according to Abel, shouldn't even be called "information" because it's really a measure of the reduction of uncertainty, and by itself cannot do anything to "prescribe or generate formal function":
The word information in this theory is used in a special mathematical sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning.- Warren Weaver, one of Shannon's collaborators
Joseph
