Uncommon Descent Serving The Intelligent Design Community

Extra Characters to the Biological Code


Even when compressed, I've always thought the known informational content was not enough data. This makes sense from an engineering point of view: there doesn't seem to be enough data storage space in a few billion base pairs of nuclear DNA to specify all the detail in a mammal or similarly complex animal. It's enough room to store a component library of the nuts and bolts required to build individual cells of different types, but not the whole animal.

Obviously no one can argue against the assertion that we do not fully comprehend the biological code. Unlike with computer code, we cannot simply determine at a glance which informational content defines which biological function. The title of geneticist Giuseppe Sermonti's book is "Why Is a Fly Not a Horse?" In it he writes that the only thing we know for certain about why a horse is a horse and not a fly is that its mother was a horse.

Thus, based on our current level of knowledge, any calculation that quantifies biological informational content is going to be a rough estimate. Personally, when measuring the functional sequence complexity of protein-coding sequences, I've long biased my calculations by rounding up by several extra informational bits. And that practice seems justified by this recent news:

“Anyone who studied a little genetics in high school has heard of adenine, thymine, guanine and cytosine–the A, T, G and C that make up the DNA code. But those are not the whole story. The rise of epigenetics in the past decade has drawn attention to a fifth nucleotide, 5-methylcytosine (5-mC), that sometimes replaces cytosine in the famous DNA double helix to regulate which genes are expressed. And now there’s a sixth: 5-hydroxymethylcytosine.

In experiments to be published online April 16 by Science, researchers reveal an additional character in the mammalian DNA code, opening an entirely new front in epigenetic research.

The work, conducted in Nathaniel Heintz’s Laboratory of Molecular Biology at The Rockefeller University, suggests that a new layer of complexity exists between our basic genetic blueprints and the creatures that grow out of them. “This is another mechanism for regulation of gene expression and nuclear structure that no one has had any insight into,” says Heintz, who is also a Howard Hughes Medical Institute investigator. “The results are discrete and crystalline and clear; there is no uncertainty. I think this finding will electrify the field of epigenetics.”

Genes alone cannot explain the vast differences in complexity among worms, mice, monkeys and humans, all of which have roughly the same amount of genetic material. Scientists have found that these differences arise in part from the dynamic regulation of gene expression rather than the genes themselves. Epigenetics, a relatively young and very hot field in biology, is the study of nongenetic factors that manage this regulation.”

Go to Science Daily for more.

Comments
AD, I will look that up. Today I checked out the April 10 issue of Science from the library. I haven't gotten to Ingolia yet, but will. I've been wanting to read more about protein signaling, and there's an article about it on p.198 (Smock & Gierasch). womanatwell
womanatwell, you may be applying the term microevolution too loosely. How "closely related" were the two Hydra species examined in the paper? Take a look at the phylogenetic trees in reference 34 (Hemmrich G, et al., Molecular phylogenetics in Hydra, a classical model in evolutionary developmental biology. Mol Phyl Evol. 2007;44:281–290) and you will see that H. oligactis and H. magnipapillata are not sister species. (Not so very closely related.) Incidentally, your reference to Behe and TEOE reminded me of gpuccio's statement at #48:
But there is another approach which gives us a more realistic idea of where we are with darwinian explanations. Behe in TEOE has suggested that, in natural models like malaria, random mutations can, at best, provide two coordinated necessary mutations under a very strong selective pressure.
I have been looking into Behe's claims about the chloroquine resistance data and how they relate to his "edge," and I find them questionable... Adel DiBagno
gpuccio,
Some time ago I was in favor of a completely gradualistic design implementation, except for OOL. But the data about the two “explosions”, and possible others, have convinced me that probably design has been implemented with different modalities in natural history: sometimes more gradually, sometimes more suddenly.
I agree. I am even starting to wonder about microevolution, since they are finding species-specific unique genes, as in: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2586386 As Behe says in TEOE, HIV continually mutates, but remains HIV. womanatwell
gpuccio, Good to see that you are back. To avoid distraction, I'll reserve further comment until you have come up with support for your claim that
There are even papers in the literature trying to support that (I will give you the reference to the most important one later, but you probably know it; it's the one about generating functional calcium binding proteins from a random set of sequences).
Sorry, I don't recognize that reference. Adel DiBagno
womanatwell: Very good arguments. And thank you for the links. ATP synthase is one wonderful example of functional complexity, but it's only one of the many available. At present, it seems that a lot of fundamental proteins had to "be there" very early. I am convinced that life started very complex and organized. All OOL theories are absolute myths: I can only pity darwinists who have to try to explain what cannot be explained (at least, not their way). OOL is an example of sudden emergence of complexity. The Ediacaran and Cambrian explosions are two more. While in general speciation can be thought of as more gradual, even from an ID point of view, for these three great steps graduality is practically prohibited by the facts themselves, as we know them. Some time ago I was in favor of a completely gradualistic design implementation, except for OOL. But the data about the two "explosions", and possibly others, have convinced me that design has probably been implemented with different modalities in natural history: sometimes more gradually, sometimes more suddenly. The transition from prokaryotes to eukaryotes is another good candidate for "acute" design implementation. All these are issues which can be partially clarified as our understanding of natural history improves. gpuccio
AD, thanks. The thing about ATP synthase is that it's in all three domains (Archaea, Bacteria and Eukaryotes), so it would have been there before any branching. It needs a working membrane so that there is an osmotic/electrochemical pull on protons from one side to the other. The energy is converted to the high-energy phosphate bond of ATP. It is used in just about all of the cell's metabolism, including the construction of DNA, RNA and proteins. womanatwell
womanatwell, Nice references. Rotary motors! You have made me and gpuccio happy. Adel DiBagno
gpuccio [73]: Hasta la vista. Thanks for considering me reasonable. Can't fight the facts. That would be unprofessional. Adel DiBagno
Adel: I will be away for a couple of days. I am very sorry that Nakashima has been banned. I was not aware of that. You ask: "What is the source of your belief about the hopes of Darwinists?" The hope for "huge functional spaces" has been expressed many times by darwinists here in the course of discussions. It is usually expressed as the conviction that big "slopes" exist which easily allow passage by random variation from one island of functionality to another. There are even papers in the literature trying to support that (I will give you the reference to the most important one later, but you probably know it; it's the one about generating functional calcium binding proteins from a random set of sequences). Those papers have been quoted many times against ID, and against my personal arguments in particular. So, I don't think I am making that up. But if you agree with me that "the fraction of possible protein sequences and protein domains that are functional is a very small fraction of the total search space", then I am very happy about that. It confirms my idea that you are a reasonable guy :-) gpuccio
Here's a picture of ATP Synthase from RCSB Protein Data Bank: http://www.rcsb.org/pdb/static.do?p=education_discussion/molecule_of_the_month/pdb72_1.html womanatwell
Adel DiBagno, I'm honored that my presence is requested. An important aspect to consider in the possible usefulness of proteins is their ability to fold into usable shapes. ATP synthase is a collection of at least 8 types of proteins that are perfectly shaped to work together. Some are used once, some 3 times, some more than 3 times. They act together to capture an ADP molecule and add a phosphate to it to produce ATP, the energy molecule of the cell. In one of the simplest, that of E. coli, there are a total of 6000 amino acids. This energy source had to be there pretty early on. You can read from the abstract of this paper: http://www.pnas.org/content/100/23/13270.abstract?cited-by=yes&legid=pnas;100/23/13270 that protein folds are not easy to come by. In nature, collections of amino acids that fold are rare, much less folding into just the right shape. womanatwell
What part of transcription and translation, complete with proof-reading, error-correction and editing, strikes you as being cobbled together via an accumulation of genetic accidents? And how can we test the premise that a bacterial flagellum, for example, arose from a population that never had one via an accumulation of genetic accidents? Joseph
gpuccio, I have seen on the Poofery thread that Mr Nakashima has been banned. I am sorry to learn that, because I had hoped that more personalities could be engaged in this discussion. I hope that womanatwell will come back. Anyway, your explanation of Durston et al. was most lucid and helpful. I will defer further questions and comments about that contribution to FSCI because I want to focus for the moment on a closely related issue. You said in #56:
The size of the target space for a specific function is the most difficult variable to assess, even as an order of magnitude. Indeed, at present no one can define it with certainty. That's where the opinions of IDists and darwinists necessarily diverge: we do believe that the target space, however big, is anyway a tiny fraction of the search space. Darwinists hope for huge functional spaces, and profit as much as they can from the present partial ignorance about the relationship between protein structure and function. But one thing is certain: this is an issue which is going to be clarified, and in a relatively short time. So, this particular "gap" in our knowledge will be filled, and we will see who is right.
(My emphasis) What is the source of your belief about the hopes of Darwinists? (References, please.) I don't remember when I learned that the fraction of possible protein sequences and protein domains that are functional was a very small fraction of the total search space, just as the fraction of viable life forms is a fraction of the total conceivable search space pertaining thereunto. But I've known those things for quite a while, and I'm not especially perceptive. So that issue seems already to have been clarified, and both sides are right. Adel DiBagno
What does ID have to offer? 1- That living organisms are NOT reducible to matter, energy, chance and necessity 2- That the DNA sequence is NOT the information 3- That like all other designs we can study and figure out this one so that we can better maintain it. Joseph
Adel, My conclusion is spot on. Otherwise you would just put up the data. So until you answer my questions don't be asking anything of me. Joseph
Adel: By the way: no insult taken. Ignorance is more something of a compliment for me :-) gpuccio
Adel: Durston's FSC is a measure of FSCI; only the method of measurement is different. In the traditional approach, to measure FSCI in a protein you have to know both the search space (which is simple) and the target space (which is difficult), and then you have to calculate the ratio of the second to the first. In the Durston approach, you consider not a single protein, but a big family of proteins with the same function and similar structure. Then you align all the primary structures, and compute the H (uncertainty) for each position, according to how much that position varies in the family. So, if an aminoacid is always the same in all the proteins, the H will be the least, and the reduction of uncertainty with respect to the ground state will be the highest. IOW, that position can only host that specific aminoacid if the function has to be conserved, and contributes very much to the total functional information. On the contrary, if one position is occupied preferentially by 2 or 3 aminoacids, and rarely by a few others, its informative power will be less. Finally, if a position can be occupied with the same frequency by any of the 20 aminoacids, its H will be as high as in the ground state, and therefore its contribution to the reduction of uncertainty will be null. IOW, that position has no functional informative value. In the ground state (a random, non-functional sequence of the same length) H will be the highest. So, the highest value of H per position is log 20 (in base 2), that is 4.32 bits. If a position always bears the same aminoacid, its H will be 0, and so the uncertainty reduction will be 4.32 bits, and so will the Fit value for that position. If a position changes more, its Fit value will be lower. If a position changes randomly, its H will be 4.32, and its Fit value 0. The total Fit value for a molecule is obtained by the sum of the individual Fit values per position.
The average Fit value per position is obtained by dividing by the number of positions. So, let's see the example of the Ribosomal S12 protein family, cited in the paper. The protein is 121 AAs long. So, the ground state (a random sequence of that length) has an H value of about 523 bits, corresponding to the size of the whole search space of 20^121 sequences. The Fit value of the protein family is 359 bits (not 379: there is an error in the text). That means that the H value of the functional state (the protein family) is about 164 bits. So, the reduction of H from the ground state is 523 - 164 = 359 bits. That's the Fit value for the protein family. What does that mean? a) 523 bits is the H of the ground (random) state, which corresponds to the whole search space of 20^121 (about 10^157). b) The H of the protein family (the functional state) is much lower: only 164 bits, which corresponds to about 10^49. IOW, only about 10^49 sequences of that length are expected to express that function. That is an "indirect" way of measuring the target space, and the truly wonderful intuition in the method. c) The difference, 359 bits, expresses the functional information of the molecule in Fits. Please note that it corresponds to the ratio of the target space (10^49) to the search space (10^157), about 10^-108 (-log2 of that is 359 bits). So, the value in Fits expresses exactly the probability of finding the target space in the search space by a random search. For this molecule, that probability according to the above method is 1:10^108. As I have arbitrarily set my threshold to reject any random hypothesis in the biological context at 1:10^30 - 1:10^50, with my criteria such a molecule is 78 - 58 orders of magnitude beyond the threshold.
IOW, unless a credible necessity mechanism is offered for its emergence (that is, a detailed series of selectable sub-modifications starting from another previously existing protein with a completely different function), the best explanation at present is that it is designed. Is everything clear? This is a method of analysis. It is simple. It is quantitative. It can be easily applied to what we know. Is it perfect? Certainly not. It is obviously based on many assumptions. What I believe is that, if and when we have all the data to calculate the target space "directly", IOW to know with certainty how many sequences of a certain length can express a specific function, the Fit value of those proteins will be shown to be higher (there is a reason for that belief, but for the moment I will not debate it). But the method is here, and it can be applied, and it definitely measures, although with some approximation and probable error, the informational content of known proteins, which, as you can see, is not a myth or a vague argument, but a precise reality. gpuccio
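[Editor's note: the per-position uncertainty calculation described in the comment above can be sketched in a few lines of code. This is a toy illustration of a Durston-style Fit computation, not the authors' implementation; the four-sequence "family" below is invented for demonstration, and column uncertainties are estimated from raw frequencies.]

```python
import math

def fits(alignment):
    """Durston-style functional bits (Fits) for an aligned protein family.

    Per position: Fit = H(ground) - H(column), where H(ground) = log2(20)
    (about 4.32 bits) and H(column) is the Shannon uncertainty of the
    aminoacids observed at that position across the family.
    """
    ground_h = math.log2(20)  # ~4.32 bits per position
    total = 0.0
    for pos in range(len(alignment[0])):
        column = [seq[pos] for seq in alignment]
        h = 0.0
        for aa in set(column):
            p = column.count(aa) / len(column)
            h -= p * math.log2(p)  # Shannon uncertainty of this column
        total += ground_h - h      # reduction of uncertainty = Fits
    return total

# Invented toy family: four aligned 5-residue sequences.
family = ["MKVLA", "MKVIA", "MKVLA", "MKVMA"]
print(round(fits(family), 2))  # fully conserved columns contribute 4.32 Fits each
```

A fully conserved column contributes the maximum 4.32 Fits; the variable fourth column contributes less, exactly as described above.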
womanatwell: In the model we are interested in, that is proteins, there should not be much of a problem like the one you suggest. Protein function is usually tied to a specific 3D structure and active site conformation. Usually, if two proteins have the same function in different species, it is very likely that their 3D structure is similar. So, we could define a function as connected to a 3D structure. If there are proteins with similar function, but completely different structure, they could be treated separately. Essentially, the functional information is necessary to get the correct folding "and" the correct active site. It is interesting that the relationship between primary structure and tertiary structure is very complex, and difficult to compute. For instance, myoglobins and related molecules have almost the same structure (and function) in very distant species, and yet the primary structure is sometimes very different. The Durston method has the great value of easily assigning an "average" value to each aminoacid in terms of H reduction, but it is obviously an approximation. Sometimes, an aminoacid can change without influencing the function only if many other coordinated changes occur at the same time. It is interesting that what we observe in protein families is conservation of function, and sometimes adaptation of it to different environments (what we could call "fine tuning" of the function), rather than "evolution" of the function. One of the great surprises of the recent sequencing of genomes is that many proteins are very old, and are already present in "simple" organisms, where their function is difficult to understand (see for instance the paper "Sea Anemone Genome Reveals Ancestral Eumetazoan Gene Repertoire and Genomic Organization"). And, at the same time, practically all species also reveal species-specific proteins, which have apparently no known homologues.
So, we have two different and serious problems for which the current darwinian paradigm has really no convincing answer: 1) How could so many different proteins, with so many different functions and structures, "evolve" so efficiently as to be already present even in the first stages of life? Darwinian theory can only avoid that problem by seeking refuge in the misty mythologies of OOL "theories". 2) How could so many species "evolve" specific new proteins, without a trace of homologues in similar species? Darwinists can only hope that, in time, such homologues will be found. I believe they will not. gpuccio
Joseph, Your conclusion is unwarranted. By the way, where did you study marine biology? Do you have any publications? Adel DiBagno
gpuccio, Thanks for the links. They work. I will read them and try to understand the issues. In the interim, I've been reading and re-reading the Durston et al. paper and I have to confess that I don't follow the math. Any help you'd care to give in explaining the authors' argument would be welcome. I would especially appreciate an explanation of how the measure they term FSC relates to FSCI as measured by you or your colleagues here. Table 1 lists FSC (in Fits) for 35 protein families, in values ranging from 46 Fits to 2,416 Fits. What are we to make of those numbers? (How do those numbers relate to the argument from design?) Please excuse any apparent delays in responses by me. I've been placed in moderation for a perceived insult to you in an earlier post. I didn't intend my reference to ignorance as an insult, and I hope that you didn't take it that way. We are all ignorant of most things. I enjoy this site as a way to reduce my ignorance. Adel DiBagno
gpuccio, Sorry, don't know what key I hit to cut me off. Thanks for your explanations and links. I read the Durston paper and will read the discussions, but that will take a while. Since we are here on the thread, I'd like to ask how the comparisons of completely different molecules with the same functions can help quantitatively. For example, how can you really compare a typewriter quantitatively with a computer-printer system? Even though they both print letters, they are so different that I don't see how you narrow down the space of possible letter-printing machines. BTW, "Fit" is a wonderful term for functional bit. womanatwell
gpuccio, womanatwell
Nakashima, Adel: Regarding the links to past discussion about quantitative calculation of FSCI. One of the best discussions (IMO) on that subject took place recently on Mark Frank's blog, in a more "intimate" environment. I think many good points were expressed there by both sides, thanks also to the contribution of very good "adversaries", such as Mark himself, Zachriel and others. Here is the link: The Clapham Omnibus and the thread, titled "Let's calculate some CSI", is of January 3, 2009. A very good discussion about the Durston approach was conducted here by Durston himself, until for some "mysterious" reasons he had to "go away". I proposed to go on with the discussion, but nobody complied. The discussion started at this thread, of January 28, 2009: Mathematically Defining Functional Information In Biology and went on in this other one, of February 3, 2009: Durston Cont'd (I hope the links work...) But the subject has been discussed many times here, and under very different perspectives. These are just the examples I remember best. gpuccio
You are sorry, Adel- otherwise you would have answered the questions. That you didn't pretty much demonstrates you can't. And that you can't proves my point. Joseph
The point is that YOUR position doesn’t have anything to offer besides “it evolved”.
Sorry, Joseph, The point is that your position has nothing to offer. Adel DiBagno
Nakashima: I will try to sum up very briefly my views about the calculation of FSCI in proteins. At present there are at least two different approaches. 1) The first, and more fundamental, is to define a function in a context (you are perfectly right, every function is defined in a context). So, for instance, let's consider an enzyme which catalyzes a specific reaction in a cell. The function is then defined as a minimum level (arbitrarily fixed) of enzymatic activity in that cell environment. Then we look at the protein length, and define as search space the total space of configurations of that length L, that is 20^L (that is an approximation, because obviously shorter and longer proteins can have the same activity, but it is simpler to reason about a fixed length, at least as a first approach). Up to now, everything is simple. Now comes the difficult part. We have to calculate, at least approximately, the subset of the search space which has the defined function at the defined minimum level. Let's call that the target space. The size of the target space for a specific function is the most difficult variable to assess, even as an order of magnitude. Indeed, at present no one can define it with certainty. That's where the opinions of IDists and darwinists necessarily diverge: we do believe that the target space, however big, is anyway a tiny fraction of the search space. Darwinists hope for huge functional spaces, and profit as much as they can from the present partial ignorance about the relationship between protein structure and function. But one thing is certain: this is an issue which is going to be clarified, and in a relatively short time. So, this particular "gap" in our knowledge will be filled, and we will see who is right. Once we have an approximate idea of the size of the target space, the rest is easy.
The ratio of the target space to the search space expresses well enough the probability of accessing the target space by a random search "from scratch", under the very reasonable assumption of a uniform distribution for the nucleotide sequences in a random biochemical system. At that point, that probability has to be compared with the existing probabilistic resources in the assumed biological model (available time, reproduction rate, population size, etc.), or, more simply, a threshold can be assumed as low enough to reject the random hypothesis in "any" biological context (I have suggested that such a threshold could be fixed at about 10^-30 or 10^-50 for the biological context. Let's remember that Dembski's UPB of 10^150 was an extreme value intended to cut out any random hypothesis in the whole known universe...). One thing should be clear. The above model refers to calculations of absolute FSCI for one protein, for one specific function, and assuming a random generation from scratch. And it does not take into account any necessity mechanism, like NS. IOW, the above scenario is more appropriate for a partial analysis of OOL scenarios. So, let's go to different mechanisms which could more directly interest darwinists. We can apply the same principles to calculate the variation in FSCI in an evolutionary transition. But to do that, a specific evolutionary transition has to be proposed. IOW, if darwinists decide to propose a specific model of protein transition (something like: this protein superfamily derived from this other one in such and such time, in such and such population, with such and such mutation rate), then such a model can be quantitatively tested.
In that case, we have to calculate the minimum necessary variation which transforms protein and function A into protein and function B: the search space and the target space will be defined for that variation, and not for the whole protein, and again the probability of a random emergence of such a functional variation will be assessed, and compared with the available probabilistic resources in the defined model. And NS? Well, it can be incorporated in the model easily. Once darwinists define explicitly what has been selected and why at the different steps, we can do the calculation again for each gap between selectable steps, and compare the probability to the available resources for that step. In any case, any explicit model can be tested quantitatively, even if we may have to wait some time to get the necessary details to test it. But generic models, "just so stories", can never be tested. 2) Let's go to the second approach: in a way, it's easier, and bypasses some of the difficulties in the above approach. It's the "Durston" method, of calculating the variation in Shannon's H in protein families, and deducing FSCI from that. I will not go into detail about that, and just refer you to the Durston paper, already cited in a previous post. gpuccio
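[Editor's note: the first approach described in the comment above reduces to a few arithmetic steps, which can be sketched as follows. The target-space size is an assumption plugged in for illustration (the ~10^49 figure quoted earlier in this thread for the Ribosomal S12 family), not something the code derives.]

```python
import math

def functional_bits(length, target_space):
    """-log2(target/search) for a protein of the given length, with
    search space 20**length (the fixed-length approximation above)."""
    search_space = 20 ** length
    return -math.log2(target_space / search_space)

# Ribosomal S12 numbers quoted in this thread: 121 AAs, target space ~10^49.
bits = functional_bits(121, 10 ** 49)
print(round(bits))  # about 360 bits, close to the 359 Fits quoted above

# Comparison against the suggested 1:10^50 biological rejection threshold:
threshold = -math.log2(1e-50)  # about 166 bits
print(bits > threshold)
```

Because 10^49 is only an order-of-magnitude estimate, the result lands near, not exactly on, the 359 Fit figure from the Durston paper.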
Mr Nakashima [54], I think you have me confused with gpuccio. I'm as curious as you are. Adel DiBagno
Mr DiBagno, I am sadly ignorant of how to calculate FSCI and or FCSI. So I have to ask, if I want to calculate the FSCI of another protein, do I assume anything about the prior probability of oxytocin (and many other small proteins)? Do I assume the solvent is water of a specific temperature and pressure? It seems to me that function is very much dependent on context. Nakashima
Adel, What part of transcription and translation, complete with proof-reading, error-correction and editing, strikes you as being cobbled together via an accumulation of genetic accidents? And how can we test the premise that a bacterial flagellum, for example, arose from a population that never had one via an accumulation of genetic accidents? The point is that YOUR position doesn't have anything to offer besides "it evolved". Joseph
Khan:
true enough, but if the egg had as much control over development as you claim, you would expect the cloned gaur to have at least some cow-like features. but it did not.
A gaur is very cow-like. Or are you saying that it should have been more feminine? And BTW cartilage is NOT bone. And that means there isn't any such thing as "cartilaginous bones". Thanks for proving that you are either dishonest or have absolutely no clue. Joseph
...Rybczynski is either a more common name than I thought...
I wonder whether Hazel is distantly related. Thanks for the link. Interesting stuff! Alan Fox
gpuccio, Thank you for your informative #48. Out of courtesy I will defer comment pending the outcome of your discussion with Mr Nakashima concerning FCSI. Nakashima [42]:
Yes, I must have missed those detailed and quantitative discussions of calculating FCSI and/or FSCI. If you can provide links I will try to catch up.
To which gpuccio replied [44]:
And I will try to give you the links, as soon as I have time to find them.
I also would appreciate perusing your links. (I apologize for jumping in with my recollection about the FCSI of oxytocin. Please carry on.) Adel DiBagno
Hi, Sorry for the off topic comment, but I just wanted to bring your attention to this site about the new fossil seal. It looks like Rybczynski is either a more common name than I thought it was, or being banned from UD leaves you copious free time to roam the high arctic in search of fossils. Congratulations Natalia! Nakashima
Nakashima, Adel: Oxytocin is a small peptide. Again, please look at the numbers in the Durston paper; you will find many more interesting facts. And the UPB is not, IMO, a good threshold to infer design. Personally, I would accept 1:10^30, or if you want up (or down) to 1:10^50, as much more useful thresholds for the biological context. But there is another approach which gives us a more realistic idea of where we are with darwinian explanations. Behe in TEOE has suggested that, in natural models like malaria, random mutations can, at best, provide two coordinated necessary mutations under a very strong selective pressure. Behe's arguments have been disputed (IMO, without any reasonable refutation). But that's not the point. The point is, let's say that Behe is partially wrong, and that in natural contexts like malaria (big populations, very strong and specific selective pressure, and very high reproductive rate) 3, or even 4, coordinated necessary mutations may sometimes happen. Where are we with darwinian theory, then? We are completely nowhere. 3 or 4 coordinated mutations cannot build anything of importance. I will be generous. Try to build a path from one protein to another one (different proteins, different superfamilies, different domains) by any sequence of 4 coordinated mutations, where each step is definitely selectable. That would be a model. Adel, you "have been following this site since Kitzmiller et al. in 2005". Well, I have been following the scientific literature for even longer, and I am still waiting for such a model to be presented. Please remember that all the models of microevolution we know of with some credibility involve 1, or at best 2, coordinated mutations. That is your explanatory power. That is your model. Those are your "testable hypotheses". You have nothing. The research I think of is already coming out everywhere. All biological research is ID research, because, you see, ID is not "another" theory which should change biology.
ID is the only way to make sense of biology as it is. We only have to get rid of the pseudo-scientific interpretations which have corrupted the theory of origins of biological information, not biology itself. It's a shift of perspective, but the facts remain the same facts. But our interpretations will change, and will change much faster as our accumulation of facts increases. I don't understand what you expect from ID. Personally, I expect nothing from darwinian theory, and very much from biological research. That is a fundamental difference, I believe, between us. For me, research is research: it is not done to defend darwinism or to defend ID. It is done to understand reality. The more we know, the more we can understand, even if we don't spend all our time and resources to demonstrate that we are politically correct and that we strictly obey Popper, or Kuhn, or Feyerabend, or the next maître à penser of the year. So, biological research is biological research, and it is for all, and it is owned by nobody. The more we know, the more we can understand. If ID is correct in its assumptions, the more we know the more ID will obviously be the best explanation. The opposite should be true if darwinian theory is correct. That's the only prediction which really counts. So, you can ask: "How is that research coming along?". But my simple answer is: it has been coming along for decades, and still it is coming along day by day. In the papers you quote, and in all the others you and I don't even know of. You believe that ID is in the gaps. I believe the opposite: ID is in the details, in the numbers, in the defined paths, in the accurate description of how intelligently designed things work. That is and will be the strength of ID. In the meantime, let's see how the darwinists of the gaps try to cross the 2 (or, if you want, the 4) aminoacids boundary... gpuccio
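[Editor's note: the "coordinated mutations" argument invoked in the comment above is, at bottom, a back-of-the-envelope probability estimate. A hypothetical sketch follows; the per-site mutation rate of 10^-8 per replication is an illustrative assumption, not a figure from the thread, and the independence assumption is a simplification.]

```python
def waiting_replications(k, mu=1e-8):
    """Expected number of replications before k specific coordinated
    mutations co-occur, assuming independent events with per-site
    mutation rate mu (so the joint probability per replication is mu**k)."""
    return (1.0 / mu) ** k

# Each additional required mutation multiplies the expected waiting
# time by 1/mu, which is why the step from 2 to 4 coordinated
# mutations is argued to matter so much:
for k in (1, 2, 3, 4):
    print(k, f"{waiting_replications(k):.0e}")
```

Under these toy assumptions, each extra coordinated mutation raises the required number of replications by eight orders of magnitude.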
Mr Nakashima, Regarding FSCI of proteins, I remember a thread a couple of months back in which gpuccio calculated the FSCI of oxytocin. As I recall, the number was small enough to fit well within the universal probability bound, but that did not resolve the question of whether it was designed. Adel DiBagno
gpuccio [40]: I enjoy your good humor, especially when you disarm my crankiness.
Explanation 1...Research will show who is right. Explanation 2...There are, certainly, many things we don’t know: who is the designer, how does he implement the information, and so on. But those are not gaps. They are only things to research.
How is that research coming along? I have been following this site since Kitzmiller et al. in 2005. So far, I have seen no progress in the arguments, nor any testable (or tested) hypotheses. Compare Astrology, which led to Astronomy, and Alchemy, which led to Chemistry. But I have hopes, and I'll keep an eye out. Adel DiBagno
womanatwell: ID can certainly approach the quantitative analysis of FSCI in different kinds of proteins. That's the easiest model to deal with. Regarding a cell state, I would say that its FSCI is certainly "at least" as great as the sum of its minimum necessary proteins. That's big enough, just to start. Obviously, there are higher levels of organization which are not so easy to analyze quantitatively. That's why for the moment nobody has really attempted to do that. But those levels almost certainly have higher FSCI than the simple protein sequences, and moreover that FSCI is "in addition" to the simple FSCI of proteins and DNA genes. For the quantification of FSCI in proteins, I always recommend the Durston paper (and method) as an interesting start: "Measuring the functional sequence complexity of proteins". It's freely available on the web. gpuccio
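[Editor's sketch, not part of the original comment: the core of the Durston method referenced above can be summarized as the per-column drop in Shannon entropy between a null state (all 20 amino acids equiprobable, log2(20) ≈ 4.32 bits) and an alignment of functional sequences. This is a simplified reading of the paper's "fits" metric, and the three-sequence alignment below is a made-up toy, not real data.]

```python
from math import log2
from collections import Counter

def fits(alignment):
    """Simplified functional sequence complexity, in 'fits'
    (functional bits): the sum over alignment columns of
    log2(20) minus the observed column entropy."""
    total = 0.0
    nseqs = len(alignment)
    for col in range(len(alignment[0])):
        counts = Counter(seq[col] for seq in alignment)
        h = -sum((c / nseqs) * log2(c / nseqs) for c in counts.values())
        total += log2(20) - h
    return total

# Toy alignment: 3 fully conserved sites give 3 * log2(20) fits
print(round(fits(["MKV", "MKV", "MKV"]), 2))  # 12.97
```

A fully conserved column contributes the maximum log2(20) bits; a column where the amino acid is free to vary contributes close to zero, which is how the metric separates constrained from unconstrained sequence.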
Nakashima:
But further, is it your position that every speciation event (under some non-tautological definition of species) is the result of intelligent intervention? That is how I understand your Explanation 2, above.
Yes, it is. And I will try to give you the links, as soon as I have time to find them. gpuccio
gpuccio,
...in ID (and certainly in my posts) you will find so many arguments about protein complexity and the related probability resources, and much less, say, about the general regulation of transcriptomes in multicellular beings.
Would you consider an initial cell state as something ID can analyze? It seems a crucial question about the viability of probionts is whether they can exist without cell-membrane regulation of pH, osmotic pressure, electrolytes, and other mechanisms of transport, which use specific membrane transport proteins along with a selectively permeable membrane. Perhaps you have a link for work already done in this area relating to FSCI and/or CSI. womanatwell
Mr Gpuccio, Yes, I must have missed those detailed and quantitative discussions of calculating FCSI and/or FSCI. If you can provide links I will try to catch up. But further, is it your position that every speciation event (under some non-tautological definition of species) is the result of intelligent intervention? That is how I understand your Explanation 2, above. Nakashima
Joseph,
The definition of “species” is ambiguous. IOW the two animals could be more closely related than we think.
true enough, but if the egg had as much control over development as you claim, you would expect the cloned gaur to have at least some cow-like features. but it did not.
Are sharks fish? And do shark fins have skeletal bones?
yes and yes. cartilaginous bones, but bones nonetheless. it's irrelevant anyway bc no one is suggesting that tiktaalik arose from shark-like ancestors. your claim of hand bones poofing into tiktaalik from nowhere is completely mistaken. Khan
Adel: You are definitely more provoking when you give some argument or quote literature, but I can easily accept your more "emotional" expressions in the name of our old friendship :-) It's rather simple: the daily progress of biology is the greatest friend of ID, because darwinism is based on lack of knowledge and on approximation, while research can give us the details that we need to understand and prove the simple concept that biological information can only be understood as the product of design. It's darwinism which thrives in the lack of understanding, because its "just so stories" have no explanatory power, and could stand no quantitative analysis. The poor and imaginary power of random variation and NS is only a myth, and like all myths it will not stand long in the face of facts. And I am afraid you are a little mistaken about gaps: ID does not need any gap. ID only needs that what is designed be recognized as designed, and that all the false explanations given up to now to deny that simple fact be recognized for what they are: false. In the design process there is no gap: the conception of the designer is implemented through a continuity of thought and information. In darwinian theory, there is no continuity of process, other than in the deluded hopes of darwinists: that's why darwinian theory is full of gaps, while ID isn't. To be more clear: Fact 1: a species is present. Fact 2: after some time, a new species arises, more or less similar. Explanation 1: RV+NS transform species 1 into species 2. As we cannot find any reasonable model and quantitative verification of that model, explanation 1 is full of gaps. You will say that those gaps are due to our poor knowledge of the details. I don't agree. I say that they are due to the poor quality of the explanatory model. Research will show who is right. Explanation 2: species 1 is transformed into species 2 because a designer inputs the necessary information to implement the change. 
You can say that there are gaps in this theory, but that's not true. There are, certainly, many things we don't know: who is the designer, how does he implement the information, and so on. But those are not gaps. They are only things to research. Because we "know" that a designer can implement information and obtain those results. We do that daily, even in biology. So, the model is consistent (which does not mean it is true, but it's a good start). gpuccio
Nakashima: I thought we had made quite a few quantitative arguments about FSCI in proteins. Maybe you have not read them. On the contrary, I am still waiting for a quantitative model of macroevolution... gpuccio
gpuccio:
As you know, I strongly believe that the daily progress of biology is the greatest friend of ID.
What could be the basis of this strange belief, which other ID supporters here have expressed? I hypothesize that it is based on the fundamental premise of ID: where there are gaps in evidence, an Intelligent Designer must have intervened. Therefore, regular science assists ID pseudoscience by energetically creating more gaps! (Just as each transitional fossil discovery generates two more missing links.) Happy to be of service! On further reflection, I think it is unfair to characterize ID as pseudoscience, since real pseudosciences have historically had research programs, involving testable prediction and experimentation. It will be interesting to watch whether ID ever rises to the level of Astrology or Alchemy. Adel DiBagno
Mr Gpuccio, You see, we in ID prefer to discuss things where we can go into details and be quantitative, rather than give “just so stories”, even if about issues which would certainly be in favour of our position. "Can" is not "do". The last detailed and quantitative discussion on this site that I recall was about the performance metrics of a toy program written in 1986. The discussion didn't go very well for the design detection perspective, if I recall. What was the previous detailed and quantitative discussion before that? I suppose we shouldn't include any discussion of microevolution, population genetics, or common descent, since those are not areas of controversy. We should probably keep all software simulations off the table also, since Mr Scheesman informs me that they are irrelevant (pace Dr Dembski, Mr Kairosfocus, etc.). Nakashima
Jerry, I read the book- evo in 4 dimensions. Good read and good stuff but to me the authors worshipped natural selection just a little too much. And yes real good stuff on epigenetics... Joseph
The Common carp or European carp (Cyprinus carpio) is a widespread freshwater fish most closely related to the common goldfish (Carassius auratus), with which it is capable of interbreeding.
Joseph
gpuccio,
You are an inexhaustible source of indications!
On the contrary, I am easily exhausted. But I think that a useful response to arguments from ignorance is to present evidence that may alleviate ignorance. Please resist the urge to randomly insult people. -- Admin Adel DiBagno
I think the answers to a lot of these questions reside in the relation of structure to function, both in proteins and now, as epigenetics is finding, DNA. DNA sequences are one-dimensional. The structures actually involved are three dimensional. As gpuccio noted, time is also involved as these structures degrade and are replaced quickly. It could be considered four dimensional in that regard. tragic mishap
Nakashima: I am not sure what exactly you mean by "developmental programs", but if your point is that the nature of regulations and the development of body plans and the general coordination of multicellular organisms are a stronger argument for ID than mere protein sequence complexity, I may agree with you. But the problem is exactly that: those are levels of complexity about which we really do not understand much (unless Adel's issue of sciencemag may shed new light on them...). That's really the only reason why in ID (and certainly in my posts) you will find so many arguments about protein complexity and the related probability resources, and much less, say, about the general regulation of transcriptomes in multicellular beings. You see, we in ID prefer to discuss things where we can go into details and be quantitative, rather than give "just so stories", even if about issues which would certainly be in favour of our position. We are not darwinists, after all... :-) gpuccio
Adel: You are an inexhaustible source of indications! :-) OK, I will try to read as much as I can. As you know, I strongly believe that the daily progress of biology is the greatest friend of ID. And, beyond that, I am really curious, which is always the best motivation... gpuccio
Ms Womanatwell, It would be even more amazing if a different system than concentrations of molecules were used. Bacterial signaling is based on concentrations. I do not know if it is possible to trace a homology in this case. Nakashima
How do discoveries such as this affect the standard argument re design in natural environments? Is there an implication that developmental programs are too intricate to have developed without a designer? Since this would be a more specific claim than the general ID hypothesis, it would also be stronger. Nakashima
Correction: That was Ingolia et al.: http://www.sciencemag.org/cgi/content/abstract/324/5924/218 Adel DiBagno
Hi, gpuccio, I'm here, and your post indicates that you are in fine fettle.
Where is the plan? Where are the details? Where are the procedures?
I have been catching up on my reading in Science magazine, to which I have a subscription. The details that are being developed are staggering. I don't have the intellectual capacity or the energy to do more than direct you to the Website: http://www.sciencemag.org/ Take a look at the 10 April issue, for example, especially the paper by Ignolia et al. And, of course, there are scores of other front-line scientific journals plowing the same ground. If you are really interested in finding answers, you must read the literature. Adel DiBagno
Interesting thread... I would be glad to admit that both genetic and epigenetic information is contributing to the final results, and maybe trans-species cloning experiments (which are probably still too few) can clarify some of the influences. But the problem is IMO different. The problem is: where is the information, and how is it stored? I will be more clear. There is a part of the information which we know pretty well (but probably not as well as we think). That is the part in the protein coding genes. Well, we do not really understand everything. The very correct citations about alternative splicing, exon shuffling, and, I could add, post-translational modifications and many other possible steps, clearly show that the genesis of the proteome is much more complex than a simple transcription and translation of a few nucleotide sequences. But that's not really the point. The point is that, for protein coding genes, we have at least some idea of how the information is stored. We have the genetic code, a symbolic code which allows us to read the information, to understand at least in part its structure and function (building the right proteins). But what about the rest? Are we still believing that the only problem is having the right proteins? No, that is only a very "small" part of the problem (small in comparison with the rest, I mean). No. The true problem is: which proteins do we need, and when, and in what quantity? That is a dynamic problem. It changes at each moment and state. And, above all, it changes for each cell type, in multicellular beings. IOW, that is a problem of control. In a software, that would grossly correspond to the procedures. What is the program doing, and when? That is the real problem. In biological information, be it in DNA or in epigenetic structures, we have no idea of where and how the procedures are stored. It may be interesting to know if and how much they are in the nucleus, in the DNA, in the cytoplasm, or who knows where. 
But the true problem is: what kind of information is that? How is it written? How is it stored? So, I agree with magnan: nobody knows where the information is coded. But nobody knows how it is coded, either. Let's take non-coding DNA. There is little doubt that some of the regulation information is there, but the good question is: how? Is it in introns? And if so, what kind of information is it? Why are so many and so long introns necessary? Why are genes so fragmented? Probably, to achieve different levels of control. But again, what controls? Where is the controlling scheme? And so for macroscopic schemes, too. We can agree that many human functions are possible because of the ordered connections of our 10^11 neurons in 10^14 synapses. Well, I suppose that is a rather complex scheme of connections. I don't believe it can be realized without written code. We have 3x10^9 nucleotides in our genome. A long sequence, but is it long enough? And that, including everything. Protein coding DNA is only 1.5% of that. Nucleotides are grouped in codons in the genetic code. But that is only to code for proteins. But how does it work for all the rest? How are individual transcriptomes implemented, controlled, corrected? How is macroscopic form achieved? How is intercellular regulation regulated? If transcription factors control the transcriptome, what controls the sub-transcriptome of transcription factors? (Adel, are you there?) Where is the plan? Where are the details? Where are the procedures? As Sermonti says, why is a fly a fly? gpuccio
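[Editor's sketch, not part of the original comment: gpuccio's question of whether 3x10^9 nucleotides is "long enough" can at least be bounded with back-of-envelope arithmetic, assuming the textbook 2 bits per nucleotide (log2 of 4 standard bases) and the 1.5% protein-coding fraction cited in the comment. This says nothing about how the information is organized, only about raw capacity.]

```python
GENOME_NT = 3e9          # nucleotides in the human genome (from the comment)
BITS_PER_NT = 2          # 4 standard bases -> log2(4) = 2 bits each
CODING_FRACTION = 0.015  # protein-coding share cited in the comment

total_bytes = GENOME_NT * BITS_PER_NT / 8
coding_bytes = total_bytes * CODING_FRACTION
print(f"whole genome: ~{total_bytes / 1e6:.0f} MB")    # ~750 MB
print(f"coding DNA:   ~{coding_bytes / 1e6:.2f} MB")   # ~11.25 MB
```

So the whole genome holds on the order of 750 MB at 2 bits per base, of which only a small slice is protein-coding; this is the arithmetic behind the head post's worry about "data storage space".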
What amazes me in embryonic development is that genetic receptors are sensitive to the CONCENTRATIONS of molecules which diffuse from the mother cells (as studied in Drosophila) to the embryo's DNA. This determines the differential development of head, thorax and tail. See: http://www.princeton.edu/~wbialek/our_papers/gregor+al_cell07b.pdf womanatwell
There is also a phenomenon called overlapping gene instructions. Namely, the same DNA sequence will code for more than one protein. Here is a link: http://genome.cshlp.org/content/14/2/280.full.pdf There is an example from Don Johnson's book on evolution. Here it is: ATGTGTGATGCTACCCCCTAGTCCAAAAGGGCACCTTG Note the start codon, ATG, appears three times and each can represent an entirely different protein. So this is one way that a sequence can produce multiple proteins. And because this frequently represents a frame shift the amino acids can be quite different. jerry
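[Editor's sketch, not part of the original comment: jerry's frame-shift point can be checked mechanically by scanning a sequence for ATG at every offset and noting which of the three reading frames each start codon falls in. The short sequence used below is a made-up example with one start codon in each frame, not the sequence from the book; translation itself is omitted.]

```python
def find_starts(seq):
    """Return (position, reading frame) for every ATG in seq.
    Start codons in different frames yield entirely different
    downstream amino-acid sequences."""
    return [(i, i % 3) for i in range(len(seq) - 2) if seq[i:i+3] == "ATG"]

print(find_starts("ATGCATGGATGAA"))  # [(0, 0), (4, 1), (8, 2)]
```

Because the three frames partition the same letters into different codons, a single stretch of DNA can, in principle, specify several unrelated peptides, which is the overlap jerry describes.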
Development is still largely a mystery. I am sure that they know a lot more than they did 10 years ago and each period brings new knowledge. But how it all unfolds is still basically a mystery. Also development takes place long after birth as we all know about growth and changes that take place in different species after birth. Just look at your children and if you do not have any, look at what has happened to others over time. And then it all winds down and at different rates per species. Again a mystery. You would think that longevity would be a trait that would be number one for natural selection but it seems to have eluded it. If one wants to read an interesting but often convoluted book on epigenesis and other interesting topics, get Jablonka and Lamb's Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life. These are far from pro ID people but there is nothing that undermines ID in the book. Also at one point they make the observation that there is no evidence for the origin of any species. jerry
Basically, DNA in the cell has an incredibly complex shape. It's folded many different times and many different ways. http://www.youtube.com/watch?v=5UoKYGKxxMI I assume that epigenetics has something to do with how chromosomes are packed in the cell. Ultimately though, it must depend on DNA somewhere, since you can't just carry around proteins from your mother's egg for the rest of your life. tragic mishap
Joseph:
Had they tried that with an egg of a TOTALLY different species the outcome would have either resembled the egg species or would not have developed.
This outcome is consistent with the research finding here:
Cytoplasmic Impact on Cross-Genus Cloned Fish Derived from Transgenic Common Carp (Cyprinus carpio) Nuclei and Goldfish (Carassius auratus) Enucleated Eggs

Abstract: In previous studies of nuclear transplantation, most cloned animals were obtained by intraspecies nuclear transfer and are phenotypically identical to their nuclear donors; furthermore, there was no further report on successful fish cloning since the report of cloned zebrafish. Here we report the production of seven cross-genus cloned fish by transferring nuclei from transgenic common carp into enucleated eggs of goldfish. Nuclear genomes of the cloned fish were exclusively derived from the nuclear donor species, common carp, whereas the mitochondrial DNA from the donor carp gradually disappeared during the development of nuclear transfer (NT) embryos. The somite development process and somite number of nuclear transplants were consistent with the recipient species, goldfish, rather than the nuclear donor species, common carp. This resulted in a long-lasting effect on the vertebral numbers of the cloned fish, which belonged to the range of goldfish. These demonstrate that fish egg cytoplasm not only can support the development driven by transplanted nuclei from a distantly related species at the genus scale but also can modulate development of the nuclear transplants.
http://www.bioone.org/doi/abs/10.1095/biolreprod.104.031302 steveO
Sorry: "post-translational" In fact it's one of the same modifications, simple methylation. tragic mishap
bFast, I think Wikipedia is a bit behind on this one. http://www.sciencedaily.com/releases/2009/04/090401181447.htm From the article: “an epigenetic trait is a stably inherited phenotype resulting from changes in a chromosome without alterations in the DNA sequence.” So if the fifth and sixth nucleotides alter the shape of the chromosome, then they would technically be epigenetic traits, since they are basically modified cytosine and don't change the base sequence. Which means that they aren't really a "fifth" and "sixth" base, just a modified base. It's just like post-transcriptional modifications to proteins. tragic mishap
Alternative gene splicing (AGS)- More evidence for ID. AGS is a process that can take one gene and make several different proteins from it by editing. This editing not only takes out the unneeded introns and splices the exons together. It can also rearrange the exons or edit specified exons out to get other products. The 1 gene = 1 protein position has been abandoned years ago... Joseph
magnan:
But presumably the cytoskeleton and ribosomes and other cellular apparatus (and their extra-nuclear developmental coding information) are constructed from plans ultimately in the nuclear DNA.
I doubt that. The CELL is complete. The CELL is what replicates, along with ALL of its structures. There isn't any data which would demonstrate that DNA alone can account for everything in a cell. THAT is the main issue with abiogenesis. DNA needs everything in the cell before it can be replicated. DNA is NOT a replicator.
The bottom line is nobody knows where the information is coded.
True, but I have a pretty good idea.
There doesn’t seem to be enough conserved DNA in the nucleus.
DNA is just a medium just as a disk is a medium for carrying computer info. The sequence is only important to carry out the coded instructions per the prescribed genetic code. Joseph
bFast:
The “disk” is the nucleotides. The information is the assembled order of those nucleotides.
There isn't any evidence for that. I understand that is what the anti-ID position requires but it does not hold water when compared with the data. Joseph
Khan, The definition of "species" is ambiguous. IOW the two animals could be more closely related than we think. Perhaps I should have been more clear. Had they tried that with an egg of a TOTALLY different species the outcome would have either resembled the egg species or would not have developed. Are sharks fish? And do shark fins have skeletal bones? Joseph
11 - sparc:
That said, I don’t see a fifth and sixth nucleotide type to be epigenetic. Nor do I see that a fifth or sixth nucleotide offers much more opportunity for information increase.
Actually 5-Methylcytosine is well known as the 5th base. And yes, it adds information (X-inactivation, parental imprinting).
I don't want to speak for bFast, so he will need to comment. But my interpretation is that having additional bases does not affect the inability of random mutation to generate new information, not that the additional bases can't contain information like the others. If genes are really like a "cube" of information, where characters make up specified information horizontally, vertically, "into the paper", diagonally (in 3 dimensions), and even in reverse, I am completely unable to comprehend the belief that random mutation could ever arrive at that result. I thought it was far-fetched when I was taught each gene had one specific function. uoflcard
A doughnut to go with that coffee: http://en.wikipedia.org/wiki/Alternative_splicing Adel DiBagno
bfast [10]:
What have exons got to do with it? What is your calculation? 4 exons = 15 different proteins? Basic genetic theory says that a gene codes for a protein, no matter how many exons it contains.
Wake up and smell the coffee: http://en.wikipedia.org/wiki/Exon_shuffling Adel DiBagno
That said, I don’t see a fifth and sixth nucleotide type to be epigenetic. Nor do I see that a fifth or sixth nucleotide offers much more opportunity for information increase.
Actually 5-Methylcytosine is well known as the 5th base. And yes, it adds information (X-inactivation, parental imprinting).
I say that because according to Jonathan Wells if we take the DNA of one species and put it into an egg of another, if anything develops it will resemble the egg’s species.
Wasn't that sorted out more than 50 years ago? You may google Acetabularia mediterranea, Acetabularia wettsteinii and Hämmerling. sparc
Nakashima:
If on average, genes were broken up into 4 exons, then they could code for 15 different proteins.
What have exons got to do with it? What is your calculation? 4 exons = 15 different proteins? Basic genetic theory says that a gene codes for a protein, no matter how many exons it contains. Joseph:
I hold that the information in living organisms is very similar to the information “on” a computer disk- the disk is not the information.
The "disk" is the nucleotides. The information is the assembled order of those nucleotides. Khan, point well made. It would seem that the fundamental difference between a cow and a gaur is held in the DNA. I'd love to see a mouse egg implanted with a lizard's genes. If such worked, it would suggest that there is very little evolution going on in the exonic material. bFast
Of note:Evolution Is Not Even A Proper Scientific Theory - The Crushing Critique Against Genetic Reductionism - Dr. Arthur Jones - http://www.tangle.com/view_video.php?viewkey=26e0ee51239e23041484 If you were to write a (very large) book similar to the DNA code, you could read many parts of the book normally and it would have one meaning, you could read the same parts of the book backwards and it would have another completely understandable meaning. Yet then again, a third equally coherent meaning would be found by reading every other letter of the same parts. A fourth level of meaning could be found by using a simple encryption program to get yet another meaning. A fifth and sixth level of meaning could be found in the way you folded the parts of the book into specific two and three dimensional shapes. Please bear in mind, this is just the very beginning of the mind bending complexity scientists are finding in the DNA code. Indeed, a study by Trifonov in 1989 has shown that probably all DNA sequences in the genome encrypt for up to 12 different codes of encryption!! No sentence, paragraph, book or computer program man has ever written comes close to that staggering level of poly-functional encryption we find in the DNA code of man. Here is a quote on the poly-functional nature of the DNA from renowned Cornell Geneticist and inventor Dr. John Sanford from his landmark book, “Genetic Entropy”: There is abundant evidence that most DNA sequences are poly-functional, and therefore are poly-constrained. This fact has been extensively demonstrated by Trifonov (1989). For example, most human coding sequences encode for two different RNAs, read in opposite directions i.e. Both DNA strands are transcribed ( Yelin et al., 2003). Some sequences encode for different proteins depending on where translation is initiated and where the reading frame begins (i.e. read-through proteins). Some sequences encode for different proteins based upon alternate mRNA splicing. 
Some sequences serve simultaneously for protein-encoding and also serve as internal transcriptional promoters. Some sequences encode for both a protein coding, and a protein-binding region. Alu elements and origins-of-replication can be found within functional promoters and within exons. Basically all DNA sequences are constrained by isochore requirements (regional GC content), “word” content (species-specific profiles of di-, tri-, and tetra-nucleotide frequencies), and nucleosome binding sites (i.e. All DNA must condense). Selective condensation is clearly implicated in gene regulation, and selective nucleosome binding is controlled by specific DNA sequence patterns - which must permeate the entire genome. Lastly, probably all sequences do what they do, even as they also affect general spacing and DNA-folding/architecture - which is clearly sequence dependent. To explain the incredible amount of information which must somehow be packed into the genome (given that extreme complexity of life), we really have to assume that there are even higher levels of organization and information encrypted within the genome. For example, there is another whole level of organization at the epigenetic level (Gibbs 2003). There also appears to be extensive sequence dependent three-dimensional organization within chromosomes and the whole nucleus (Manuelides, 1990; Gardiner, 1995; Flam, 1994). Trifonov (1989), has shown that probably all DNA sequences in the genome encrypt multiple “codes” (up to 12 codes). bornagain77
Joseph: "IOW the information rides on the DNA, RNA, ribosomes, cytoskeleton and other structures in each cell." This seems to be the case. But presumably the cytoskeleton and ribosomes and other cellular apparatus (and their extra-nuclear developmental coding information) are constructed from plans ultimately in the nuclear DNA. If this is the case, then why is much of the non-coding (formerly called "junk") DNA not conserved? This would seem to imply that the cellular information-bearing structures carrying developmental data are not in the nucleus OR in the cytoskeleton, ribosomes, etc. By this reasoning much of the information to actually construct complex metazoans is neither in the nuclear DNA nor in the known molecular machines and structures, unless they replicate from coding information carried outside the nuclear DNA. But the cloning experiments mentioned by Khan imply otherwise. At each step in the development of the organism each cell apparently knows which genes to transcribe, how and how much, in accordance with the body plan. The total "transcriptome" search space is beyond huge. Most Darwinists seem to ignore the issue. If they attempt to address the issue, one favorite conjecture is that "the primary source of information determining what proteins (and therefore what traits) a cell will produce is the location (at that point in development) of the cell, especially the cells it is touching and the cells that are nearby" (Allen MacNeill). Of course this actually begs the question, since the information ultimately still has to be somehow encoded in a single cell. The bottom line is nobody knows where the information is coded. There doesn't seem to be enough conserved DNA in the nucleus. magnan
Joseph,
I say that because according to Jonathan Wells if we take the DNA of one species and put it into an egg of another, if anything develops it will resemble the egg’s species
You'll have to get a better source. if you transfer the dna from species 1 to a denucleated egg of species 2, then species 1 will develop. this is how they cloned a gaur with a cow surrogate mother: http://www.advancedcell.com/press-release/advanced-cell-technology-inc-announced-that-the-first-cloned-endangered-animal-was-born-at-730-pm-on-monday-january-8-2001 Did Jonathan Wells also say that fish fins have no bones? Khan
The egg seems to carry more information about the form than the DNA. I say that because according to Jonathan Wells if we take the DNA of one species and put it into an egg of another, if anything develops it will resemble the egg's species. As for epigenetics- just look at our body! The cells have the same DNA yet can be very different. Also I do not believe the information is the sequence. I hold that the information in living organisms is very similar to the information "on" a computer disk- the disk is not the information. IOW the information rides on the DNA, RNA, ribosomes, cytoskeleton and other structures in each cell. And just like a computer disk you will not see it through a microscope.
“Yet by the late 1980s it was becoming obvious to most genetic researchers, including myself, since my own main research interest in the ‘80s and ‘90s was human genetics, that the heroic effort to find the information specifying life’s order in the genes had failed. There was no longer the slightest justification for believing that there exists anything in the genome remotely resembling a program capable of specifying in detail all the complex order of the phenotype. The emerging picture made it increasingly difficult to see genes in Weismann’s “unambiguous bearers of information” or to view them as the sole source of the durability and stability of organic form. It is true that genes influence every aspect of development, but influencing something is not the same as determining it. Only a very small fraction of all known genes, such as developmental fate switching genes, can be imputed to have any sort of directing or controlling influence on form generation. From being “isolated directors” of a one-way game of life, genes are now considered to be interactive players in a dynamic two-way dance of almost unfathomable complexity, as described by Keller in The Century of The Gene.” Michael John Denton page 172 of Uncommon Dissent
Joseph
Mr bFast, If on average, genes were broken up into 4 exons, then they could code for 15 different proteins. So the ratio you quote can be explained by a lower average number of introns than 3. On the other hand, these epigenetic discoveries address when and where gene expression occurs. Nakashima
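[Editor's sketch, not part of the original comment: Nakashima's figure of 15 comes from counting the non-empty subsets of a gene's exons: with 4 exons there are 2^4 - 1 = 15 possible spliced products, keeping exon order. A minimal sketch, assuming splicing may include or skip each exon independently, which is an idealization:]

```python
from itertools import combinations

def exon_combinations(exons):
    """All non-empty, order-preserving subsets of a gene's exons,
    i.e. the possible spliced products under the naive model that
    each exon is independently kept or skipped."""
    out = []
    for r in range(1, len(exons) + 1):
        out.extend(combinations(exons, r))
    return out

variants = exon_combinations(["E1", "E2", "E3", "E4"])
print(len(variants))  # 2**4 - 1 = 15
```

Real alternative splicing is far more constrained than this, but the count shows where the 15 in the comment comes from.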
Patrick, I read the first sentence of your post and wondered why someone would compress you. Barry Arrington
tragic mishap:
It takes more than genes. Epigenetics is still about DNA, just not about gene sequences.
Hmmm. From Wikipedia:
In biology, the term epigenetics refers to changes in phenotype (appearance) or gene expression caused by mechanisms other than changes in the underlying DNA sequence, ... instead, non-genetic factors cause the organism's genes to behave (or "express themselves") differently.
I.e., if you take all of the DNA of a horse, and clone it into the cell of a rat, the non-DNA stuff of the cell is likely to give you something quite different from a horse. That said, I don't see a fifth and sixth nucleotide type to be epigenetic. Nor do I see that a fifth or sixth nucleotide offers much more opportunity for information increase. However, the non-DNA structure of cells may contain vast quantities of information. That said, the coder of DNA is much more brilliant than us software developers. There are about 20,000 genes in the average human, but about 100,000 different proteins. The kind of code overloading that goes on in DNA is incredible. bFast
It takes more than genes. Epigenetics is still about DNA, just not about gene sequences. tragic mishap
Very interesting! It seems that the 'it-takes-more-than-dna' approach is gaining momentum. I wonder what effects this has on the '98% similarity' argument. QuadFather
