Uncommon Descent Serving The Intelligent Design Community

Why describing DNA as “software” doesn’t really work

[Image: DNA simple.svg]

Check out Science Uprising 3. In contemporary culture, we are asked to believe – in an impressive break with observed reality – that the code of life wrote itself:

… mainstream studies are funded, some perhaps with tax money, on why so many people don’t “believe in” evolution (as the creation story of materialism). The fact that their doubt is treated as a puzzling public problem should apprise any thoughtful person as to the level of credulity contemporary culture demands in this matter.

So we are left with a dilemma: The film argues that there is a mind underlying the universe. If there is no such mind, there must at least be something that can do everything that a cosmic mind could do to bring the universe and life into existence. And that entity cannot, logically, simply be one of the many features of the universe.

Yet, surprisingly, one doesn’t hear much about mainstream studies that investigate why anyone would believe an account of the history of life that is so obviously untrue to reason and evidence.
Denyse O’Leary, “There is a glitch in the description of DNA as software” at Mind Matters News

Maybe a little uprising wouldn’t hurt.

Here at UD News, we didn’t realize that anyone else had a sense of the ridiculous. Maybe the kids do?

See also: Episode One: Reality: Real vs. material

and

Episode Two: No, You’re Not a Robot Made of Meat

Notes on previous episodes

Seven minutes to goosebumps (Robert J. Marks) A new short film series takes on materialism in science, including that of AI’s pop prophets

Science Uprising: Stop ignoring evidence for the existence of the human mind Materialism enables irrational ideas about ourselves to compete with rational ones on an equal basis. It won’t work (Denyse O’Leary)

and

Does vivid imagination help “explain” consciousness? A popular science magazine struggles to make the case. (Denyse O’Leary)

Further reading on DNA as a code: Could DNA be hacked, like software? It’s already been done. As a language, DNA can carry malicious messages

and

How a computer programmer looks at DNA And finds it to be “amazing” code

Follow UD News at Twitter!

Comments
Epigenetic regulation of glycosylation is the quantum mechanics of biology
Gordan Lauc, Vlatka Zoldoš
DOI: 10.1016/j.bbagen.2013.08.017
Biochimica et Biophysica Acta (BBA), Volume 1840, Issue 1, Pages 65-70
Highlights: The majority of proteins are glycosylated. Glycan parts of proteins perform numerous structural and functional roles. There are no genetic templates for glycans; instead, glycans are defined by dynamic interaction between genes and environment. Epigenetic changes enable adaptation to variations in environment. Epigenetic regulation of glyco-genes is a powerful evolutionary tool.
Abstract: Background: Most proteins are glycosylated, with glycans being integral structural and functional components of a glycoprotein. In contrast to polypeptides, which are fully encoded by the corresponding gene, glycans result from a dynamic interaction between the environment and a network of hundreds of genes. Scope of review: Recent developments in glycomics, genomics and epigenomics are discussed in the context of an evolutionary advantage for higher eukaryotes over microorganisms, conferred by the complexity and adaptability which glycosylation adds to their proteome. Major conclusions: Inter-individual variation of glycome composition in the human population is large; glycome composition is affected by both genes and environment; epigenetic regulation of “glyco-genes” has been demonstrated; and several mechanisms for transgenerational inheritance of epigenetic marks have been documented. General significance: Epigenetic recording of acquired characteristics and their transgenerational inheritance could be important mechanisms used by higher organisms to compete or collaborate with microorganisms.
OLV
July 4, 2019 at 05:02 PM PDT
Earlier @ 118 I asked: “I can spot at least half a dozen big flaws-- actually major blunders-- in [Sagan’s] argument. Does anyone else see them?” https://uncommondescent.com/intelligent-design/why-describing-dna-as-software-doesnt-really-work/#comment-679457 Gpuccio @123 pointed out a couple of problems. For example:
“It is false that only the active site is important for the protein function. The whole structure is very important, and it depends on most of the AA positions. The active site, certainly, has a very specific role, but it is only part of the story.”
Earlier @ 112 I pointed out a couple of other problems that Sagan doesn’t even mention.
Actually I believe that the probability of a 100 aa protein forming by chance would be (1 in 10^30) × (1 in 10^30) × (1 in 10^130) = 1 in 10^190, according to Meyer, or 1 in 10^125, according to Sauer. (The improbabilities multiply, so the exponents add.)
https://uncommondescent.com/intelligent-design/why-describing-dna-as-software-doesnt-really-work/#comment-679203 Even if we grant, for the sake of argument, Sagan’s claim that “it’s not a hundred places you need to explain, it’s only five to get going… And 20^5 is an absurdly small number, only about five million. Those experiments are done in one ocean between now and next Tuesday,” Sagan completely ignores (1) the chirality problem and (2) the problem of creating the right chemical bond-- a peptide bond. In the quote I provided above at 112, Meyer gives a very succinct explanation as to why:
The probability of building a chain of 100 amino acids in which all linkages involve peptide linkages is (1/2)^100 or roughly 1 chance in 10^30. Second, in nature every amino acid has a distinct mirror image of itself, one left-handed version or L-form and one right-handed version or D-form. These mirror-image forms are called optical isomers. Functioning proteins tolerate only left-handed amino acids, yet the right-handed and left-handed isomers occur in nature with roughly equal frequency. Taking this into consideration compounds the improbability of attaining a biologically functioning protein. The probability of attaining at random only L-amino acids in a hypothetical peptide chain 100 amino acids long is again (1/2)^100 or roughly 1 chance in 10^30. The probability of building a 100 amino acid length chain at random in which all bonds are peptide bonds and all amino acids are L-form would be (1/4)^100 or roughly 1 chance in 10^60…
Again, Sagan completely ignores these two problems, even though they were well known to OoL researchers at the time. Indeed, as a layman who had an interest in the subject at the time (1985), I knew about them. Why didn’t astronomer/astrobiologist Dr. Sagan know? In other words, even if we accept his 20^5, the probability of forming a single 100 aa protein by chance works out (converting to base 10) to roughly 1 in 10^66. However, that creates another problem that Sagan completely ignores. A single protein floating alone in an ocean won’t evolve into anything, even if we assume Darwinian evolution, because proteins do not self-replicate. So even a universe of oceans full of amino acids and proteins wouldn’t get you anywhere. This is why the majority of OoL researchers have moved on to the RNA world hypothesis, but that has a whole set of problems of its own. However, I am sure that Sagan’s audience was so enamored of his scientific “rock star” status that they gave him a complete pass on a subject that the average person knows very little about. Can anyone else see any other problems with Sagan’s argument?
john_a_designer
June 29, 2019 at 07:19 AM PDT
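As a quick sanity check on the arithmetic in the comment above (an editorial sketch, not part of john_a_designer’s comment): improbabilities multiply, so their base-10 exponents add. In Python:

```python
import math

# Meyer's three independent requirements for a 100 aa chain:
log10_peptide  = 100 * math.log10(2)   # all peptide bonds: (1/2)^100,  ~1 in 10^30
log10_chiral   = 100 * math.log10(2)   # all L-form:        (1/2)^100,  ~1 in 10^30
log10_sequence = 100 * math.log10(20)  # exact sequence:    (1/20)^100, ~1 in 10^130

print(f"combined: 1 in 10^{log10_peptide + log10_chiral + log10_sequence:.0f}")
# -> combined: 1 in 10^190

# Sagan's concession: only a 5-residue active site needs specifying.
print(f"20^5 = {20**5:,}")  # 3,200,000 ('about five million' in the quote)

# But bonding and chirality still apply to the whole 100 aa chain:
total = log10_peptide + log10_chiral + 5 * math.log10(20)
print(f"with bonds and chirality: 1 in 10^{total:.0f}")  # -> ~1 in 10^67
```

The last figure, 10^66.7, matches the “roughly 1 in 10^66” in the comment up to rounding.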
GP, good stuff, of course log_2 20 = 4.32, i.e. 4.32 bits/character. KF
kairosfocus
June 28, 2019 at 05:33 AM PDT
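KF’s bits-per-character figure above is a one-liner to verify (an editorial check, not part of the comment):

```python
import math

# Each position can be any of 20 amino acids, so a fully specified
# position carries log2(20) bits:
print(math.log2(20))  # 4.321928094887363
```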
gpuccio, Thanks very much, this is helpful. I was looking for a "BLAST for Dummies" type tutorial, but even those assume background that I don't have. Anyway, the way you have broken it down clears up a lot of questions I had.
daveS
June 27, 2019 at 06:24 AM PDT
DaveS (and all interested) (continued): IOWs, the Blast algorithm usually computes about 2 bits for each identity. Considering that we are dealing with logarithmic values, that is an extreme underestimation. But the Blast tool is easy to use, and universally used in biology. So I stick to its result, even if certainly too conservative for my purposes. Of course that is a lot of compensation for other possible factors (of course some of the identities could be a random effect, and there could be some redundancy in the functional information, and so on). Even considering all those aspects, one single protein like the beta subunit of ATP synthase is more than enough to infer design. And, as said, there are thousands of examples like that.
gpuccio
June 27, 2019 at 03:26 AM PDT
DaveS (and all interested): A couple of important points: 1) It is very important that we consider long evolutionary separations. If we compare the same protein in humans and chimps, it will be almost identical in most cases. But the meaning here is different. The evolutionary separation between humans and chimps is rather short. Therefore, neutral variation operated only for a very short time, and neutral sequences can be almost the same in the two groups just because there was not enough time to change. IOWs, the homology could be simply a passive result. 2) Identities are not the whole story. We must also consider similarities, IOWs AAs that are substituted by very similar ones. Now, let's see how the BLAST algorithm works. Again, let's consider the homology between human ATP synthase subunit beta and the same protein in E. coli. Proteins P06576 and P0ABB4. The length is similar, but not identical (529 vs 460). Comparing the two sequences, we find: Identities: 334; Positives: 382 (that includes the identities); Score: 660 bits; Expect: 0.0. IOWs, the algorithm has performed an empirical alignment between the two sequences, and found 334 identities and almost 50 similarities. The algorithm computes a bitscore of 660 and an E value of practically 0 (that is more or less a p value related to the null hypothesis that the two sequences are evolutionarily unrelated, and that the observed similarities are due to chance, given the number of comparisons performed and the number of sequences in the present protein database). Now, I will try to show why the Blast algorithm is very conservative, when used to evaluate functional information. If we reason according to the full potential of functional information, and just stick to identities, 334 identities would correspond to: 334 x 4.3 = 1436 bits. Indeed, the raw bitscore is given by the Blast algorithm as 1703 bits. But the algorithm works in a way that the final, corrected value is "only" 660 bits. IOWs, the Blast algorithm usually computes…
gpuccio
June 27, 2019 at 03:21 AM PDT
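The contrast gpuccio draws above can be made concrete with a few lines of Python (an editorial sketch; the identity count and bitscores are the ones quoted in the comment). Note that BLAST’s actual bitscore comes from a substitution matrix plus Karlin-Altschul statistics, not from this calculation; the sketch only compares the “full potential” information of the identities against the reported score.

```python
import math

identities = 334                   # human vs E. coli ATP synthase beta, quoted above
bits_per_identity = math.log2(20)  # ~4.32 bits if a position must be one exact AA

print(f"full potential: {identities * bits_per_identity:.0f} bits")  # ~1444
# (the comment rounds log2(20) to 4.3, giving 334 x 4.3 = 1436)

corrected_bitscore = 660           # final BLAST bitscore quoted above
print(f"BLAST bits per identity: {corrected_bitscore / identities:.2f}")  # ~1.98
```

The last line reproduces the “about 2 bits for each identity” figure in the continuation post above.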
DaveS (and all interested): The concepts I have summarized in my previous post are the foundation for using homology to detect functional information. Of course the idea is not mine. Durston, in particular, has applied it brilliantly in his important paper. However, I have developed a personal approach and methodology which is slightly different, even if inspired by the same principles. So, let's imagine that we have two sequences that are 100% identical, and that are found in two species separated by more than 400 million years of evolutionary time. For example, humans and cartilaginous fishes, which is the scenario I have analyzed many times here. That is a very good scenario for our purposes, because the evolutionary split between the two groups is supposed to be more than 400 million years old. So, if we compare a human protein with the same protein in sharks, we have two proteins separated by more than 400 million years of time (humans, of course, are derived from bony fishes). So, let's suppose that the same protein, with the same structure and function, has an identical AA sequence in the two groups. That is not true for any protein I know of, but it is useful for our reasoning. So, let's say that we have a 150 AA protein, with a well known important function, and that our protein has exactly the same AA sequence in the two groups: humans and sharks. Will the two protein coding genes be the same? Of course not. The third nucleotide will be different in most sites, because of neutral variation. In most cases, it could change without affecting the sequence of AAs, and in 400+ million years those sites really did change. But, as said, let's suppose the AA sequence is exactly the same. What does that mean? As explained, it means that we can confidently assume that all those 150 AAs must be what they are, for the protein to be really functional. Or, at least, most of them. As said, that is true of no real protein. But if it were true, what would it mean? It simply means that the target space is 1. So, in that case, the computation is easy. Target space = 1. Search space = 20^150 = 2^648. Target space / Search space ratio = 2^-648. Functional information = 648 bits. Of course, this is the highest information possible for a 150 AA sequence. In this extreme case, the functional information is the same as the total potential information in the sequence. So, we can easily see that, in this very special, and unrealistic, case, each AA identity corresponds to about 4.3 bits of functional information. More in next post.
gpuccio
June 27, 2019 at 02:28 AM PDT
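The idealized calculation in the comment above is easy to reproduce (an editorial sketch, not gpuccio’s code):

```python
import math

n = 150                              # protein length in amino acids
search_space = 20**n                 # all possible sequences (exact Python int)
target_space = 1                     # idealized case: exactly one sequence works

print(f"search space: 2^{math.log2(search_space):.1f}")  # 2^648.3, i.e. ~2^648

functional_info = -math.log2(target_space / search_space)
print(f"functional information: {functional_info:.1f} bits")  # 648.3

# Per position this is log2(20) = ~4.32 bits, the figure used in the thread:
print(functional_info / n)
```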
DaveS at #125 and 126: OK, but maybe I can help a little with your chewing! :) The logical connection between functional information and homology conservation is probably not completely intuitive, so I will try to give some input about that point. First of all, we must consider that the information in the genome, be it functional or not, is subject to what is called neutral variation. IOWs, errors in DNA duplication will affect the sequence of nucleotides in time. The process is slow, but evolutionary times are big. So, in principle, because of Random Variation each sequence in the genome of living beings would lose any connection with the original form, given enough time. However, luckily that is true only for neutral or quasi neutral variation. IOWs, for sequences that have no function. If the sequence is functional, and if the function is relevant enough (IOWs, if it can be seen by Natural Selection), what happens is that change is not allowed beyond some level: if the sequence changes enough, so that its function is lost or severely impaired, that variation is eliminated by what is called negative selection, or purifying selection. Now, while positive selection is elusive and scarcely detectable in most cases, negative selection is really a powerful and ubiquitous force of nature. It is the reason why proteins retain much of their sequence specificity through hundreds or thousands of millions of years, in spite of random variation. All those things are well known, and can be easily proved. Neutral variation is well detectable in non functional, or weakly functional, sites, for example in the third nucleotide in protein coding genes, which usually can change without affecting the protein sequence. The concept of "saturation" is also important: it is the time necessary to erase any similarity between two neutral (non functional) sequences, because of RV. While that time can vary in different cases, in general an evolutionary split of 200 - 400 million years will be enough to cancel any detectable or significant homology between two neutral, non functional sequences in the genome. More in next post.
gpuccio
June 27, 2019 at 02:04 AM PDT
gpuccio, Please disregard the above post---I'll go back to chewing.
daveS
June 26, 2019 at 02:42 PM PDT
gpuccio, Two questions, if you please. In the calculation of functional information, we take -log_2 of the ratio of the number of functional structures to the total number of structures possible in a particular system. It's essentially -log_2 of a conditional probability P(E | F) where E and F are very precisely defined events. OTOH, these BLAST scores in bits are calculated (via various schemes, I take it) simply by comparing two sequences. I think you allude to this, but should it be clear that the BLAST numbers are lower bounds for the amount of functional information? In particular, for the E and F that Sagan is referring to? (I take it that E = some form of life arising via these proteins and F = a "primordial soup" exists).
daveS
June 26, 2019 at 11:00 AM PDT
Thanks, gpuccio. I'll have to chew on that (although I doubt my understanding will reach beyond the superficial).
daveS
June 26, 2019 at 09:35 AM PDT
John_a_designer at #118 and DaveS at #119: Of course not all AA positions in a protein sequence have the same functional specificity. That's why indirect methods based on homology, like Durston's and mine, help to evaluate the real functional information. Let's take for example a 154 AA protein, human myoglobin. If all AAs had to be exactly what they are for the protein to be functional, the functional information in the protein would be: -log2(1/20^154) = about 665 bits. But of course that's not the case. We know that many AAs can be different, and others cannot. Moreover, some AA positions are almost indifferent to the function (they can be any of the 20 AAs), while others can only change into some similar AA. All that is well known. It is false that only the active site is important for the protein function. The whole structure is very important, and it depends on most of the AA positions. The active site, certainly, has a very specific role, but it is only part of the story. So, how can we have an idea of how big the functional information is in human myoglobin? My method is rather simple. If we blast the human protein against, for example, cartilaginous fishes, we get a best hit of 127 bits (Heterodontus portusjacksoni), and others very similar with other members of the group (123 bits for Callorhinchus milii and 121 bits for Rhincodon typus). That means that about 120 bits of functional information have been conserved between cartilaginous fish and humans. That value is very conservative. It corresponds to about 65 identities and 87 positives (in the best hit), and is already heavily corrected for chance similarities. So, we can be rather safe if we take it as a measure of the real functional information: the true value will almost certainly be higher. The reason why conserved homology corresponds to function is very simple: cartilaginous fishes and humans are separated by more than 400 million years in evolutionary history. In that time window, any nucleotide sequence in the genome will be saturated by neutral variation, IOWs it will show no detectable homology in the two groups, unless it is preserved by negative, purifying selection, because of its functional role. So, we can see that myoglobin is after all not so functionally specific. As a 154 AA sequence, it has at least 120 bits of functional complexity, which is not so much. Always a lot, however. That is not surprising, because the structure of the myoglobin molecule is rather simple: it is a globular protein, with one well defined active site. Not the most complex of the lot. Now, let's consider another protein that I have discussed many times: the beta subunit of ATP synthase. Again, let's consider the human form: a 529 AA long sequence, P06576. Now, as this is a very old protein, originating in bacteria, let's blast the human form against the same protein in E. coli, a well known prokaryote (P0ABB4, a 460 AA long sequence). The result is rather amazing: the two sequences show 660 bits of homology, after a separation of billions of years! We can have no doubts that those 660 bits are true functional information. However, as you can see, the functional information as evaluated by this method is always much less than the total possible information for a sequence that long. That's because of course many positions are not functionally specific, and also because the BLAST method is very conservative.
Anyway, the beta subunit of ATP synthase, which is only a part of a much more complex molecule, is more than enough, with its (at least) 660 bits of functional information, to demonstrate biological design. And it's just one example, among thousands!
gpuccio
June 26, 2019 at 09:03 AM PDT
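The two myoglobin numbers in the comment above, the ceiling and the conserved floor, can be checked in a few lines (an editorial sketch; the 127-bit best hit is the BLAST result quoted above, not something this code computes):

```python
import math

length = 154                       # human myoglobin, amino acids
ceiling = length * math.log2(20)   # if every position were fully specified
print(f"maximum possible: {ceiling:.1f} bits")  # ~665.6, the 'about 665' above

conserved = 127                    # best BLAST hit vs cartilaginous fish, quoted above
print(f"conserved floor: {conserved} bits")
# The argument: after 400+ million years, homology not erased by neutral
# variation is attributed to purifying selection, so the functional
# information is taken to be at least 127 bits (a conservative estimate).
```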
OLV: "It feels good to be on the winning side of the debate." Yes, even if almost everybody thinks the opposite, it's truth that counts! :) And thank you for your usual very interesting links and quotes.
gpuccio
June 26, 2019 at 08:18 AM PDT
KF at #114 and 155: Hi, always great to discuss with you! :) Of course, chance and necessity are often mixed in systems. That's why we have to try to separate them in our evaluations. But, as you correctly say, that does not change the main point.
gpuccio
June 26, 2019 at 08:15 AM PDT
John_a_designer @118: “But who am I to question the great Carl Sagan?” You are a thinking person, hence you may question anybody. You may not get any coherent answer back, but that’s not your problem.
OLV
June 26, 2019 at 07:22 AM PDT
JAD,
Nevertheless, I, a mere layman, can spot at least half a dozen big flaws – actually major blunders – in his argument. Does anyone else see them?
I don't know thing 1 about this stuff, but I would appreciate an enumeration of these blunders at some point. Edit: And I don't doubt that Sagan made many errors. The discussion of the death of Hypatia in the original Cosmos tv series apparently is a well-known example.
daveS
June 26, 2019 at 07:17 AM PDT
Gpuccio @ 113,
I am not familiar with Sauer’s method, that you quote. I will try to give a look at it, it seems interesting.
Neither am I. I was quoting Meyer, who was alluding to Sauer, who apparently has an argument that proteins do not have to be so highly specified. Indeed, as I am sure you already know, there is some variability in the sequencing of well known proteins. For example, not all cytochrome-c is the same. Douglas Axe, I know, has done some work that, at least from what I understand, “probably puts an ax” to Sauer’s higher probability estimate. However, even if Sauer is right, 1 in 10^125 for a 100 aa protein does not bode well for any kind of naturalistic explanation. In his book, The Varieties of Scientific Experience: A Personal View of the Search for God,* Carl Sagan also calculates the probability of “a modest” 100 aa long enzyme. “A way to think of it,” he writes, “is a kind of a necklace on which there are a hundred beads. There are twenty different kinds any one of which could be in any one of these positions. To reproduce the molecule precisely, you have to put all the right beads-- all the right amino acids-- in the molecule in the right order. If you were blindfolded,” Sagan goes on to explain, your chance of coming up with the right sequence by chance alone, he calculates, is about 1 in 10^130, the same result as Stephen Meyer’s. (By the way, Sagan gave these lectures in 1985, before there even was a modern ID movement.) He then adds that ten to the hundred-thirtieth power, or 1 followed by 130 zeros, “is vastly more than the total number of elementary particles in the entire universe, which is only [only?] about ten to the eightieth (10^80).” (p. 99-100) He then, like Dembski, factors in Planck time along with a universe full of planets with oceans like our own (not that that really helps any) and concludes (ta-dah!), “You could never produce an enzyme molecule of predetermined structure.” Of course, ID’ists agree that there is not enough time or chance in the entire universe to form even one modest protein molecule. But not so fast. Sagan then tries to very deftly take back with his left hand what he has just put on the table with his right. (Haven’t we seen this act before?) “Now let’s take another look,” he writes on page 101. “Does it matter if I have a hemoglobin molecule here and I pull out this aspartic acid and put in a glutamic?” (Notice that in less than 2 pages he has gone from a protein of 100 aa’s to one with over 850 aa’s. I won’t explain why he does this-- well, actually I am really not sure why.) “Does that make the molecule function less well? In most cases it doesn’t. In most cases an enzyme has a so-called active site, which is generally about five amino acids long. And it’s the active site that does the stuff. And the rest of the molecule is involved in folding and turning the molecule on or turning it off. And it’s not a hundred places you need to explain, it’s only five to get going. And 20^5 is an absurdly small number, only about five million. Those experiments are done in one ocean between now and next Tuesday. Now, remember what we are trying to do: We’re not trying to make a human being from scratch… What we’re asking for is something that gets life going, so this enormously powerful sieve of Darwinian natural selection can start pulling out the natural experiments and encouraging them, and neglecting the cases that don’t work.” Does Sagan have a point here? Remember, he gave the Gifford lectures right after his big success with Cosmos, so publicly he had achieved scientific “rock star” status.
Nevertheless, I, a mere layman, can spot at least half a dozen big flaws-- actually major blunders-- in his argument. Does anyone else see them? But who am I to question the great Carl Sagan? He was one of the pioneers of astrobiology and SETI and played a big role in helping to design the scientific instruments for NASA’s Viking Mars landers and Voyager interplanetary probes. Who am I, with zero scientific credentials to my name, to question such greatness? So am I mistaken in thinking that Sagan’s thinking is mistaken? Again, what do you see? *[According to Wikipedia: “The Varieties of Scientific Experience: A Personal View of the Search for God is a book collecting transcribed talks on the subject of natural theology that astronomer Carl Sagan delivered in 1985 at the University of Glasgow as part of the Gifford Lectures.[1] The book was first published posthumously in 2006, 10 years after his death. The title is a reference to The Varieties of Religious Experience by William James. The book was edited by Ann Druyan, who also provided an introduction section...”]
john_a_designer
June 26, 2019 at 07:08 AM PDT
GP @113: “I am really happy that I must not defend the neo-darwinian theory. Of course Intelligent Design is the only reasonable approach, it’s as simple as that.” :) It feels good to be on the winning side of the debate. The neo-Darwinian ideas are under attack from the third way folks too, who are not ID friendly.
OLV
June 26, 2019 at 05:06 AM PDT
DNA replication Differences in firing efficiency, chromatin, and transcription underlie the developmental plasticity of the Arabidopsis DNA replication origins Joana Sequeira-Mendes, Zaida Vergara, Ramon Peiró, Jordi Morata, Irene Aragüez, Celina Costas, Raul Mendez-Giraldez, Josep M. Casacuberta, Ugo Bastolla and Crisanto Gutierrez DOI: 10.1101/gr.240986.118 Genome Res. 2019. 29: 784-797
Eukaryotic genome replication depends on thousands of DNA replication origins (ORIs). A major challenge is to learn ORI biology in multicellular organisms in the context of growing organs to understand their developmental plasticity. We have identified a set of ORIs of Arabidopsis thaliana and their chromatin landscape at two stages of post-embryonic development. ORIs associate with multiple chromatin signatures including transcription start sites (TSS) but also proximal and distal regulatory regions and heterochromatin, where ORIs colocalize with retrotransposons. Strong ORIs have high GC content and clusters of GGN trinucleotides. Development primarily influences ORI firing strength rather than ORI location. ORIs that preferentially fire at early developmental stages colocalize with GC-rich heterochromatin, but at later stages with transcribed genes, perhaps as a consequence of changes in chromatin features associated with developmental processes. Our study provides the set of ORIs active in an organism at the post-embryo stage that should allow us to study ORI biology in response to development, environment, and mutations with a quantitative approach. In a wider scope, the computational strategies developed here can be transferred to other eukaryotic systems.
OLV
June 25, 2019 at 06:30 PM PDT
PS, 113, I note that for some cases, once the config space is big enough, atomic resources of sol system or observed cosmos are insufficient to carry out a search that rises above rounding down to zero. 500 - 1,000 bits suffices. At that point, needle in haystack challenge already makes blind chance and/or mechanical necessity maximally implausible without explicit probability estimates. And if one suggests a golden search, a search in a config space is a subset sampled, so a higher order search for a golden search is a search from the power set, where for 500 bits, the log of the power set's cardinality is nearly 10^150. I don't try to give the actual number, that's calculator smoking territory, indeed when I asked an online big num calculator to spit it out for me, it complained that it cannot handle a number that large. Hence, the Dembski point that search for search is exponentially harder than direct search. The FSCO/I result is hard to evade, just like fine tuning.
kairosfocus
June 24, 2019 at 08:46 PM PDT
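KF’s “calculator smoking territory” above is easy to demonstrate (an editorial sketch, not part of the comment):

```python
import math

configs = 2**500  # configuration space for 500 bits (exact Python int)
print(f"2^500 ~ 10^{math.log10(configs):.1f}")  # ~10^150.5

# A search samples a subset of the space, so a "search for a golden search"
# draws from the power set, of cardinality 2^(2^500). Its base-2 log is
# 2^500 itself, about 10^150.5 -- so the full number has ~10^150 digits and
# cannot be materialized:
# print(2**configs)  # don't: this would need far more memory than exists
```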
GP @ 22, welcome back, we missed you. I note, there are dynamic-stochastic systems that blend chance and necessity, with feedback and lags bringing memory and reflexive causal aspects. They are a refinement, they don't change the main point. KF
kairosfocus
June 24, 2019 at 08:38 PM PDT
John_a_designer: Of course I agree with you. Given living cells, with their complex systems already existing, the probability of a new functional protein will be linked mainly to the probability of getting the right sequence of nucleotides in a protein coding gene. Which is, however, astronomically small for almost all proteins. I have said here many times that even one complex protein is enough to falsify darwinism. The most difficult aspect in computing functional complexity for observed proteins is to estimate the target space. The search space is easy enough, and for all practical purposes it can be set equal to 20^n, where n is the number of aminoacids in the observed protein. But the target space, IOWs the number of those sequences that could still perform the function we observe at a biologically relevant level, is much more difficult to estimate. I am not familiar with Sauer's method, that you quote. I will try to give a look at it, it seems interesting. I have quoted here many times Durston's method, based on conservation in protein families. And of course I have used many times, in detail, a method developed by me, inspired by ideas similar to Durston's, based on homologies conserved for long evolutionary times. Using that method, for example, I have shown here: https://uncommondescent.com/intelligent-design/the-amazing-level-of-engineering-in-the-transition-to-the-vertebrate-proteome-a-global-analysis/ that "more than 1.7 million bits of unique new human-conserved functional information are generated in the proteome with the transition to vertebrates". So, if one single protein is enough to falsify darwinism, 1.7 million bits of new, original functional information generated in a specific evolutionary event, in a time window of a few million years at most, is its final death certificate. And this is just about protein coding sequences, without considering the huge functional information arising in the epigenome, in all regulatory parts, and so on. I am really happy that I must not defend the neo-darwinian theory. Of course Intelligent Design is the only reasonable approach, it's as simple as that.
gpuccio
June 23, 2019 at 09:45 AM PDT
Gpuccio @ 102:
Of course the function is defined in a context. There is no problem with that. However, the functional information corresponds to the minimal number of bits necessary to implement the function. The function definition will include the necessary context. For example, helicase will be defined as a protein that can “separate two annealed nucleic acid strands (i.e., DNA, RNA, or RNA-DNA hybrid) using energy derived from ATP hydrolysis” (from Wikipedia), of course in cells with nucleic acids and ATP.
The DNA Helicase is composed of 3 polymers that contain 14 chains (454 amino acid residues long). https://cbm.msoe.edu/crest/ePosters/16DNAHelicase4ESV.html What is the probability that DNA Helicase could originate by chance? Below, Stephen Meyer elucidates a method by which we can calculate the probability of a single protein originating by chance alone.
Various methods of calculating probabilities have been offered by Morowitz, Hoyle, Cairns-Smith, Prigogine, Yockey and more recently, Robert Sauer… First, all amino acids must form a chemical bond known as a peptide bond so as to join with other amino acids in the protein chain. Yet in nature many other types of chemical bonds are possible between amino acids; in fact, peptide and non-peptide bonds occur with roughly equal probability. Thus, at any given site along a growing amino acid chain the probability of having a peptide bond is roughly 1/2. The probability of attaining four peptide bonds is: (1/2 x 1/2 x 1/2 x 1/2) = 1/16 or (1/2)^4. The probability of building a chain of 100 amino acids in which all linkages involve peptide linkages is (1/2)^100 or roughly 1 chance in 10^30. Second, in nature every amino acid has a distinct mirror image of itself, one left-handed version or L-form and one right-handed version or D-form. These mirror-image forms are called optical isomers. Functioning proteins tolerate only left-handed amino acids, yet the right-handed and left-handed isomers occur in nature with roughly equal frequency. Taking this into consideration compounds the improbability of attaining a biologically functioning protein. The probability of attaining at random only L-amino acids in a hypothetical peptide chain 100 amino acids long is again (1/2)^100 or roughly 1 chance in 10^30. The probability of building a 100 amino acid length chain at random in which all bonds are peptide bonds and all amino acids are L-form would be (1/4)^100 or roughly 1 chance in 10^60 (zero for all practical purposes given the time available on the early earth). Functioning proteins have a third independent requirement, the most important of all; their amino acids must link up in a specific sequential arrangement just as the letters in a meaningful sentence must. In some cases, even changing one amino acid at a given site can result in a loss of protein function. Moreover, because there are twenty biologically occurring amino acids the probability of getting a specific amino acid at a given site is small, i.e. 1/20. (Actually the probability is even lower because there are many non-proteinous amino acids in nature). On the assumption that all sites in a protein chain require one particular amino acid, the probability of attaining a particular protein 100 amino acids long would be (1/20)^100 or roughly 1 chance in 10^130. We know now, however, that some sites along the chain do tolerate several of the twenty proteinous amino acids, while others do not. The biochemist Robert Sauer of M.I.T. has used a technique known as "cassette mutagenesis" to determine just how much variance among amino acids can be tolerated at any given site in several proteins. His results have shown that, even taking the possibility of variance into account, the probability of achieving a functional sequence of amino acids in several functioning proteins at random is still "vanishingly small," roughly 1 chance in 10^65, an astronomically large number. (There are 10^65 atoms in our galaxy.)
http://www.arn.org/docs/meyer/sm_origins.htm Actually I believe that the probability of a 100 aa protein forming by chance would be (1 in 10^30) × (1 in 10^30) × (1 in 10^130) = 1 in 10^190, according to Meyer, or 1 in 10^125, according to Sauer. For some reason Meyer doesn’t give us the grand total-- a chance probability that for all intents and purposes is impossible. The improbability of the 454 aa helicase forming by chance is therefore absolutely staggering. Someone else can do the calculation; I won’t, because it would be pointless to do so. Again, helicase forming by chance in isolation would have no function. Helicase’s function depends on the existence of the system of which it is a part, and that technically involves the entire cell. Therefore, if we really want to grasp the probabilities we need to calculate the probability of a basic prokaryotic cell. I believe Harold Morowitz has already done something like that. The number is “astronomical.” Please note, I am not arguing this is necessarily true of every function within the cell. For example, the bacterial flagellum adds the function of motility to the cell but since there are many prokaryotes which lack motility it is obviously not essential for survival. On the other hand, the flagellum is not constructed out of a single protein. Are there any stand-alone single-function proteins which add functionality to the cell? I am not suggesting that there are not. I just can’t think of any.
john_a_designer
June 23, 2019 at 06:27 AM PDT
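The comment above leaves the 454 aa helicase number to someone else. For the curious, here is the mechanical result under Meyer’s idealized assumptions (an editorial sketch; it assumes, as Meyer’s worst case does, that every position must be one exact residue, which the comment itself notes is not true of real proteins):

```python
import math

# Meyer's per-residue factors: 1/2 (peptide bond) x 1/2 (L-form) x 1/20 (identity)
n = 454  # helicase length quoted above
log10_odds = n * (math.log10(4) + math.log10(20))
print(f"1 in 10^{log10_odds:.0f}")  # ~1 in 10^864
```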
Genome structure and function are intimately linked. The nuclear architecture of rod photoreceptors differed fundamentally in nocturnal and diurnal mammals. The rods of diurnal retinas, similar to most eukaryotic cells, had most heterochromatin situated at the nuclear periphery with euchromatin residing toward the nuclear center. In contrast, the rods of nocturnal retinas displayed a unique inverted pattern with the heterochromatin localized in the nuclear center, whereas the euchromatin and nascent transcripts and splicing machinery lined the nuclear periphery. This inverted pattern was formed by remodeling of the conventional pattern during terminal differentiation of rods. The inverted rod nuclei acted as collecting lenses, and computer simulations indicated that columns of such nuclei channel light efficiently toward the light-sensing rod outer segments. Thus, nuclear organization displays plasticity that can adapt to specific functional requirements. Understanding the mechanisms that underlie the nuclear structural order and its perturbations is the focus of many studies. We do not have a complete understanding; however, a few key mechanisms have been described. Introduction to the special issue “3D nuclear architecture of the genome” Sabine Mai https://doi.org/10.1002/gcc.22747 Genes, Chromosomes and Cancer, Volume 58, Issue 7
OLV
June 22, 2019 at 04:06 PM PDT
Cells establish and sustain structural and functional integrity of the genome to support cellular identity and prevent malignant transformation. Physiological control of gene expression is dependent on chromatin context and requires timely and dynamic interactions between transcription factors and coregulatory machinery that reside in specialized subnuclear microenvironments. Multiple levels of nuclear organization functionally contribute to biological control... Cells establish and retain structural and functional integrity of the genome to support cellular identity and prevent malignant transformation. Mitotic bookmarking sustains competency for normal biological control and perpetuates gene expression associated with transformed and tumor phenotypes. Elucidation of mechanisms that mediate the genomic organization of regulatory machinery will provide novel insight into control of cancer-compromised gene expression. Higher order genomic organization and epigenetic control maintain cellular identity and prevent breast cancer A.J. Fritz, N.E. Gillis, D.L. Gerrard, P.D. Rodriguez, D. Hong, J.T. Rose, P.N. Ghule, E.L. Bolf, J.A. Gordon, C.E. Tye, J.R. Boyd, K.M. Tracy, J.A. Nickerson, A.J. van Wijnen, A.N. Imbalzano, J.L. Heath, S.E. Frietze, S.K. Zaidi, F.E. Carr, J.B. Lian, J.L. Stein, G.S. Stein https://doi.org/10.1002/gcc.22731 Genes, Chromosomes and Cancer, Volume 58, Issue 7 https://onlinelibrary.wiley.com/doi/full/10.1002/gcc.22731
OLV
June 22, 2019 at 03:50 PM PDT
A major question in cell biology is how cell type identity is maintained through mitosis. We are only starting to understand the mechanisms by which epigenetic information contained within the vertebrate chromatin is transmitted through mitosis and how this occurs in the context of a mitotic chromosome conformation that is dramatically different from interphase. One important question that remains unanswered is how molecular details of epigenetic bookmarks are read in early G1 and enable re-establishment of cell type specific chromatin organization. Insights into these processes promise not only to lead to mechanistic understanding of mitotic inheritance of cell type specific chromatin state, they will also reveal how the spatial organization of interphase chromosomes is determined in general by the action of cis-acting elements along the chromatin fiber. This will also lead to a better understanding of what epigenetic mechanisms underlie processes in which cell type identity is changed, for example in stem cell differentiation or in diseases that result in cancer development and aging. It will be very interesting to explore the pathways and mechanisms that are used to initiate epigenetic changes in cellular phenotype, how differences between sister chromatids are established and proper sister segregation is controlled. Epigenetic Characteristics of the Mitotic Chromosome in 1D and 3D Marlies E. Oomen and Job Dekker Crit Rev Biochem Mol Biol. 2017 Apr; 52(2): 185–204. doi: 10.1080/10409238.2017.1287160 PMCID: PMC5456460 NIHMSID: NIHMS863269 PMID: 28228067
OLV
June 22, 2019 at 03:34 PM PDT
A layer of regulatory information on top of DNA is proving to be as important as genes for development, health and sickness. To explain how the epigenome works, some have likened it to a symphony: the sheet music (genome) is the same, but can be expressed in vastly different ways depending on the group of players and their instruments (epigenome). Human DNA in a single cell is enormously long (six feet) and folds with proteins into packages (chromatin) to fit within a nucleus. https://inside.salk.edu/summer-2016/epigenomics/
OLV
June 22, 2019 at 03:17 PM PDT
Here’s an article from the Stanford Encyclopedia of Philosophy: Levels of organization are structures in nature, usually defined by part-whole relationships, with things at higher levels being composed of things at the next lower level. Typical levels of organization that one finds in the literature include the atomic, molecular, cellular, tissue, organ, organismal, group, population, community, ecosystem, landscape, and biosphere levels. References to levels of organization and related hierarchical depictions of nature are prominent in the life sciences and their philosophical study, and appear not only in introductory textbooks and lectures, but also in cutting-edge research articles and reviews. In philosophy, perennial debates such as reduction, emergence, mechanistic explanation, interdisciplinary relations, natural selection, and many other topics, also rely substantially on the notion. Yet, in spite of the ubiquity of the notion, levels of organization have received little explicit attention in biology or its philosophy. Usually they appear in the background as an implicit conceptual framework that is associated with vague intuitions. Attempts at providing general and broadly applicable definitions of levels of organization have not met wide acceptance. In recent years, several authors have put forward localized and minimalistic accounts of levels, and others have raised doubts about the usefulness of the notion as a whole. There are many kinds of ‘levels’ that one may find in philosophy, science, and everyday life—the term is notoriously ambiguous. Besides levels of organization, there are levels of abstraction, realization, being, analysis, processing, theory, science, complexity, and many others. Although ‘levels of organization’ has been a key concept in biology and its philosophy since the early 20th century, there is still no consensus on the nature and significance of the concept. In different areas of philosophy and biology, we find strongly varying ideas of levels, and none of the accounts put forward has received wide acceptance. At the moment, the mechanistic approach is perhaps the most promising and acclaimed account, but as we have seen, it may be too minimalistic to fulfill the role that levels of organization continue to play in biological theorizing. https://plato.stanford.edu/entries/levels-org-biology/#ConcRema
OLV
June 22, 2019 at 02:14 PM PDT
Gpuccio @56 Yet one of us is wrong and leading others astray. But you're not even curious, let alone interested in determining the truth. Nice going!
Nonlin.org
June 22, 2019 at 11:18 AM PDT
gpuccio,
Where should we “stop”? We stop when, after having measured the functional information for some function, and finding it high enough (for example, more than 500 bits), we infer design for the object.
This point might now be moot, but if we simply wanted to test the null hypothesis of no design, we don't really need to transform the probability to units of functional information via the -log_2 function, do we? Using the two numbers 10^-20 and 10^9, we can show the p-value is tiny and therefore reject H_0. After some reflection, I guess it's convenient to frame this all in terms of bits of functional information and probabilistic resources. The numbers (66 bits, e.g.) turn out to be easier to work with, anyway.
daveS
June 22, 2019 at 09:22 AM PDT
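The equivalence daveS sketches above can be checked in a few lines (an editorial sketch; 10^-20 and 10^9 are the numbers from the comment, and the bound over trials is a simple union bound):

```python
import math

p_single = 1e-20   # probability of the event in one trial
trials = 1e9       # probabilistic resources: number of trials available

# Direct route: bound the p-value over all trials and reject H_0.
p_value = p_single * trials
print(p_value)                     # 1e-11, tiny by any conventional threshold

# Equivalent route in bits: functional information vs. resources.
info_bits = -math.log2(p_single)   # ~66.4 bits (the "66 bits" in the comment)
resource_bits = math.log2(trials)  # ~29.9 bits
print(info_bits - resource_bits)   # ~36.5 bits of surplus -> same rejection
```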
