Uncommon Descent Serving The Intelligent Design Community

Hidden Codes Within Codes….


The New York Times is reporting here on a discovery, published in Nature, of a second code hidden in DNA.  According to the author,

“In the genetic code, sets of three DNA units specify various kinds of amino acid, the units of proteins.  A curious feature of the code is that it is redundant, meaning that a given amino acid can be defined by any of several different triplets.  Biologists have long speculated that the redundancy may have been designed so as to coexist with some other kind of code, and this, Dr. Segal said, could be the nucleosome code.”

Oops, he used the “D” word.  But let’s look beyond that for the moment and think about what it would take – in terms of random processes – for DNA to form itself and code itself by random mutations to produce proteins that transcribe specific overlapping segments that are themselves hidden or revealed depending on how still other proteins fold the long DNA strand.  Codes within codes, weaved together in a lattice of folded DNA that operate like machines programmed to reconfigure themselves depending on context.  Want a liver cell – fold here.  Muscle, fold there.  Like comic book Transformers, but infinitely more complex.  Cell types set contextual limits that are determined by a mutating but self-correcting procedure of replication and type determination.  Start with a single sperm cell and egg cell and watch them unite – only to then split into the myriad cell types making up complex organisms, in exactly the right time and place to enable the growth of an entire being.  The determination of each cell type is controlled by this hidden code, whose interpretation depends on how the DNA strand is folded.  And what determines the folding of the DNA?  Why, the other code, of course.
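The redundancy the NYT passage describes (several different triplets specifying the same amino acid) is easy to see directly from the codon table. A minimal Python sketch, using a deliberately abbreviated table; the entries are standard biochemistry, not data from the Nature paper:

```python
from collections import defaultdict

# Abbreviated standard codon table -- just enough to show the redundancy:
# several triplets ("codons") map to the same amino acid.
CODON_TABLE = {
    "TTA": "Leu", "TTG": "Leu", "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "ATG": "Met",  # methionine: a single codon
    "TGG": "Trp",  # tryptophan: a single codon
}

def synonym_counts(table):
    """Count how many codons encode each amino acid."""
    counts = defaultdict(int)
    for codon, aa in table.items():
        counts[aa] += 1
    return dict(counts)

print(synonym_counts(CODON_TABLE))  # {'Leu': 6, 'Ala': 4, 'Met': 1, 'Trp': 1}
```

In the full 64-codon table, all but two amino acids have multiple synonymous codons, which is exactly the "slack" the hypothesized second code could occupy.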

I can see why the author, one Nicholas Wade, was probably thinking “evolved” but wrote “been designed”.  Or is it too much to hope that NYT would actually publish an ID-sympathetic statement?

Comments
Tom said, "Although there is such a thing as a lattice code, codes cannot be weaved together into a lattice, and they do not operate as machines do. Check out Wiki..." I checked wiki, lattice group: http://en.wikipedia.org/wiki/Lattice_%28group%29, which led to Coding Theory: http://en.wikipedia.org/wiki/Coding_theory, which states: "Coding theory is a branch of mathematics and computer science dealing with the error-prone process of transmitting data across noisy channels, via clever means, so that a large number of errors that occur can be corrected. It also deals with the properties of codes, and thus with their fitness for a specific application." At the bottom, it leads to TCP/IP. The weaving is done in the packetizing of information, based on mathematical principles related to data compression algorithms between sender and receiver. Also note the link to Information Theory. I guess chromosomes can be thought of as packets of information for the long-term passage of hereditary genes through generations, to go along with the bandwidth analogy? However, Tom and Salvador, thinking over the comments on bandwidth communication: at first I liked the analogy, but given more thought I turned to simple mechanical devices, like the camshaft in a '37 Ford Roadster: http://www.wildrodfactory.com/images/2k4_roadstermauve.jpg. Or a mechanical music box: http://www.mechanicalmusic.co.uk/ Obviously not, so the best analogy I can think of is old DOS and IBM's VM (virtual machine)/370 (with the exception of dumb terminals). I relate it to memory storage of instruction sets and data on a circular disk. Maybe this is a new protocol discovery, found along with access methods for the actual partitioning of data and packets on the double helix? Bandwidth appears to be enhanced by this paper's discussion points, not negatively impacted. I think multiple discoveries are made here in relation to protein folds, sequence counts, and binding sites.
This is possibly being overlooked, or maybe I'm overstating the actual implication of the study? It appears to be part compression, part key. The folding of DNA within the cell itself reduces physical bandwidth requirements for portability across multiple passageways, as opposed to elongation, which would cause nightmarish delivery systems and incredible cell size. We'd have to be giants :). That's where I see bandwidth issues. What I don't understand is how this could occur naturally, without initial design – even over billions of years. Establishing a protocol within or on top of existing structures does not correlate to expansion, but contributes to its compression. From the Nature article: "As expected for a nucleosome–DNA interaction model, the resulting model exhibits distinctive sequence motifs that recur periodically at the DNA helical repeat and are known to facilitate the sharp bending of DNA around the nucleosome. These include ~10-bp periodic AA/TT/TA dinucleotides that oscillate in phase with each other (Fig. 1b) and out of phase with ~10-bp periodic GC dinucleotides." (emphasis mine) If this is correct, is this not compression? It's remarkably cool stuff from an engineering and design perspective. I associate physical bandwidth with cells and chemical messages to core centers. This is reflected in the pathways to the central nervous system from nerve endings, as well as the circulatory system for blood, the reproductive system, and the spinal cord and neural net for the brain. The cell itself is a remarkable example of compression technology. And this research partly illuminates how it's accomplished. Not a piggyback ride intruding upon bandwidth, but a beautifully designed hierarchical system with reference checkpoints. Packet-tracking info, much like DOS or IBM VM/370 VSAM accessing multiple spinning platters on a 3390 (if anyone remembers that architecture).
This analogy is the only way I can relate such complexity of broad functions to copying, repair, data storage, instruction sets (storage), and communication alert mechanisms. It's a self-contained energy processor with self-replication and storage abilities that have lasted ~4 billion years, according to some estimations. Finally, to go farther down the rabbit hole: maybe the two connector keys (sender/receiver) between proteins at binding sites are similar, in distant analogical form, to a symmetric-key block cipher like the Twofish encryption algorithm? http://en.wikipedia.org/wiki/Twofish_encryption_algorithm The binding sites are not complex, but the simple sharing techniques of sender/receiver chemical expressions maintain a similar role as keys. Well, maybe I just have a nostalgia for old computer design paradigms.
Michaels7
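The ~10-bp periodic AA/TT/TA motif quoted from the Nature article above can be illustrated with a toy scorer. The function name and the simple phase-counting rule here are illustrative assumptions for the demo, not the paper's actual probabilistic model:

```python
def periodic_dinucleotide_score(seq, dinucs=("AA", "TT", "TA"), period=10):
    """Fraction of dinucleotide hits that fall in phase with a fixed
    period along the sequence (1.0 = perfectly periodic placement)."""
    hits = [i for i in range(len(seq) - 1) if seq[i:i + 2] in dinucs]
    if not hits:
        return 0.0
    phase = hits[0] % period  # use the first hit to fix the phase
    in_phase = sum(1 for i in hits if i % period == phase)
    return in_phase / len(hits)

# An AA dinucleotide every 10 bases scores perfectly periodic:
print(periodic_dinucleotide_score("AACCCCCCCC" * 5))  # 1.0
```

A sequence whose AA/TT/TA dinucleotides cluster arbitrarily scores much lower, which is the intuition behind reading nucleosome preference out of the sequence itself.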
July 28, 2006, 06:46 AM PDT
Well, at least he didn't use "GD" (God designed :) )
Smidlee
July 26, 2006, 03:05 PM PDT
Tom, I think your analogy of a communications channel is a good one. If you think of this new code as the carrier and the DNA molecule as the modulator, then it would start to make sense. It is said that the nucleosome has an affinity for particular sequences; by controlling the placement of these sequences you can control the placement of the nucleosomes. What is the significance of this?
This nucleosome positioning code may facilitate specific chromosome functions including transcription factor binding, transcription initiation, and even remodelling of the nucleosomes themselves.
Now the analogy of frequency modulation becomes clear. When you are listening to your favorite music or talk show on the FM band of your radio, that sound has been encoded and transmitted to your radio by piggybacking on a specific frequency. The encoding of the music or voice is done by varying the carrier frequency, which your radio then decodes by detecting that variation. It would seem that by modulating the frequency of these binding sites preferred by the nucleosome, it is possible to control the information that gets output. From an ID hypothesis point of view, you would predict an engineer to design the carrier and modulator concurrently, not by chance.
teleologist
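The FM analogy in the comment above is easy to sketch: the message signal shifts the carrier's instantaneous frequency, and the receiver recovers the message by tracking that shift. A minimal Python illustration; all names and parameter values are made up for the demo:

```python
import math

def fm_modulate(message, f_carrier=10.0, deviation=2.0, rate=1000):
    """Toy FM: each message sample shifts the carrier's instantaneous
    frequency; integrating frequency gives phase, and cos(phase) is
    the transmitted signal."""
    phase, out = 0.0, []
    dt = 1.0 / rate
    for m in message:
        inst_freq = f_carrier + deviation * m  # message rides on the carrier
        phase += 2 * math.pi * inst_freq * dt
        out.append(math.cos(phase))
    return out

# A 1 Hz "voice" tone piggybacking on a 10 Hz carrier, one second of samples:
voice = [math.sin(2 * math.pi * 1.0 * i / 1000) for i in range(1000)]
signal = fm_modulate(voice)
```

The carrier itself carries no message; only the deviations from it do, which is the analogy being drawn to nucleosome-binding preferences layered over the codon sequence.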
July 26, 2006, 09:39 AM PDT
Since I was the main ID poster on that thread, you can look at an example of how I was modded here: http://slashdot.org/~geoffrobinson My main post was scored 0, Flamebait: 20% Flamebait, 40% Overrated, 40% Insightful. Tons of replies. I'm not griping. It probably was flamebait, since I knew how people would react to my post. But I'm not going to lie. I had fun.
geoffrobinson
July 26, 2006, 08:09 AM PDT
bdelloid said: “This is misuse of the word code on the part of these researchers for the purpose of getting their paper written up in the New York Times. Their “code” is simply a probabilistic/statistical association of certain DNA sequences with nucleosome positioning.” Besides showing a certain bias, your comment is not precise enough. It is true that the tRNA/protein complex that translates the codon will bind only to one codon. However, there are plenty of other “codes” in the DNA with less stringent requirements. For example, the short stretch of DNA to which transcription factors bind can deviate in several places from the “ideal” or consensus nucleotide sequence. In the same way, you can consider the binding sites for nucleosomal proteins a consensus. All that is required is that enough contacts are made with the DNA among the larger number of available sites on the nucleosome. And, what is not mentioned, the number of contact sites determines the stability of binding, so some nucleosomal sites could be somewhat less stable than others, permitting a more dynamic picture of nucleosome binding along the chromosome. How is that for an additional “design” feature?
ofro
July 26, 2006, 05:54 AM PDT
I always get a kick out of Darwinists when they try, in desperation, to posit a materialistic explanation for the specified code we observe. It is telling with regard to the significance of underlying worldviews in this debate.
Scott
July 26, 2006, 05:23 AM PDT
Pierre Grassé commented on Crick as follows: "But according to Darwinian doctrine and Crick's central dogma, DNA is not only the depository and distributor of the information but its SOLE CREATOR. I do not believe this to be true." (Evolution of Living Organisms, page 224, his emphasis) Neither do I, Pierre. I believe the information was front-loaded long ago by one or more BFLs (Big Front Loaders). I do not want to leave the impression that I agree entirely with Grassé, however. For example, he also says: "Any acquisition of new information requires a structural change, something added. It is not at all a matter of altering or suppressing one or more preexisting units, but of ADDING MORE." (ibid, page 224, his emphasis) I agree with the first part about "requires a structural change," which is implicit in the Prescribed Evolutionary Hypothesis (PEH), but I am at a loss as to how specific information can have been added. Accordingly, I have been hesitant to postulate that process as a requirement. Of course he may be correct, but I have a problem seeing where this new information came from and how it got into the genome. It would seem to demand an intervening God, which Grassé has rejected, as the following indicates: "Let us not invoke God in realities in which He NO LONGER HAS TO INTERVENE. The single absolute act of creation was enough for Him." (ibid, page 166, his emphasis) So you see, we are faced with a dilemma. Quite frankly, I don't know the answer. Nevertheless, I am not prepared to demand anything not demonstrated to be absolutely necessary to explain phylogeny. If Grassé were correct, one might anticipate that advanced and complex organisms would have more DNA than the more primitive ones, but that is most certainly not the case. Quite the contrary: it is the more primitive and less specialized creatures that have the higher amounts of DNA per cell. The metabolically active birds, mammals and flying insects all have small cells with relatively low DNA amounts.
One can even make a case that evolution has involved a loss of information rather than a gain, a progressive narrowing of potentialities. That is clearly a feature of ontogeny, which remains a productive model for phylogeny. I see no evidence for any new information being introduced during the course of the development of the individual, and so I have instinctively assumed that such was not necessary for evolution either. That does not mean that I am right, and I have great respect for Grassé, as I do for all my anti-Darwinian sources. Speaking of ontogeny and phylogeny: "Neither in the one nor in the other is there room for chance." (Leo Berg, Nomogenesis, page 134) If not chance, then what? I say the PEH. "A past evolution is undeniable, a present evolution undemonstrable."
John A. Davison
July 26, 2006, 12:20 AM PDT
Salvador, I read "Randomness by Design" a couple of years ago. I wish I had time to reread it now, but I am working on a second reading of "Specification." As for the Knuth anecdote, many of us have sinned and fallen short of the glory of von Neumann. I have often used short random number generators with short seeds to produce very long sequences. "I think Randomness by Design has a role by the designer to help us make design inferences, such as codes within codes, imho." So the designer wants to help us see that the designer has been at work? I suppose you are just musing on the motive of the designer. Personally, I think that anyone who accepts the reality of the designer because of a design inference has built his house on sinking sand. "We would be hard pressed to see design like codes within codes in biology, if we did not have some concept of randomness in our experience, IMHO." If "randomness" is deterministic chaos, then one could argue, I think, that everything is designed.
Tom English
July 26, 2006, 12:18 AM PDT
DaveW wrote:
[Slashdot's] ranking scheme for posts seems to favor anti-ID comments. While I was able to see responses to pro-ID/Creation comments, the comments themselves were “modded down.” It’d be interesting to test the moderators to see if there’s a real bias going on there.
I'm a regular Slashdot reader, and I've noticed with some dismay that the moderation system there pretty much guarantees a groupthink mentality whenever the subject of evolution comes up. You'd be better off posting at PT -- at least your comments wouldn't be modded into oblivion there. (Actually pro-ID comments are not so much "modded down" as ignored. Most comments on Slashdot start with a score of 1 or 2 depending on the poster's "karma". Anyone praising Darwin or the Flying Spaghetti Monster or showing contempt for ID is quickly modded up by the Slashdot drones to 4 or 5. Since most readers set their filter at a fairly high level, the pro-ID comments are effectively disappeared.)
sagebrush gardener
July 25, 2006, 09:59 PM PDT
Tom! Hey, your post reminded me of something: Randomness by Design. I hope you get a chuckle out of the Knuth anecdote. I think Randomness by Design has a role by the designer to help us make design inferences, such as codes within codes, imho. We would be hard pressed to see design like codes within codes in biology if we did not have some concept of randomness in our experience, IMHO. Salvador
scordova
July 25, 2006, 09:17 PM PDT
"I don't think they can demonstrate scientifically that any mutations are random." There are physicists who say there is no randomness in the universe, only high-dimensional deterministic chaos passing for randomness. The present debate over neo-Darwinism would end pronto if everyone agreed to that notion. The random mutation of neo-Darwinism would become highly unpredictable deterministic mutation, and chance would no longer be an element in evolution. The intelligent designer would have her place at the origin of the universe.
Tom English
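The "deterministic chaos passing for randomness" mentioned in the comment above has a textbook example: the logistic map at r = 4 is fully deterministic, yet its orbit looks statistically random and is practically unpredictable. A minimal sketch:

```python
def logistic_map(x0, n, r=4.0):
    """Iterate x -> r*x*(1-x). At r=4 the orbit is chaotic: deterministic
    (the same seed always reproduces the same sequence), but
    random-looking and extremely sensitive to the starting value."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

orbit = logistic_map(0.2, 10)
# Rerunning with the same seed reproduces the orbit exactly: no chance involved.
```

Nudging the seed by one part in ten million produces a completely different orbit within a few dozen steps, which is why such sequences can pass for randomness without any chance in the underlying rule.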
July 25, 2006, 09:00 PM PDT
"Codes within codes, weaved together in a lattice of folded DNA that operate like machines programmed to reconfigure themselves depending on context." Although there is such a thing as a lattice code, codes cannot be weaved together into a lattice, and they do not operate as machines do. Check out Wikipedia. Let's stick with the coding analogy and see if we can't make things just a bit less mystical. In the "standard" genetic code there are 64 codons (codewords) to encode a mere 20 amino acids. Every codon decodes to an amino acid. All but two of the amino acids are encoded by multiple codons. If we think of a chromosome as a communications channel through which messages of amino acid sequences are transmitted, the encoding of messages wastes bandwidth. (Bandwidth is also wasted because the code does not exploit the differences in probability of different sequences of amino acids.) The wasted bandwidth can be (and apparently is) exploited to transmit secondary messages through the channel along with the primary messages. Here's an ID hypothesis for you: I predict that no communications engineer will see evidence for co-design of the two codes. Instead, he or she will see clearly that the code for amino acids existed first, and that the second code was cobbled together later to exploit the inefficiency of the first code.
Tom English
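The "wasted bandwidth" point in the comment above can be put in rough numbers. A back-of-envelope calculation, ignoring stop codons and the unequal frequencies of amino acids (a simplification the comment itself flags):

```python
import math

codons = 4 ** 3           # 64 possible nucleotide triplets
amino_acids = 20          # the amino acids the code must distinguish

bits_sent = math.log2(codons)         # 6.0 bits carried per codon
bits_needed = math.log2(amino_acids)  # ~4.32 bits to pick one of 20
spare = bits_sent - bits_needed       # ~1.68 bits/codon of slack

print(f"spare capacity: {spare:.2f} bits per codon")
```

That spare capacity per codon is the channel slack a secondary message (such as nucleosome-positioning preferences) could ride on.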
July 25, 2006, 08:20 PM PDT
This is a misuse of the word "code" on the part of these researchers, for the purpose of getting their paper written up in the New York Times. Their "code" is simply a probabilistic/statistical association of certain DNA sequences with nucleosome positioning. Unlike the genetic code, which has an output of 20 amino acids determined precisely by a series of three nucleotides, theirs is a binary output: "present" or "absent". And the prediction of "present" or "absent" based on their "code" is only guaranteed to be correct 50% of the time. This is a case of PR getting ahead of the details.
bdelloid
July 25, 2006, 07:23 PM PDT
I don't think they can demonstrate scientifically that any mutations are random. Not that there aren't random mutations; I just don't think they can demonstrate it. There's a thread about it here, if you are interested: http://www.apologetics.org/phpBB2/viewtopic.php?t=420
RyanL
July 25, 2006, 06:48 PM PDT
I was surprised when I read the article and came across the "D" word too. This is somewhat off topic, but I followed the link to Slashdot and noticed that their ranking scheme for posts seems to favor anti-ID comments. While I was able to see responses to pro-ID/Creation comments, the comments themselves were "modded down." It'd be interesting to test the moderators to see if there's a real bias going on there.
DaveW
July 25, 2006, 06:32 PM PDT
"I can see why the author, one Nicholas Wade, was probably thinking “evolved” but wrote “been designed”. Or is it too much to hope that NYT would actually publish an ID-sympathetic statement?" The power of the appearance of design in biology is unavoidable; however, I think that many scientists believe that evolution has the power to design things. As Francis Crick famously said, "Biologists must constantly keep in mind that what they see was not designed, but rather evolved." Of course, in Francis Crick's case, he also wants you to remember that it evolved with DNA that was brought to Earth in spaceships by aliens.
Jehu
July 25, 2006, 06:24 PM PDT
John Davison - As I read this press release I couldn't help but think of your Manifesto. It's good to see you back here at UD.
dougmoran
July 25, 2006, 06:02 PM PDT
Detecting code means detecting design! Pattern recognition and Explanatory Filters are superior for elucidating biology compared to natural selection. Salvador
scordova
July 25, 2006, 05:28 PM PDT
Doug, I was about to post this below, but checked just in case to see if y'all had posted it yet... Amazing how fast the info travels the world now. Eran Segal of the Weizmann Institute and Jonathan Widom of Northwestern University believe a second code has been found in DNA. From Jerusalem: http://www.jpost.com/servlet/Satellite?cid=1153291988211&pagename=JPost%2FJPArticle%2FShowFull to India: http://timesofindia.indiatimes.com/articleshow/1808688.cms from nerdville: http://science.slashdot.org/science/06/07/25/1438229.shtml to supernerdville: Nature has the full article up online! Surprise, surprise! http://www.nature.com/nature/journal/vaop/ncurrent/full/nature04979.html Notice how Slashdot commenters recognize it as a form of DRM. They immediately see the correlations between digital rights management and DNA management by a second code. Is this not the result of utilizing pattern recognition? Computational probability mathematics applied to micro structures? And is it not also recognition of hierarchical nested logic, which I learned about 20 years ago in a basic computer logic course? I never learned about hierarchical codes in biology or chemistry class. Do they teach that now? What appears to be RAM is never really RAM; it exists within known boundaries of control by code in the OS. Fascinating... This also reminds me of Abel and Trevors' paper regarding FSC. I wonder if anyone has answered their null hypothesis challenge. It would be a good paper to discuss openly here, I think, because no one, as far as I know, has successfully challenged them. From a computer background, I cannot understand how Random Mutation and Natural Selection could form such nested logic structures. A code within a code, which exercises rights recognition? Having kept up with some of the writing on protein folds, knot theory, etc., I was aware scientists expected to find some other avenue to control such access. But I hardly see how RM&NS could ever predict it, let alone allow it.
The Design Paradigm is forging ahead...
Michaels7
July 25, 2006, 04:32 PM PDT
These are exciting times for science. Hardly a week goes by without some new discovery shedding more light on the unimaginably intricate machinery of life. On top of that we are living through, and even participating in, the biggest scientific controversy of the last hundred years or more. I'm in awe.
sagebrush gardener
July 25, 2006, 04:32 PM PDT
It should be noted that one doesn't even need a sperm cell. All the necessary information to produce a unique organism is contained in the ovum alone, and in frogs that information includes the capacity to produce both functional sexes. These are not speculations but proven experimental demonstrations. I refer to my unpublished Manifesto for the details and the pertinent literature. Actually, the primary role for sexual reproduction always was and still is to bring evolution to a screeching halt, thereby ensuring ultimate extinction. After all, if there had been no extinction there could never have been any evolution. This is it, folks. It is all over but the shouting. "If in danger or in doubt, run in circles, scream and shout!" (anonymous) It is hard to believe, isn't it? "A past evolution is undeniable, a present evolution undemonstrable."
John A. Davison
July 25, 2006, 04:23 PM PDT
