Uncommon Descent Serving The Intelligent Design Community

80 megabytes seems too small to specify a human


Dan Graur and Larry Moran argue that most of the human genome of 3.2 billion base pairs is junk. I will appeal to engineering intuition and say these guys are awfully premature in their pronouncements, since their estimates imply that a mere 80 megabytes would be enough to specify not only an adult human but all the developmental forms that have to be implemented along the way from conception to adulthood.

Where did I get the 80 megabyte number? The human genome is about 3.2 billion base pairs. Evolution News reports that Graur argues 5% to 15% of the human genome is functional. For simplicity I'll take Graur's mid-range figure of 10%. That gives 3.2 billion * 10% = 320 million base pairs. Since each base pair (read off one strand as one of four nucleotides) carries 2 Shannon bits, that's 320 million x 2 = 640 million bits. At 8 bits per byte, that's 640,000,000 / 8 = 80 megabytes.
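A quick back-of-the-envelope script makes the arithmetic explicit (a sketch only: the genome size and the 10% functional fraction are the figures cited above, not measurements of mine):

```python
# Back-of-the-envelope check of the 80-megabyte figure.
GENOME_BP = 3.2e9           # human genome size, base pairs
FUNCTIONAL_FRACTION = 0.10  # Graur's mid-range estimate, as cited above
BITS_PER_BP = 2             # log2(4): each position is A, C, G, or T
BITS_PER_BYTE = 8

functional_bp = GENOME_BP * FUNCTIONAL_FRACTION  # 320 million bp
bits = functional_bp * BITS_PER_BP               # 640 million bits
megabytes = bits / BITS_PER_BYTE / 1e6

print(f"{functional_bp:.0f} bp -> {megabytes:.0f} MB")  # 320000000 bp -> 80 MB
```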

From the BioNumbers website, human cells are reported to have on the order of 1 to 10 billion protein molecules from a supposed set of only 20,000 coding genes. If a human has 1 billion protein molecules per cell and there are 200 trillion cells in a grown human, that implies coordination of 200 billion trillion proteins. 😯
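The multiplication is easy to verify (both inputs are the post's own round numbers, order-of-magnitude estimates rather than measured values):

```python
# Order-of-magnitude protein count, using the post's own figures.
PROTEINS_PER_CELL = 1e9    # low end of the cited BioNumbers range
CELLS_PER_HUMAN = 200e12   # the post's figure for a grown human

total_proteins = PROTEINS_PER_CELL * CELLS_PER_HUMAN
print(f"{total_proteins:.1e} proteins")  # 2.0e+23 proteins
```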

Do you think 80 megabytes could provide enough information to assemble and manage not only 200 billion trillion proteins, but all the developmental forms along the way? If you were tasked to build something as fearfully and wonderfully made as a human, with its complex systems (nervous, immune, digestive, optical, auditory, endocrine, respiratory, smell-sensing, vascular, reproductive, skeletal, urinary, lymphatic, muscular, etc.), would you feel comfortable having only 80 megabytes to store all the construction information for an adult plus all the developmental forms? Not me, and not a lot of reasonable people either.

In fact, 3.2 giga bases (or 800 megabytes) would even seem too small. My computer RAM has more than 800 megabytes, and even that doesn’t seem like it would be enough. For those reasons, I think DNA cannot possibly contain all the heritable information that is important to humans. For DNA to work for a human, it must presuppose information rich parts in the rest of the cell. I think there is a lot of heritable information outside of the DNA, and we’re just not doing the information accounting properly. Epigenetic information needs to be accounted for.

Without ribosomes, spliceosomes and a host of other pre-existing machines, the DNA would be useless, so I suppose there is information implicit outside the genome which had to be specified even though it is not explicit. For example, a compressed ZIP file presupposes a decoder to decompress it. That decoder has a certain amount of information associated with it.
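The ZIP analogy is easy to demonstrate with Python's standard zlib module (an illustrative sketch; the byte string is arbitrary). The payload can be tiny, but only because the DEFLATE decoder, with all the information it embodies, is presupposed:

```python
import zlib

# A highly repetitive "sequence" compresses very well...
message = b"ATCG" * 1000
compressed = zlib.compress(message)
print(len(message), len(compressed))  # 4000 vs. far fewer bytes

# ...but the compact payload is meaningless without its decoder:
# zlib's DEFLATE algorithm is information the payload presupposes.
assert zlib.decompress(compressed) == message
```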

If I were a designer and had only 800 megabytes of DNA to specify something as complex as a human, I would think I’d have to store lots of assembly instructions outside the DNA and implement some of the manufacturing process instructions outside of the DNA. If some of the manufacturing process instructions are implicitly stored outside of DNA, then that is yet more information we have to account for, and if we include that, I think it strains credulity that a mindless process could construct something as complex as a human.

Added to this, the human has moderate self-healing capabilities. How can you construct a system as complex as a healthy human with the added ability to self-heal certain parts? I think Dan Graur's 80 megabyte argument is on shaky ground, unless he wants to argue for large amounts of epigenetic information. But arguing for large amounts of epigenetic information doesn't exactly help his crusade against functionality.

He is quite confident he is right because he places his faith in evolutionism more than in reasonable engineering intuitions.

NOTES

1. photo credits Evolution News

Comments
Seems this thread is a lot about 80MB of code, but what says the human genome (or any genome at all) is in any way similar to computer code? In the case of human genomes, they engage in the biochemical process of cell division from the first fertilized egg cell in the uterus. Two, four, eight and so on, and at a certain level of division the cells embark on a long tour of diversification, with different genes switched on or off according to a cell's specific location in the embryo, and all the chemistry in the developing embryo. Is the womb a computer, executing the code? Of course not. Each cell has the entire genome at its disposal. The same code is used over and over again, so the attempt at equating cellular life with a computer program strikes me as a rather bizarre way of looking at life. This has all been clearly and beautifully explained in Carroll's "Endless Forms Most Beautiful." But I am not even surprised.
Cabal
June 23, 2014, 08:49 AM PDT
D, looks like the habit of reverse-engineering cloning, set off by the B-29, took deep roots! Thanks for the interesting links. KF
kairosfocus
June 23, 2014, 04:14 AM PDT
No, but I did want to counter the bogus perception that producing multiple copies of a few proteins 20,000+ is the same as requiring billions of trillions of information bits. Because it is not.
And you are countering something I didn't say. You obviously prefer knocking down arguments I didn't make versus ones I did make. When I stated the problem confronting the limitation of 80 megabytes to assemble a human, I used the word "coordination", not "copy". Or do you still have reading comprehension issues? "Coordination" isn't synonymous with "copy", in case you weren't aware. I said:
coordination of 200 billion trillion proteins
scordova
June 22, 2014, 10:27 PM PDT
scordova: "To manufacture so many components with such complexity involves a lot of process control instructions. It cannot be trivial as a matter of principle, but Acartia_bogart wants to give the impression his 2-line program somehow shows a human can be specified by 80 megabytes of information." No, but I did want to counter the bogus perception that producing multiple copies of a few (20,000+) proteins is the same as requiring billions of trillions of information bits. Because it is not.
Acartia_bogart
June 22, 2014, 10:12 PM PDT
Dan Graur, thanks for the post. I am an experienced, patented software developer who has written about 80 megabytes of code (compiled, excluding the automatically included libraries). I know that 80 megabytes of efficiently written code doesn't come close to making a human. Even when I look at the 800 megabytes that make up the total of the DNA, I find it amazing, unfathomable, that it can make a human. When I look at some of the things DNA does, some of the ingenious data compression technologies (I think there are about 20,000 coding genes, but about 100,000 different kinds of proteins), I just marvel. Then I talk to bozos who don't even understand that the DNA is the code for a cybercomputer. They seem to think that not only is this amazing computer developed by random chance twiddling, but that every change along the way must, well, compile, and must produce an organism at least as good as its predecessor. Well, no. The day I can produce code that comes close to the class of that found in the simplest bacteria is the day people flock to me to learn how it is done. (I stick letters in a bottle and shake it, yup.)
Moose Dr
June 22, 2014, 08:57 PM PDT
It’s not like Moran and Graur are just making some wild allegations about the amount of junk DNA. It’s not even something they discovered. It’s exactly what biology says. I.e., these are hard scientific data from molecular and population genetics.
The ENCODE consortium disagrees, and so do many professors at Moran's own university. I'm not out on a limb; they are...
scordova
June 22, 2014, 08:50 PM PDT
It's not like Moran and Graur are just making some wild allegations about the amount of junk DNA. It's not even something they discovered. It's exactly what biology says. I.e., these are hard scientific data from molecular and population genetics.
BM40
June 22, 2014, 08:43 PM PDT
27 Neil Rickert, follow-up to post #31: IEEE's brief description of the part of the hearing system that is directly affected by the CI procedures:
Synapses of Cochlear Implants In the traditional hearing process, sound is transmitted through the outer ear as an acoustic wave. It impacts the tympanic membrane, which causes piston-like motions of the three bones in the middle ear cavity, which then vibrates a membranous structure called oval window. At that point, the resulting vibrations are transmitted to the fluid filled spiral chambers of the inner ear (i.e., cochlea). By the time the sound wave reaches the inner ear, reductions in the wave amplitude have occurred, as well as a wide band pass filtering process (designed to filter out irrelevant stimulation and to protect the inner ear from excessively loud stimuli). Once reaching the fluid media of the cochlea, the acoustic waves induce a traveling wave along the basilar membrane (BM) that runs along the entire length of the cochlea. The basilar membrane contains special microscopic structures called hair cells that are concentrated in a specific area called the organ of Corti. The hair cells sense the motion of the BM through an even smaller hair like apparatus (i.e., stereocillia and kinocilium) which are anchored to the BM. The shearing motions cause the stereocillia to open and close electrical channels along the hair cells, regulating the influx and efflux of the ions within the surrounding fluid. The opposite ends of the hair cells are connected to the receiving ends of the auditory nerves. The transmitting ends of the auditory nerves project to different portions of the brain stem and brain (e.g., cochlea nucleus, auditory cortex, etc.). When an electrical channel opens or closes at one end (e.g., the BM end) of the hair cell, a stream of chemical neurotransmitters is released into the auditory nerve fiber synapse. Once the neurotransmitter is released into the synapse a small electrical potential is elicited along the nerve body and transmitted to the brain for processing. Along the cochlea, the BM varies in stiffness. 
The base of the cochlea (near the oval window) is the stiffest region, which becomes more flexible as it spirals towards the apex. This varied stiffness serves as a spectrum analyzer along the cochlea. The hair cells closer to the base are specifically tuned to high frequency spectra while the hair cells closer to the apex are tuned to low frequency spectra. Interestingly, this spectral organization remains throughout the auditory nerve complex as well as through the brain stem and brain cortices (central auditory pathway). In fact, from the moment the sound stimuli are at the point of the auricle, they will be filtered, amplitude modulated, band pass filtered, low passed filtered, and then rectified while being transduced from a mechanical to electrical signal. This is an extraordinary engineering and physiological feat that we try to duplicate with cochlear implants. Thus far we have only mentioned the spectral processing of the auditory system. The reason why humans can acquire and process speech as well as other complex perceptions (e.g., music) is their ability to process sound in both spectral and temporal cues. Unfortunately, our current cochlear implants discard most temporal information. http://lifesciences.ieee.org/images/pdf/201301_ci.pdf
BTW, the above-described sophisticated system, which appears to be designed, really came to be by the magic formula RV+NS+T and/or the 3rd way. (tongue in cheek)
Dionisio
June 22, 2014, 06:32 PM PDT
Aside: 1 to the power of 10 billion is still 1. Sounds like the scientists who confidently explained the operation of the sun before nuclear fusion was discovered. The calculations showed they were wrong, but that wouldn't stop them from claiming to be the source of all knowledge.
Peter
June 22, 2014, 04:01 PM PDT
27 Neil Rickert, follow-up to post #31: reverse engineering is a relatively old activity. A few old reverse-engineering stories: during the Cold War the Soviets reverse-engineered the IBM 360 and 370 mainframe systems, as well as some DEC PDP minicomputers, among other things they cloned back then: http://en.wikipedia.org/wiki/ES_EVM http://old.cistp.gatech.edu/programs/inter-diff-innov-info-tech/docs/The%20Soviet%20Bloc's%20Unified%20System%20of%20Computers.pdf http://lgm.cl/tvtropes/Main/Elektronika-60.html
Dionisio
June 22, 2014, 03:52 PM PDT
27 Neil Rickert
Consider the case of cochlear implants. The brain itself works out how to complete the wiring. And it cannot have been in the DNA, because cochlear implants are different from what is in the normal ear.
Thank you for bringing up this interesting example of reverse-engineering. I'll be back on this later. I think we can squeeze quite a bit of juice out of this ;-) In the web page associated with the link you provided I could read this:
The quality of sound is different from natural hearing, with less sound information being received and processed by the brain.
Note the highlighted text. I want to expand on this later. Basically, they have reverse-engineered the initial sensor, including the signal converter (air pressure to electrical impulses), which seems to be a function of the damaged cells in that area of the hearing system. This is quite an achievement. BTW, IEEE has much better information on this subject than Wikipedia, but that's fine for the moment. We could expand on that too later. Now I'm busy with other stuff, but definitely would like to chat with you on this subject.
Dionisio
June 22, 2014, 02:34 PM PDT
Roy states that "I have never seen anyone claim that their disbelief in any god is an argument, I can see no point to your question,," Really??? Dawkins, after he quoted his infamous argument from evil from 'the God Delusion' said the probability of God not existing was much higher than 50% Ben Stein vs. Richard Dawkins Interview https://www.youtube.com/watch?v=GlZtEjtlirc As the preceding video shows, Atheists make (bad) theistic arguments all the time as to their disbelief (incredulity) as to how God will create. For instance: It Is Unfathomable That a Loving Higher Intelligence Created the Species – Cornelius Hunter - June 2012 Excerpt: "This inescapable empirical truth is as understandable in the light of mechanistic genetic operations as it is unfathomable as the act of a loving higher intelligence. [112]" - Dr. John Avise - "Inside The Human Genome" "No one looking at the vast extent of the universe and the completely random location of homo sapiens within it (in both space and time) could seriously maintain that the whole thing was intentionally created for us.,,, ,,,If we have any understanding at all of how an intelligent agent capable of creating the material universe would act if it had such an intention, we would say it would not create the huge structure we see, most of it completely irrelevant for life on Earth, with the Earth in such a seemingly random location, and with humans appearing only after a long and rather random course of evolution." 
Tim Maudlin - NYU philosopher Telling Theists What They Think: Philosopher Versus Philosopher at the New York Times - David Klinghoffer - June 19, 2014 http://www.evolutionnews.org/2014/06/telling_theists086931.html even 'Origin of Species' is full of such sophomoric Theistic reasoning: Charles Darwin, Theologian: Major New Article on Darwin's Use of Theology in the Origin of Species - May 2011 Excerpt: The Origin supplies abundant evidence of theology in action; as Dilley observes: I have argued that, in the first edition of the Origin, Darwin drew upon at least the following positiva theological claims in his case for descent with modification (and against special creation): 1. Human begins are not justfied in believing that God creates in ways analogous to the intellectual powers of the human mind. 2. A God who is free to create as He wishes would create new biological limbs de novo rather than from a common pattern. 3. A respectable deity would create biological structures in accord with a human conception of the 'simplest mode' to accomplish the functions of these structures. 4. God would only create the minimum structure required for a given part's function. 5. God does not provide false empirical information about the origins of organisms. 6. God impressed the laws of nature on matter. 7. God directly created the first 'primordial' life. 8. God did not perform miracles within organic history subsequent to the creation of the first life. 9. A 'distant' God is not morally culpable for natural pain and suffering. 10. The God of special creation, who allegedly performed miracles in organic history, is not plausible given the presence of natural pain and suffering. http://www.evolutionnews.org/2011/05/charles_darwin_theologian_majo046391.html In this following video Dr. William Lane Craig is surprised to find that evolutionary biologist Dr. 
Ayala uses theological argumentation in his book to support Darwinism and invites him to present evidence, any evidence at all, that Darwinism can do what he claims it can: Refuting The Myth Of 'Bad Design' vs. Intelligent Design - William Lane Craig - video http://www.youtube.com/watch?v=uIzdieauxZg atheists have their theology, which is basically: "God, if he existed, wouldn't do it this way (because) if I were God, I wouldn't (do it that way)." http://www.evolutionnews.org/2014/05/creationists_th085691.html Dr. Seuss Biology | Origins with Dr. Paul A. Nelson - video http://www.youtube.com/watch?v=HVx42Izp1ek etc., etc.
bornagain77
June 22, 2014, 02:07 PM PDT
Re #15, your incredulity is still not an argument.
Roy
June 22, 2014, 01:35 PM PDT
“incredulity is not an argument” Does this include the incredulity of atheists towards God?
Yes. But since I have never seen anyone claim that their disbelief in any god is an argument, I can see no point to your question other than to distract from the impotence of the article above.
Roy
June 22, 2014, 01:24 PM PDT
How difficult would it be to create the connectome of the brian.
That would be pretty much impossible (I'm assuming that you meant "brain"). But that might not matter. Consider the case of cochlear implants. The brain itself works out how to complete the wiring. And it cannot have been in the DNA, because cochlear implants are different from what is in the normal ear.
Neil Rickert
June 22, 2014, 12:03 PM PDT
Why is making billions of copies of the same protein that much more information heavy than just one? It would take a lot fewer than 60 megabytes of code to jury-rig Nethack into creating 200 billion trillion copies of the Mazes of Menace, but I don’t think you would consider that a comparable feat. And anyway, 60 megabytes is a lot of information, while 200 billion trillion molecules is only about a third of a mole.
You miss the point, it is coordinating the copies to make systems. It's easy to spew billions of copies of English alphabetic letters by a random copy algorithm, it is considerably more difficult to arrange the copies of the letters into a compelling novel. We are led to think that just because we can make a protein, that it is an easy step to integrate them into a living organism. Not so. If it were that easy, we could put frogs in blenders and expect something incredibly integrated to come out as result. :-)
And anyway, 60 megabytes is a lot of information, while 200 billion trillion molecules is only about a third of a mole.
You're confusing units. Bytes of information are not measured in moles. :roll:
scordova
June 22, 2014, 11:40 AM PDT
I most certainly think there is a tremendous compression algorithm to store the information in the DNA. Also, I don't think the necessary information is contained only in the DNA but implicitly in the rest of the cell. Something can represent information without necessarily being readable. A car represents a lot of information even though it is not readable like computer memory or DNA. In like manner the machinery of a cell is an informed structure without which life would cease. There is a lot of implicit information in the rest of the cell, and not only would the DNA have to evolve, but also the accompanying hardware. I believe there is data compression in the cell, and the compression would entail algorithmic compression, but compression algorithms entail non-trivial design. There are the conflicting requirements of compactness (80 megabytes, 800 megabytes... who knows) and robustness (redundancy and fault tolerance). How these conflicting requirements can be optimally met is incredible. I'm just saying, from an engineering standpoint, if we gave the task to a team of engineers today, I'd think many would find it impossible with known technology. The only reason they'd find it theoretically feasible is that it happens in the biological world already. Let's say for the sake of argument the information to build a human can be stored in 80 megabytes. The skill of the designer would be exceptional, beyond known human ability, certainly far more than what would be expected from a few hundred million iterations of the random walk of neutral molecular evolution.
scordova
June 22, 2014, 11:07 AM PDT
SalC: I think much of your answer lies here:
If I were a designer and had only 800 megabytes of DNA to specify something as complex as a human, I would think I’d have to store lots of assembly instructions outside the DNA and implement some of the manufacturing process instructions outside of the DNA. If some of the manufacturing process instructions are implicitly stored outside of DNA, then that is yet more information we have to account for, and if we include that, I think it strains credulity that a mindless process could construct something as complex as a human.
First, switch selection [sequences . . .] on a case structure can do a huge amount of compression. Beyond that there is obviously a LOT of interwoven coding . . . something I was ever so grateful I never had to try in doing micro design. Next, much of the dismissiveness on alleged junk is probably unwarranted. Finally, such are signs of how much more we have to understand. KF PS: "Knockout and it still works" is usually a sign of graceful degradation fail-safes and maybe of rarely used defensive code. A system designed to go from (is it?) 1 cell to 12 trillion, then operate for at least 50 - 80 more years, is going to have to cover a lot of unusual contingencies that don't readily show up. But as they say, when you need such you need it bad. Tough luck if it's been knocked out as "useless."
kairosfocus
June 22, 2014, 10:41 AM PDT
Some will argue we can knock out a lot of a cell and it still functions,...
"...and it still functions"? Really? Exactly the same way? Exactly in all the same circumstances? Is this an absolute statement? How can we know if it still functions, if we don't know exactly how it functions or what exactly it does? We had this situation many times, where the certification guys tested the product before it was released, but later the users would report bugs that went undetected through the "thorough" testing procedures we had implemented. Some oddball cases had been left untested. Oops! Some users ran the program in those untested conditions and the program failed or gave results that seemed correct but really weren't. Years of experience in software development for engineering design applications taught me a few lessons.
Dionisio
June 22, 2014, 09:24 AM PDT
Now if Dan Graur and Larry Moran’s numbers are to be believed, then we’re getting a lot of mileage out of a mere 80 megabytes.
At this point, I would not be so concerned about any amount of megabytes or gigabytes, while we still have so much to research and understand, in terms of specific mechanisms, all interrelated, behind the mind-boggling choreographies and orchestrations that researchers are discovering these days. I'd rather be more concerned about the detailed description of the mechanisms. But first we need to understand them well. If one reads the posts #9 through #98 in the thread https://uncommondescent.com/evolution/a-third-way-of-evolution/, one could realize how fast things are being discovered these days, but at the same time, how many things are poorly understood, if at all. Scientific research should continue at an intensive pace, in the right direction, open-minded, questioning everything, so more light can be shed on the complex systems being studied. The data avalanche should be processed correctly, to extract as much information as possible. Then detailed descriptions should be written. Is there a better alternative? This is why I look forward, with much anticipation, to reading more reports coming out of the research labs. In my case, at this point, I'm mainly interested in the mechanisms behind the processes occurring during the first few days of human embryonic development. That's enough to keep me very busy and sleepless for quite a while. Just trying to gather all that information is an exhausting task. Then digesting it requires a tremendous effort. Once it is well understood, it shouldn't be that difficult to describe it in detail. Actually, it is fun to reach that stage. These are indeed exciting times to watch what is going on in science. We may want to encourage more young people to pursue exciting careers in science research and support those who have taken that path already.
Dionisio
June 22, 2014, 09:10 AM PDT
Ha! Thanks, Dionisio!
Bateman
June 22, 2014, 08:19 AM PDT
How difficult would it be to create the connectome of the brian. From the organization that was founded by Bill Gates who marveled at the great complexity of biological software:
http://research.microsoft.com/en-us/collaboration/fourthparadigm/4th_paradigm_book_part2_lichtman_cohen.pdf To get a sense of the scale of the problem, consider the cerebral cortex of the human brain, which contains more than 160 trillion synaptic connections. These connections originate from billions of neurons. Each neuron receives synaptic connections from hundreds or even thousands of different neurons, and each sends information via synapses to a similar number of target neurons. This enormous fan-in and fan-out can occur because each neuron is geometrically complicated, possessing many receptive processes (dendrites) and one highly branched outflow process (an axon) that can extend over relatively long distances. One might hope to be able to reverse engineer the circuits in the brain. In other words, if we could only tease apart the individual neurons and see which one is connected to which and with what strength, we might at least begin to have the tools to decode the functioning of a particular circuit. The staggering numbers and complex cellular shapes are not the only daunting aspects of the problem. The circuits that connect nerve cells are nanoscopic in scale. The density of synapses in the cerebral cortex is approximately 300 million per cubic millimeter. Functional magnetic resonance imaging (fMRI) has provided glimpses into the macroscopic 3-D workings of the brain. However, the finest resolution of fMRI is approximately 1 cubic millimeter per voxel—the same cubic millimeter that can contain 300 million synapses. Thus there is a huge amount of circuitry in even the most finely resolved functional images of the human brain. Moreover, the size of these synapses falls below the diffraction-limited resolution of traditional optical imaging technologies. Circuit mapping could potentially be amenable to analysis based on color coding of neuronal processes [1] and/or the use of techniques that break through the diffraction limit [2]. 
Presently, the gold standard for analyzing synaptic connections is to use electron microscopy (EM), whose nanometer (nm) resolution is more than sufficient to ascertain the finest details of neural connections. But to map circuits, one must overcome a technical hurdle: EM typically images very thin sections (tens of nanometers in thickness), so reconstructing a volume requires a "serial reconstruction" whereby the image information from contiguous slices of the same volume is recomposed into a volumetric dataset. There are several ways to generate such volumetric data (see, for example, [3-5]), but all of these have the potential to generate astonishingly large digital image data libraries, as described next. Some Numbers: If one were to reconstruct by EM all the synaptic circuitry in 1 cubic mm of brain (roughly what might fit on the head of a pin), one would need a set of serial images spanning a millimeter in depth. Unambiguously resolving all the axonal and dendritic branches would require sectioning at probably no more than 30 nm. Thus the 1 mm depth would require 33,000 images. Each image should have at least 10 nm lateral resolution to discern all the vesicles (the source of the neurotransmitters) and synapse types. A square-millimeter image at 5 nm resolution is an image that has ~4 × 10^10 pixels, or 10 to 20 gigapixels. So the image data in 1 cubic mm will be in the range of 1 petabyte (2^50 ≈ 1,000,000,000,000,000 bytes). The human brain contains nearly 1 million cubic mm of neural tissue.
Now if Dan Graur and Larry Moran's numbers are to be believed, then we're getting a lot of mileage out of a mere 80 megabytes. Amazing that 80 megabytes could specify the organization and manufacturing of such a nano wonder as the brain, let alone the immune system and the rest of the body that exceeds the abilities of even the gigantic computer factories on the planet. :cool: As I said, if you think we can compress all this information into 80 megabytes, then this implies an ingenious compression algorithm; or alternatively the whole starting cell of a human at conception is information; or maybe it's all information and ingeniously compressed. In any case, a genius designer is implicated. Some will argue we can knock out a lot of a cell and it still functions, but that may evidence that the information is redundantly and robustly stored, not that it is junk!
scordova
June 22, 2014, 08:06 AM PDT
Bateman,
I type from an iPhone; please forgive the formatting.
No, that's unacceptable! Just kidding ;-) From a smartphone? I couldn't have written it better even using the best word processing software on a 17" laptop.
Dionisio
June 22, 2014, 08:01 AM PDT
Acartia_bogart:
If there are 20,000 coding genes, there are 20,000 unique proteins, not 200 billion trillion.
That is incorrect. Alternative gene splicing can produce many proteins from one gene. And we know that humans have more proteins than genes.
Joe
June 22, 2014, 07:44 AM PDT
Neil Rickert:
The genes do not need to specify the final adult form, nor do they need to specify all of the intermediate development forms. Instead, they need to specify a development program.
They don't even do that. Genes influence development but they do not determine what develops.
Two identical twins are not actually identical.
They have differences in DNA.

Joe
June 22, 2014 at 07:42 AM PDT
Wait wait wait.... Based on this discussion, can I extrapolate that we really don't completely know how DNA affects development? Is much of it unwitnessed theorization? Do we know enough to make broad philosophical conclusions? Did these philosophical conclusions come before or after the data?

Just a lowly social scientist here - and my fields are chock full of unsubstantiated a priori philosophical/political assumptions, so I risk sounding hypocritical, but: how can one commit to a life-altering philosophy based solely on incomplete data, then reapply that bias to the development of more useless data? If scientific purity is the goal, shouldn't the materialist and theist alike throw off the assumptions in the lab and go where the evidence takes them? What makes materialism the "scientific default" philosophical assumption? I am willing to go the route of "data first" (or "sola evidentia?"); I am quite sure it leads to theism as well. See Antony Flew:

"Antony Flew: There were two factors in particular that were decisive. One was my growing empathy with the insight of Einstein and other noted scientists that there had to be an Intelligence behind the integrated complexity of the physical Universe. The second was my own insight that the integrated complexity of life itself—which is far more complex than the physical Universe—can only be explained in terms of an Intelligent Source. I believe that the origin of life and reproduction simply cannot be explained from a biological standpoint despite numerous efforts to do so. With every passing year, the more that was discovered about the richness and inherent intelligence of life, the less it seemed likely that a chemical soup could magically generate the genetic code. The difference between life and non-life, it became apparent to me, was ontological and not chemical. The best confirmation of this radical gulf is Richard Dawkins' comical effort to argue in The God Delusion that the origin of life can be attributed to a 'lucky chance.' If that's the best argument you have, then the game is over. No, I did not hear a Voice. It was the evidence itself that led me to this conclusion."
http://www.strangenotions.com/flew/

Flew looked at the evidence that we have, not the evidence that we theorized to exist. Perhaps I need more coffee this morning, but I cannot see any reason one can decisively conclude on materialism as a default ethic for science. Dionisio's point is related; and BA77 quoted Turek at a good time; it takes wishful thinking (I don't like Turek's use of the word faith) to assume materialism before ingesting the data. I type from an iPhone; please forgive the formatting. And I apologize for the off topic wandering.

Bateman
June 22, 2014 at 07:33 AM PDT
Your incredulity is not an argument.
Well, if you want to believe 80 megabytes is sufficient to implement a brain as capable as the ones Albert Einstein or Alan Turing or Claude Shannon had, you are welcome to do so. Whether I disbelieve 80 megabytes or not is irrelevant; whether the numbers make it feasible is the issue. I am pointing out the implication of Dan Graur and Larry Moran's numbers -- are they arguing their brains are about as capable as an 80-megabyte computer will allow? The problem is that if only 80 megabytes are used, then this shows incredible programming skill that none of our best computer scientists could ever conceive of. If it is not 80 megabytes -- if the whole DNA genome plus a sizeable epigenetic storehouse of implicit information is involved -- then that also evidences ingenious architecture, since it means a lot of information and technology is involved. But even then, if a human zygote has 10 billion physical proteins, that still seems a little small in terms of storage capacity (assuming somehow proteins are information bearing). It suggests an ingenious information architecture that can so economically represent something as complex as a human brain and body.

scordova
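For reference, the 80-megabyte figure being debated here is a few lines of arithmetic. This sketch re-derives it using the assumptions from the original post (3.2 billion base pairs, Graur's mid-range 10% functional estimate, 2 Shannon bits per base):

```python
# Re-deriving the 80-megabyte estimate discussed above.
# Inputs are the original post's assumptions, not measured values.

genome_bp = 3.2e9            # human genome, base pairs
functional_fraction = 0.10   # mid-range of Graur's 5-15% estimate
bits_per_base = 2            # 2 Shannon bits per nucleotide

functional_bits = genome_bp * functional_fraction * bits_per_base  # 640 million bits
functional_bytes = functional_bits / 8                             # 80 million bytes
print(f"{functional_bytes / 1e6:.0f} megabytes")                   # prints "80 megabytes"
```

Doubling or halving the functional fraction scales the answer linearly, so Graur's full 5-15% range corresponds to roughly 40 to 120 megabytes.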
June 22, 2014 at 07:18 AM PDT
Here's a challenge to anyone and their cousins: open this thread https://uncommondescent.com/evolution/a-third-way-of-evolution/ and try to resolve the issues raised in the comments starting @ post #9. If you're just interested in philosophical discussions, then you may join the third way in their OOL research. If you're more concerned about the technical "where's the beef?" part, then join the researchers who work hard trying to understand the currently known systems. Just analyze the data avalanche coming out of the labs and figure out what it means. I will highly appreciate it if you share your findings, so I can use them in my project. I promise to give due credit to the source of the information. The ultimate goal is to describe in detail the whole enchilada. Have fun!

Dionisio
June 22, 2014 at 07:03 AM PDT
Of related note:

How many different cells are there in complex organisms?
Excerpt: The nematode worm Caenorhabditis elegans, the cellular ontogeny of which has been precisely mapped, has 1,179 and 1,090 distinct somatic cells (including those that undergo programmed cell death) in the male and female, respectively, each with a defined history and fate. Therefore, if we take the developmental trajectories and cell position into account, C. elegans has 10^3 different cell identities, even if many of these cells are functionally similar. By this reasoning, although the number of different cell types in mammals is often considered to lie in the order of hundreds, it is actually in the order of 10^12 if their positional identity and specific ontogeny are considered. Humans have an estimated 10^14 cells, mostly positioned in precise ways and with precise organization, shape and function, in skeletal architecture, musculature and organ type, many of which (such as the nose) show inherited idiosyncrasies. Even if the actual number of cells with distinct identities is discounted by a factor of 100 (on the basis that 99% of the cells are simply clonal expansions of a particular cell type in a particular location or under particular conditions (for example, fat, muscle or immune cells)), there are still 10^12 positionally different cell types.
http://ai.stanford.edu/~serafim/CS374_2006/papers/Mattick_NRG2004.pdf

Cell Positioning Uses "Good Design" - March 2, 2013
Excerpt: All in all, we see a complex answer to a simple question: how does a cell know where it is? Here we have seen multiple interacting mechanisms for gathering information from a noisy environment, refining it, and making decisions reliably. This is a form of irreducible complexity -- not so much of physical parts interacting, but strategies interacting, much like a software engineer would use multiple strategies to provide robustness for high-reliability software. Cells are so good at it, they gain "exceedingly reliable" information even from noisy, unreliable inputs.,, "In biology, simple questions rarely have simple answers, and "how do cells know where they are?" is no exception.",,, Lander says nothing about how these sensory strategies might have evolved by a Darwinian process. Indeed, Darwinian theory is essentially useless to the entire discussion.,,,
http://www.evolutionnews.org/2013/03/cell_positionin069471.html

bornagain77
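The headline number in the quoted Mattick excerpt is a one-line discount calculation; as a sketch of the arithmetic it quotes:

```python
# Checking the discount arithmetic from the quoted excerpt:
# ~1e14 human cells, discounted by a factor of 100 on the stated
# assumption that 99% of cells are clonal expansions of some cell type.

human_cells = 1e14
clonal_discount = 100
positional_cell_types = human_cells / clonal_discount
print(f"positionally distinct cell types: {positional_cell_types:.0e}")  # 1e+12
```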
June 22, 2014 at 06:45 AM PDT
Most comments in the thread https://uncommondescent.com/evolution/a-third-way-of-evolution/ are about a few specific biological mechanisms that have not been fully described yet. Please note that most comments in the indicated thread deal with a narrow area of biology, mainly the cell fate determination mechanisms, though some comments may refer to other biological systems too. Hence most comments have to do with a subset of the entire biological enchilada. How much 'code' would be required in order to simulate their functionality in a 4D dynamic model system implemented in a computer? Perhaps this hasn't been done because a complete description isn't available yet? How difficult would it be to develop a comprehensive 4D simulation system before one has the entire picture well described in the programming tech specs? Would it make sense to develop an incomplete simulation system that could be incrementally adjusted and enhanced as more information about the actual functioning of the real system becomes available later?

There are many biological mechanisms that are not fully understood yet, hence not well described either. Most comments in the thread https://uncommondescent.com/evolution/a-third-way-of-evolution/ , especially starting at post #9, have to do with a very small subset of the currently known mechanisms in biological systems. We can't think of simulating the whole human system if so much of it can't be properly described yet. First things first. The number of megabytes or gigabytes is a rather irrelevant issue when one is trying to understand a system in order to describe it in detail for simulation purposes. Everyone is kindly invited to help with the issues raised by the comments in the above referred thread. Some discussions quickly turn into philosophical arguments between opposite, irreconcilable worldview positions. Sometimes that could be a waste of time, unless the main purpose is to attract visitors. In the case of this simulation software development, most information used for writing programming tech specs usually comes from reports produced by the researchers and published in specialized online media. Those reports are used by teams of scientists/engineers/software analysts in order to write the required system description, which eventually can be translated into detailed technical specifications. Pretty simple.

Dionisio
June 22, 2014 at 06:44 AM PDT