Uncommon Descent Serving The Intelligent Design Community

Why describing DNA as “software” doesn’t really work

Check out Science Uprising 3. In contemporary culture, we are asked to believe – in an impressive break with observed reality – that the code of life wrote itself:

… mainstream studies are funded, some perhaps with tax money, on why so many people don’t “believe in” evolution (as the creation story of materialism). The fact that their doubt is treated as a puzzling public problem should apprise any thoughtful person as to the level of credulity contemporary culture demands in this matter.

So we are left with a dilemma: The film argues that there is a mind underlying the universe. If there is no such mind, there must at least be something that can do everything that a cosmic mind could do to bring the universe and life into existence. And that entity cannot, logically, simply be one of the many features of the universe.

Yet, surprisingly, one doesn’t hear much about mainstream studies that investigate why anyone would believe an account of the history of life that is so obviously untrue to reason and evidence.

– Denyse O’Leary, “There is a glitch in the description of DNA as software” at Mind Matters News

Maybe a little uprising wouldn’t hurt.

Here at UD News, we didn’t realize that anyone else had a sense of the ridiculous. Maybe the kids do?

See also: Episode One: Reality: Real vs. material

and

Episode Two: No, You’re Not a Robot Made of Meat

Notes on previous episodes

Seven minutes to goosebumps (Robert J. Marks) A new short film series takes on materialism in science, including that of AI’s pop prophets

Science Uprising: Stop ignoring evidence for the existence of the human mind Materialism enables irrational ideas about ourselves to compete with rational ones on an equal basis. It won’t work (Denyse O’Leary)

and

Does vivid imagination help “explain” consciousness? A popular science magazine struggles to make the case. (Denyse O’Leary)

Further reading on DNA as a code: Could DNA be hacked, like software? It’s already been done. As a language, DNA can carry malicious messages

and

How a computer programmer looks at DNA And finds it to be “amazing” code

Follow UD News at Twitter!

Comments
DaveS: "My compliments to gpuccio. His detailed and very clear explanation makes the abstract to the Szostak paper comprehensible." OK, so I share that with Szostak. Good, so I will feel less of a "bad guy" each time I criticize his paper about the ATP binding protein (and, unfortunately, that happens quite often here! :) )
gpuccio
June 22, 2019 at 09:03 AM PDT
DaveS: Yes, as I said, Szostak's definition of functional information is the same as mine. The null hypothesis has a fundamental role in inferring design from functional information, not in the definition of functional information itself. For obvious reasons, Szostak does not use the concept of functional information to infer design. That's why you don't see any mention of the null hypothesis in his paper. But functional information above a certain threshold is a safe marker of design, and allows us to infer design as the process which originated the configuration we are observing. Of course, that can be demonstrated separately. Up to now, the discussion was about the definition of functional information and its measurement, so I have stuck to that.
gpuccio
June 22, 2019 at 09:01 AM PDT
John_a_designer at #98: Szostak's definition is essentially the same as mine. Of course the function is defined in a context. There is no problem with that. However, the functional information corresponds to the minimal number of bits necessary to implement the function. The function definition will include the necessary context. For example, helicase will be defined as a protein that can "separate two annealed nucleic acid strands (i.e., DNA, RNA, or RNA-DNA hybrid) using energy derived from ATP hydrolysis" (from Wikipedia), of course in cells with nucleic acids and ATP.
gpuccio
June 22, 2019 at 08:56 AM PDT
DaveS at #97:
Is there a benefit to stating all this in terms of functional information? In order to do that, you have to estimate the relative frequency with which this function would occur naturally and the total number of "trials" that have occurred (10^-20 and 10^9), so you already know the chance of greater than zero functional trials is minuscule (assuming the null), hence the null hypothesis is likely false. Why not just stop there?
I am not sure I understand your point. Functional information and its measurement are essential to infer design. We can infer design when the functional information is high enough, in relation to the probabilistic resources of the system. Where should we "stop"? We stop when, after having measured the functional information for some function, and finding it high enough (for example, more than 500 bits), we infer design for the object. That was the purpose from the beginning, wasn't it?
gpuccio
June 22, 2019 at 08:51 AM PDT
Jad@98, thank you. That makes it clearer. And matches up with what I thought functional information is.
Brother Brian
June 22, 2019 at 07:37 AM PDT
The paper that JAD posted might answer my last question to gpuccio:
For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, E_x (e.g., the RNA–GTP binding energy), I(E_x) = −log_2[F(E_x)], where F(E_x) is the fraction of all possible configurations of the system that possess a degree of function ≥ E_x.
And this definition is very similar to the one gpuccio illustrated above. I don't see any dependence on the null hypothesis we discussed above (absence of design), however [Edit: Perhaps it's implicit?]. Would this matter? I guess the denominator in Szostak's version is simply the total number of possible configurations of the system, period, not the total number of configurations that are "reachable" through natural processes. ETA: My compliments to gpuccio. His detailed and very clear explanation makes the abstract to the Szostak paper comprehensible.
daveS
June 22, 2019 at 07:21 AM PDT
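For readers who want to see the quoted definition in running form, here is a minimal sketch of Szostak's I(E_x) = -log2(F(E_x)). The function name and the example counts below are ours, chosen only to illustrate the formula; they are not values from the paper.

```python
import math

def functional_information(n_functional: int, n_total: int) -> float:
    """Szostak's I(E_x) = -log2(F(E_x)), where F(E_x) is the fraction of all
    possible configurations whose degree of function is >= the threshold E_x."""
    if n_functional == 0:
        # Per the quoted definition, I(E_x) is undefined when M(E_x) = 0.
        raise ValueError("functional information is undefined when no configuration works")
    return -math.log2(n_functional / n_total)

# Hypothetical example: 1 configuration in 10^11 reaches the chosen degree of GTP binding.
print(functional_information(1, 10**11))  # about 36.5 bits
```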
A 2007 paper published in PNAS by Jack Szostak and his colleagues defines functional information this way: “Functional information is defined only in the context of a specific function x. For example, the functional information of a ribozyme may be greater than zero with respect to its ability to catalyze one specific reaction but will be zero with respect to many other reactions. Functional information therefore depends on both the system and on the specific function under consideration. Furthermore, if no configuration of a system is able to accomplish a specific function x [i.e., M(Ex) = 0], then the functional information corresponding to that function is undefined, no matter how structurally intricate or information-rich the arrangement of its agents.” https://www.pnas.org/content/104/suppl_1/8574
Take, for example, a bike sprocket. Without a system, the bicycle, the sprocket has no function. However, it still has a potential function and a purpose. If we find a sprocket in a warehouse next to a factory where they assemble bicycles, we could quickly deduce what the purpose of the sprocket is. In other words, it still has a purpose defined by its potential function. I was trying to make a similar point above at #44 when I talked about helicase and DNA replication. https://uncommondescent.com/intelligent-design/why-describing-dna-as-software-doesnt-really-work/#comment-679003
What is the function of helicase without the DNA helix? It has no other function. So it is highly specified.
john_a_designer
June 22, 2019 at 07:10 AM PDT
gpuccio, Is there a benefit to stating all this in terms of functional information? In order to do that, you have to estimate the relative frequency with which this function would occur naturally and the total number of "trials" that have occurred (10^-20 and 10^9), so you already know the chance of greater than zero functional trials is minuscule (assuming the null), hence the null hypothesis is likely false. Why not just stop there?
daveS
June 21, 2019 at 09:10 PM PDT
Brother Brian:
And I always thought standardization was a communication issue.
Context. You know, that word that you refuse to understand. Quote-mining, on the other hand, is something that you do quote well. The sentences after the one that you so cowardly quote-mined should have been explanation enough for someone who allegedly makes a living in the standardization field. But that is moot, as the programmer was using a standard, just the wrong one. Hence the communication issue.
ET
June 21, 2019 at 06:56 PM PDT
ET
I would say the Mars orbiter problem was a communication issue and not a standardization issue.
Silly me. And I always thought standardization was a communication issue. But what would I know? I only make a living in the standardization field.
Brother Brian
June 21, 2019 at 03:33 PM PDT
I would say the Mars orbiter problem was a communication issue and not a standardization issue. If the contract called for one thing and something else was delivered, that is a sign of a communication breakdown. But it does show how critical complex specified and functional information can be.
ET
June 21, 2019 at 03:03 PM PDT
timothya @ 91- No one here asked for evidence that unit standardization is a good thing.
ET
June 21, 2019 at 12:47 PM PDT
daveS
Oddly enough, I’m very intrigued by these sorts of events. I think I would find this evidence most convincing, if I could witness it myself.
That is very good to hear, and it tells me that you are open to the evidence, at least in this case, through direct experience. There are somewhat living artifacts or testimonies of design in the tilma of Guadalupe and the shroud. It's not a direct experience of the events, but at least artifacts that can be observed. In both cases, some inference must be drawn about the origin of both. I think the miracle of the sun is very difficult to explain from a materialist perspective, even though it is an historical event that is subject to that kind of analysis. The stigmata of St. Pio, for example, are documented with photos. But even here, there is always some room for denial. To me, they're strong evidence of design, but as you said previously, there's nothing that makes an absolute statement which is completely undeniable. I see that as part of the designer's methodology. Others think it is a weakness of the design perspective that nothing like a Shakespeare sonnet written in Morse code has ever been found in tree rings.
Silver Asiatic
June 21, 2019 at 12:24 PM PDT
Joe asks for evidence that unit standardisation is a good thing. Here is a negative example: https://en.m.wikipedia.org/wiki/Mars_Climate_Orbiter
timothya
June 21, 2019 at 12:22 PM PDT
DaveS: Yes, of course. The absence of design is the null hypothesis.
gpuccio
June 21, 2019 at 12:07 PM PDT
gpuccio, OK, I might be getting it. Would it be correct to say this functional information measure is always relative to a "null hypothesis" (in this case, that the 1-liter solid was produced by natural processes on this 1-billion-stone beach)?
daveS
June 21, 2019 at 10:24 AM PDT
DaveS: The meaning of the value in bits is more intuitive when we are dealing with digital information in native form. However, the meaning is always the same: -log2 of the ratio between target space and search space.
gpuccio
June 21, 2019 at 10:06 AM PDT
DaveS at #85: It is the minimum level of precision that I have to achieve to get that exact volume. Of course it is not a true measure. I have derived it from the hypothesis that such a precise volume could be attained by chance in that system only once in 10^20 attempts: 1:10^20 = 10^-20, and -log2(10^-20) = about 66 bits.
gpuccio
June 21, 2019 at 10:00 AM PDT
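A quick check of the conversion described above, turning the hypothetical chance of 1 in 10^20 into bits. The helper name is ours; the 10^-20 figure is the assumption from the comment.

```python
import math

def bits_of_functional_information(p: float) -> float:
    """Convert the probability of hitting the target by chance into bits: -log2(p)."""
    return -math.log2(p)

# Hypothetical figure from the comment: one chance in 10^20 attempts.
print(bits_of_functional_information(1e-20))  # about 66.4 bits
```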
DaveS at #83: It is also interesting to consider that the functional information when it was designed was relative to its properties at the moment of its design (for example, to correspond rather well to the weight of one liter of water). Its use as a reference after it was built is something "after the fact", so it has nothing to do with the functional information. Remember, the functional information measures the bits that have to be configured to make an object able to perform a pre-defined, independent function. If we take a random stone and decide that it will be the new kilogram from now on, whatever its weight, there is no functional information in the object: we are just using it for a function that we define using the configuration of the object itself. The new function is designed, but not the original object. Functional information means that the designer has to configure exactly a number of bits because, otherwise, the independent function cannot be implemented by the object. Natural causes can generate some functional information, but it is always very low, in the range of what probabilistic resources can do. That's why paperweights abound in nature without any need for design, but watches and computers don't.
gpuccio
June 21, 2019 at 09:57 AM PDT
gpuccio, Thanks for the responses. My question then becomes, how does the number 66 bits quantify the amount of information needed to implement the function? If I actually wanted to create a solid object which displaces 1 liter, how does 66 bits fit into the design and/or construction process?
daveS
June 21, 2019 at 09:49 AM PDT
DaveS at #83: Its functional information is linked to the way it was designed at the beginning, to satisfy certain requirements. It has not changed. Only its use has changed now, but that has nothing to do with the specific configuration that was given to the object when it was designed.
gpuccio
June 21, 2019 at 09:49 AM PDT
Brother Brian, Regarding the kilogram example, I think it has changed in a sense. In the past, the kilogram was by definition exactly the mass of this object. It was correct to infinitely many decimal places, so to speak. Now its mass is just very close to 1 kg (and the error varies as atoms occasionally fly off it). Perhaps that means its functional information has changed.
daveS
June 21, 2019 at 09:40 AM PDT
DaveS at #71:
If that’s directed toward me, then of course no one said otherwise. gpuccio says that “any possible function will do”, which I understand as implying that we should be able to calculate the functional information required to implement the paperweight function I described.
Of course that's correct. I think I have shown the procedure. A real computation requires defining a system and time window, and a precise functional definition. And, of course, some real work.
gpuccio
June 21, 2019 at 09:29 AM PDT
DaveS at #65:
Now to be fair, gpuccio would likely respond by asking for more information about the specific function. For example, perhaps this paperweight needs to be able to hold down a stack of twenty A4 sheets of paper (say 80 gsm) in a 10 km/hr breeze. I would be curious to see if anyone can come up with a number in bits.
I hope my previous posts have clarified my views about that.
gpuccio
June 21, 2019 at 09:26 AM PDT
Brother Brian at #63:
you mention that the same object can have different functions, but does that mean that it has more than one measure of functional information.
That's perfectly correct. See my example in the previous post (laptop computer used as a paperweight).
For example, the artifact that was used as the standard kilogram for over a century surely has a tremendous amount of functional information. But as of a few months ago, it is little more than a paper weight. Has it lost its functional information?
Absolutely not. It was designed to correspond to a very precise level to be used as a reference. That was true up to May 20, 2019. Now the standard has changed, but that does not change the function of the previous object, which has been used for a long time. And it has a rather precise correspondence to the mass of one liter of water, anyway. So, nothing has changed about its functional information.
gpuccio
June 21, 2019 at 09:24 AM PDT
daveS and gpuccio: Specification: that would be the specification the paperweight needed to meet. For example, perhaps this paperweight needs to be able to hold down a stack of twenty A4 sheets of paper (say 80 gsm) in a 10 km/hr breeze. We would also have to know if the stack had to be held down such that the papers don't bend or get damaged. So a stone used as a paperweight would have some functional information. But that functional information is imparted by the person who wants to meet some criteria, such as the above specification. The stone wasn't necessarily designed, unless it had to be cut and shaped, but its function was. (I was typing when gpuccio posted 78. I agree with 78.)
ET
June 21, 2019 at 09:19 AM PDT
DaveS and ET: About paperweights: Of course a paperweight has low functional information: if it is defined without great precision (no great precision is necessary, I would say), then a lot of possible objects qualify. Let's say that we only need a solid object weighing something between 1 and 2 kilograms. In one of my OPs, I have used the paperweight function to illustrate an object with two different functions and two different values of functional information. A laptop can be used as a paperweight and as a computer. As a paperweight, its functional information is very low. As a computer, it is extremely high. Is that clear?
gpuccio
June 21, 2019 at 09:15 AM PDT
DaveS at #59 (and others): So, let's go to your second question:
How much functional information is required to construct a mechanism which rotates a small metal shaft at a rate of 1 rotation per hour (i.e., a very simple clock)?
Again we can apply, in principle, the method described. We need a system where such an object could arise without design in some time window, and then we have to compute the probability of such an object arising by chance (of course, the function must be defined with precision), IOWs the ratio of spontaneous objects that would exhibit the function to the total number of objects generated in the system. I will not try to compute that for any system, but I would say that, if the function is defined with high precision, the probability will be really low. For a whole watch, I would rather blindly accept Paley's inference of design in any case and system.
gpuccio
June 21, 2019 at 09:11 AM PDT
DaveS at #59 (and others): Good questions from you and from others. I will try to answer them, in some order I hope. I think my answers will be useful to the general debate, so I invite all who are interested to read this post and those immediately following, whoever they are addressed to. So, your first question:
Should any nontriviality conditions be imposed on the concept of “function”? For example, I could ask how much functional information is necessary to implement a paperweight. How about “a solid object which displaces 1 liter of air”? These functions are obviously uninteresting, but it would seem under your definition, they should each possess (or specify?) a well-defined amount of functional information.
No. No conditions at all are imposed on the concept of function. Anybody can define any function he likes, and the functional complexity can be assessed (at least in principle; it is not always easy) for each of them. It is not important if the function is interesting or not. Usually, uninteresting functions will have low functional complexity, as I will try to show.
I list again the only rules that must always be respected in defining a function. They are not "conditions", just obvious procedures that must be followed to get the right result:
1) We can never define a function using the specific value of the bits already observed in the object. In that case, we would be taking the (generic) information observed in the object and using it to define an ad hoc function. That is obviously wrong. See my example of the safe, in post #53.
2) While the function is defined by an observer, it must be defined explicitly, so that it becomes an objective reference for anybody. There must be no ambiguity in the definition.
3) Included in the definition there must be a level that defines the function as present or absent. IOWs, we must be able to assess potential objects as exhibiting the function or not, in some objective way, direct or indirect.
4) Reasonings and measurements of functional complexity are never done abstractly. To be useful in inferring design, they must always refer to some specified system, time window, and so on.
To see that, let's try to apply those principles to your example: "A solid object which displaces 1 liter of air". This is a perfectly valid function definition, but incomplete. We need to know the system, the time window, and the level of precision to assess function. So, let's say we have a beach with approximately 1 billion stones, formed apparently by natural laws operating in that system over one million years. We observe a stone whose volume is one liter. Is it designed, or not?
Let's say we define our function as "having a volume of one liter with a precision of one part in a million". OK, that is more complete. First of all, the reference we are using (the liter) exists independently (we are not using the observed volume to define it). Of course, any stone could be defined as having the volume it has. In that case, we would be using the observed configuration (the volume of the observed stone) to define the function, and we know that that is not correct. But with the liter, we do not have that problem.
So, using our definition, we can in principle apply it to generate a binary partition on the set of all possible stones (which could include the billion we can observe, but also all those that could have been formed in the time window); that is, however, a finite number. Our binary partition will classify all possible objects in the system as exhibiting the function or not. At this point, the ratio of all possible objects in the system exhibiting the function (the target space) to all possible objects in the system (let's say a billion, the search space) gives the functional complexity of our function in that system, as -log2 of that ratio. We can try to compute that. It may not be easy, but in principle it can be done, possibly by indirect methods. The task here is more difficult because we are dealing with analog configurations. It's usually easier with information that is natively in digital form, like in most biological objects.
Now, let's say that in some way we compute that a stone randomly generated in our system by natural causes has a probability of satisfying our definition of 1:10^20, IOWs 10^-20, IOWs about 66 bits of functional information. Now we must consider the probabilistic resources of the system. If we evaluate that about one billion stones have been generated in the system in the time window, then the probabilistic resources are about 10^9, IOWs about 30 bits. So, we have a result that has a probability of 66 bits in a system which has probabilistic resources of 30 bits. There is a global improbability of observing that result of about 36 bits. And that is something.
Is that enough to infer design? Not according to the general, extremely conservative rule usually used in ID: 500 bits of functional information observed, whatever the probabilistic resources of the system. But, in the end, our conclusion depends on the system we are observing, the meaning of our conclusion, its generality, and so on. The 500-bit threshold is usually selected because it ensures utter improbability in practically any possible physical system in the universe, whatever the probabilistic resources. More in next post.
gpuccio
June 21, 2019 at 08:46 AM PDT
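A compact sketch of the bookkeeping in the stone-beach example above. The 10^-20 probability, the 10^9 stones, and the 500-bit threshold are the figures given in the comment; the variable names are ours, and the whole thing is only an illustration of the arithmetic, not a general design-detection procedure.

```python
import math

# Figures taken from the comment above (hypothetical stone-beach system).
p_function = 1e-20      # chance that one random stone satisfies the function definition
n_trials = 1e9          # stones generated in the time window (probabilistic resources)
THRESHOLD_BITS = 500    # the conservative threshold discussed in the thread

functional_info_bits = -math.log2(p_function)        # about 66 bits
resource_bits = math.log2(n_trials)                  # about 30 bits
margin_bits = functional_info_bits - resource_bits   # about 36 bits of "global improbability"

# Under the 500-bit rule described in the comment, this example falls far short.
infer_design = functional_info_bits > THRESHOLD_BITS  # False here

print(functional_info_bits, resource_bits, margin_bits, infer_design)
```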
Brother Brian:
It was critical for a century’s worth of advances in industry, technology and commerce.
Evidence, please. And don't ask me a question; just provide the reference to support your claim. Or stop making them.
ET
June 21, 2019 at 08:11 AM PDT