Uncommon Descent Serving The Intelligent Design Community

Fixing a Confusion

Categories: Darwinism, ID Foundations, Intelligent Design, specified complexity

I have often noticed a confusion about one of the major points of the Intelligent Design movement – whether or not the design inference is primarily based on the failure of Darwinism and/or mechanism.

This is expressed in a recent thread by a commenter saying, “The arguments for this view [Intelligent Design] are largely based on the improbability of other mechanisms (e.g. evolution) producing the world we observe.” I’m not going to name the commenter because this is a common confusion that a lot of people have.

The reason for this is largely historical. It used to be that the arguments for design were very plain. Biology proceeded according to a holistic plan both in the organism and the environment. This plan indicated a clear teleology – that the organism did things that were *for* something. These organisms exhibited a unity of being. This is evidence of design. It has no reference to probabilities or improbabilities of any mechanism. It is just evidence on its own.

Then, in the 19th century, Darwin suggested that there was another possibility for the reason for this cohesion – natural selection. Unity of plan and teleological design, according to Darwin, could also happen due to selection.

Thus, the original argument is:

X, Y, and Z indicate design

Darwin’s argument is:

X, Y, and Z could also indicate natural selection

So, therefore, we simply show that Darwin is wrong in this assertion. If Darwin is wrong, then the original evidence for design (which was not based on any probability) goes back to being evidence for design. The only reason for probabilities in the modern design argument is because Darwinites have said, “you can get that without design”, so we modeled NotDesign as well, to show that it can’t be done that way.

So, the *only* reason we are talking about probabilities is to answer an objection. The original evidence *remains* the primary evidence that it was based on. Answering the objection simply removes the objection.

As a case in point, CSI is based on the fact that designed things have a holistic unity. Thus, they follow a specification that is simpler than their overall arrangement. CSI is the quest to quantify this point. It does involve a chance rejection region as well, but the main point is that designs operate on principles simpler than their realization (which provides the reduced Kolmogorov complexity for the specificational complexity).
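The idea that designed things follow a specification simpler than their overall arrangement can be loosely illustrated with compressibility, a standard computable stand-in for Kolmogorov complexity. This is only a toy sketch, not Dembski's formal specified-complexity measure; the example strings and the use of zlib as a complexity proxy are illustrative assumptions:

```python
import random
import zlib

def compressed_bits(s: str) -> int:
    # zlib output length: a crude, computable upper bound on
    # Kolmogorov complexity (the true measure is uncomputable).
    return 8 * len(zlib.compress(s.encode()))

# A string that follows a simple specification ("repeat AB"), and a
# random string of the same length over the same alphabet.
patterned = "AB" * 500
rng = random.Random(0)
scrambled = "".join(rng.choice("AB") for _ in range(1000))

# The patterned string admits a description far shorter than itself;
# the random one does not compress much below its raw length.
print(compressed_bits(patterned) < compressed_bits(scrambled))  # True
```

The patterned string is "complex" in length but "simple" in description, which is the gap the specificational-complexity idea tries to exploit.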

Comments
Silver Asiatic I have nothing to add or change in gpuccio's explanation @132. It's crystal clear to me. Origenes also expressed his opinion on this @133 and I agree. I thank gpuccio and Origenes for their comments. The few things I did not understand in this discussion were clarified by gpuccio's explanations. I had to reread the comments and think carefully about their meaning before I was able to understand the whole idea. The various exchanges back and forth definitely helped me. I thank you for continuing to ask until things were clarified for all sides of the discussion. There are basic technical concepts and principles explained in this discussion which should remain as reference points for future discussions on this subject.Dionisio
December 11, 2016 at 03:29 AM PDT
Silver Asiatic @123 My two cents:
SA: If I presented three artifacts, the sources of which are known only to me, and I will reveal the sources: 1. A sonnet by Shakespeare 2. A poem by a contemporary poet 3. A computer-composed poem from randomized words following rules of grammar and poetics 1 – you would recognize Shakespeare already because you know the sonnets. However, you’re saying that since Shakespeare used information from the dictionary, Shakespeare is only recycling information from the dictionary, and there is no new information presented, right?
If we provide Shakespeare and ‘blind forces’ with access to a dictionary and grammar rules, then no new information at the sub-levels of letters and words is to be expected. You simply start your design inquiry at a higher level than at the level of letters. However, at the level of sentences (and higher levels) Shakespeare creates new information. Obviously, if we compare Shakespeare with monkeys on typewriters — and thus start at a different sub-level (no dictionaries and grammar rules) — then we get different calculations. The question is which starting point is most appropriate.
SA: Well, the computer program creates a poem that is grammatically correct. Can it be distinguished from a poem written by a contemporary author in terms of dFSCI alone?
We might come up with a brilliant specification that separates the two, but I'm not holding my breath. What seems to be important here is the notion that the design inference is, on one end, an imprecise instrument (many false negatives), yet it can detect design very reliably (no false positives).
SA: Keep in mind, the computer created something unique – not predicted by anyone who designed the software. So the software designers didn’t create the poem, the randomization plus rules created it. How is that different from what the human created directly as a poem?
The difference is that a computer has no intention, plan, meaning, teleology — whatever the appropriate term is. When we can formulate a specification with respect to a poem, mechanism, process or object, we can ‘retrieve’ the intention, plan, meaning or teleology of the designer. The fact that a specification is possible provides us with an argument for the idea that teleology has occurred. Teleology, in turn, points to an intelligent designer.Origenes
December 11, 2016 at 02:34 AM PDT
Silver Asiatic: Like Dionisio, I appreciate your questioning and the interesting discussion. However, there is nothing strange if in the end we disagree on some points, even important ones. Let's see the points you raise in post #123.
If I presented three artifacts, the sources of which are known only to me, and I will reveal the sources: 1. A sonnet by Shakespeare 2. A poem by a contemporary poet 3. A computer-composed poem from randomized words following rules of grammar and poetics
OK, I will immediately argue that I don't believe you can present the artifact number 3, and I will explain why later. For the moment, let's go on.
1 – you would recognize Shakespeare already because you know the sonnets.
That's really not relevant. Let's say it's a poem I don't know, like artifact 2. The fact that I already know a poem is irrelevant, and can only generate confusion.
However, you’re saying that since Shakespeare used information from the dictionary, Shakespeare is only recycling information from the dictionary, and there is no new information presented, right? The sonnet has no dFSCI?
I don't know why you have this strange idea. I am not saying that, and I have never said anything like that. What I said is: "If a computer generates a sequence formed by correct English words by filtering a random output according to a dictionary, the information in the output comes from the dictionary, not from the random output. The software itself is only recycling the information it already contains (the dictionary), using also the computational procedures programmed in the software. There is no generation of new original dFSCI." And you quoted exactly that paragraph.

What I am saying is that the information in a dictionary is designed, and the information in the software is designed. Therefore, if we infer design for a sequence generated that way (made of correct English words), and we can, according to my procedure, we are correct: that sequence has been designed, indirectly, by the programmer who wrote the software and implemented it with a dictionary. Even if the programmer did not know or represent the specific sequence that we observed, he knew and represented the following output: a machine that can generate sequences made of correct English words. What we observe is the result of that design, and we are correct to infer design for what we observe. Indeed, the function we define is "being formed by correct English words". That function was conceived, specified and implemented by the designer of the software. We recognize it and correctly infer that it was designed.

The important point is: the non-conscious system formed by the software and the dictionary did not add any further complex functional information: its only contribution is the contingency of what specific random words are in the sequence, and that contingency is random, and includes no further meaning or complex functional information. Therefore, the complex functional information we observe in the object was all designed by the programmer. 
And our inference of design is perfectly correct: it is a true positive.
Well, the computer program creates a poem that is grammatically correct. Can it be distinguished from a poem written by a contemporary author in terms of dFSCI alone?
Now I will explain why I don't believe that a computer program can create a poem that is originally creative. The simple idea is: non-conscious systems cannot create new specifications (meanings or functions that have not been already programmed in them) for the simple reason that they do not understand what meaning and purpose are: they have no subjective experience. IOWs, they are machines, and nothing else. For a more formal discussion related to this very important point, you may wish to look at the following video by Robert Marks, recently posted at UD: https://uncommondescent.com/intelligent-design/prof-bob-marks-on-what-computers-cant-do/

OK, but let's assume that a very complex computer, with a very complex software and a lot of information about grammar implemented, can generate a "poem" that is grammatically correct. I can only repeat what I already said. The poem can be recognized as designed, because of the information about words and grammar that it exhibits. If we have just the poem, we can safely infer that it is designed. And we are right. It is a true positive. But again, all the functional information that we are observing comes from the programmer of the software. He is the conscious agent who represented and implemented that information (sequences that respect grammar, generated by the machine I am implementing, acting on random seeds). Again, the random component adds no functional information to the output. The functional information we observed is designed. By the programmer.
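gpuccio's claim — that when random output is filtered through a designed dictionary, the functional information in the result comes from the dictionary, not from the randomness — can be sketched in a few lines. The mini-dictionary, word length, and word count here are hypothetical choices made only for illustration:

```python
import random

# The designed component: a (tiny, hypothetical) English dictionary.
DICTIONARY = {"cat", "dog", "sun", "sky", "sea", "ice"}

def generate_sentence(rng, n_words=4, max_tries=1_000_000):
    """Filter purely random 3-letter strings through the dictionary."""
    words = []
    for _ in range(max_tries):
        candidate = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz")
                            for _ in range(3))
        if candidate in DICTIONARY:  # the filter is where the information lives
            words.append(candidate)
            if len(words) == n_words:
                break
    return " ".join(words)

rng = random.Random(42)
sentence = generate_sentence(rng)
# Every surviving word matches the dictionary; the randomness contributed
# only the contingent choice among already-designed alternatives.
print(all(w in DICTIONARY for w in sentence.split()))  # True
```

The output always consists of dictionary words, whatever the seed: the generator supplies contingency, while the filter (the designed part) supplies the "correct English words" property.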
Keep in mind, the computer created something unique – not predicted by anyone who designed the software.
I cannot keep that in mind, because it is simply not true. The computer created exactly what the programmer predicted: a sequence that respects the rules of grammar. He predicted it, and he obtained the desired result. He certainly did not predict the specific random components of that sequence, but he certainly predicted that there would be a contingent component of the output that he could not anticipate. He certainly predicted that such a contingent component would be, indeed, contingent, random, and would add no further functional information to the result. Again, all the functional information we observe in the result is from the programmer.
So the software designers didn’t create the poem, the randomization plus rules created it.
See above. What you call "the poem" is simply a sequence of words that respects grammar. The software designer created the rule that is functional in it. All the rest is random, and bears no functional information.
How is that different than what the human created directly as a poem.
Two important points:

1) It is not different for what regards the design inference. We can infer design for both objects, and we are right in both cases. They are both true positives. We observe design in both. If we stick to the simple function, "a sequence made of English words that respect the rules of grammar", they both satisfy it. And in both cases, the information that satisfies the rule comes from a conscious designer: the programmer in the first case, the poet in the second.

2) However, there is obviously a difference between the two objects. The first one has no additional functional information other than respecting the rules of grammar, while the second has higher levels of meaning. Now, to make the discussion more clear, let's say that the second poem is about the mathematical demonstration of Pythagoras' theorem. I prefer that type of content, because it is more objective, while beauty and poetry are more difficult to define and detect. Now, a poem that respects the rules of grammar and conveys the demonstration of a theorem is certainly more than a sequence that simply respects the rules of grammar. And here the important point becomes clear, like a shining sun: can a software programmer write a software that will output a sequence made of correct English words, that respects the rules of grammar, and that conveys the demonstration of Pythagoras' theorem, starting from randomly generated sequences? He certainly can. It's not even difficult. The simple action that he must take is: include the sequence that demonstrates the theorem in the software, and check randomly generated sequences until the necessary words and sequence of words are found, using as an oracle the demonstration already inputted in the software. A simpler way would be to output the demonstration from the software to a printer! :) IOWs, we are here in the situation of the infamous "Methinks it is like a weasel" example! 
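The oracle-driven search described above is essentially Dawkins' "weasel" procedure: the target phrase is built into the program, and random variation is filtered by closeness to that built-in target. A minimal sketch (the population size, mutation rate, and generation cap are arbitrary illustrative choices):

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"  # the oracle, inputted by the programmer
ALPHABET = string.ascii_uppercase + " "

def weasel(rng, pop_size=100, mut_rate=0.05, max_gens=5000):
    """Return the generation at which the target is reached, or None."""
    def mutate(s):
        return "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                       for c in s)

    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    for gen in range(max_gens):
        if parent == TARGET:
            return gen
        children = [mutate(parent) for _ in range(pop_size)]
        # The oracle: score each child against the target already in the code.
        parent = max(children,
                     key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
    return None

gens = weasel(random.Random(1))
print(gens is not None)  # the search converges on the built-in target
```

Every bit of "functional information" in the final string traces back to `TARGET` and the scoring rule written by the programmer; the random seed supplies only contingency along the way.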
So, to sum up:

1) The second "poem" satisfies at least two functional definitions: a) A sequence made of English words that respect grammar. b) A sequence of English words that respect grammar and that conveys the demonstration of Pythagoras' theorem. Both definitions imply complex functional information, and allow us to infer design for any object that implements either of those two functions. The inference will be correct, and we again have true positives. For both functions, the relevant functional information comes from a conscious designer: it can be directly inputted into the poem (as in the case of the poet-mathematician) or it can be inputted by a programmer into a software that generates the object from randomly generated words. There is no difference. In the end, all the functional information in the object comes from the conscious designer. There can be a random component added by the non-conscious procedure (the random seeds). However, while that random component certainly is information, it certainly does not convey any complex functional information.

I think that can be enough. As I said, we can well remain in disagreement about those points. However, I believe they are very important points, and I am grateful that you gave me the occasion to clarify what I think. I have said it many other times, and I repeat it: ID is not a party, not a political movement, not an authority of any kind, and there is no need that those who think in the ID field must agree on everything. The beauty of ideas is that they speak for themselves: they need no consensus to be true, and they respect no outer authority, only the authority of their intrinsic value.gpuccio
December 11, 2016 at 02:20 AM PDT
Silver Asiatic @125:
As I pointed out on the Turing Test thread, we have a difference of opinion on several major points of ID theory within our community and those haven’t been sorted out.
Yes, agree. But that's understandable, because we are facing an exuberantly unfathomable design that can be barely described and practically can’t be fully understood at this point (or maybe never?).
I just got in the middle of it because it would get (and has gotten) pretty boring around here without any real discussions back and forth on issues.
Yes, agree. That's why I liked this discussion thread. Let's keep it on! Until the questions get satisfactorily answered if possible.
Maybe it would be better if we let all the trolls and crazy atheists come back and try to be offensive. At least it gives us something to talk about.
No, I don't agree, because I really don't miss those folks. But I don't control who's in and who's out of this 'arena'. That's none of my business. :) I prefer the Silver Asiatic - gpuccio discussion. It's more sincere. We all can benefit from it. Keep it on! Please! Thank you. PS. The 'sideline' is only to read and streamline the outstanding questions before getting back in the 'arena' to continue the discussion, on and off, until the questions get satisfactorily answered or we settle on agreeing to disagree because the subject has no known solution at the given moment. PPS. Did the PPS @130 help to clarify the 'poem' cases? PPPS. Did you get the 'finches' issue resolved satisfactorily?Dionisio
December 10, 2016 at 07:24 PM PDT
Silver Asiatic, Now it's your turn to either keep the discussion going or wrap it up. After all the above commentaries, do you see gpuccio's point now? Do you still have a question that hasn't been addressed clearly enough for you? Take your time, read carefully the comments here in this thread. Then come back and tell us what's next. I want to read your verdict. :) Thank you. PS. Funny, I just noticed you wrote the preceding comment almost simultaneously with me writing this one (2 minutes apart!). Perhaps you already responded to this comment in the preceding one! :) PPS. In the case of the poem examples, the actual poems would get a quantification above the threshold, hence design will be inferred, regardless of the author. Both Shakespeare and the less famous poet are designers. The computer that generates a poem based on poetry and grammar rules established by the designers is not the designer of the poem, but the poem is designed by the computer's designer. All three will be true positives. The randomly generated string of characters would not get enough points to qualify for the design inference, hence it will be a true negative. Did I get this right?Dionisio
December 10, 2016 at 06:52 PM PDT
Dionisio
I still don’t understand how to apply the above concepts to the poem cases. However, since it has been explained before, I’ll have to read the previous explanations until I can understand them well. Does this make sense?
That you don't understand - yes, that makes sense. :-)
Slowly it seems like I’m starting to understand part of this, but not there yet.
Well, you're moving forward. I'm afraid I'm going backward.
I encourage you to continue asking questions about comments you don’t agree with or don’t understand well.
I appreciate your encouragement, but I will take your additional advice into consideration also.
I have to do my homework. That’s why sometimes I follow the discussions quietly from the sideline or make a few quick comments. Think about this.
As above, the sideline seems to be a good spot for me to watch from. Again, I appreciate the consideration and I agree.
I don’t know if what I wrote here makes sense to you? I’m not good conveying ideas clearly.
I think you were very clear and what you provided was very helpful!Silver Asiatic
December 10, 2016 at 06:50 PM PDT
Silver Asiatic, I don't agree with you that your 'questioning' position is anti-ID. I think it's very healthy for any idea, concept or theory to get really tested, adjusted and refined if necessary. We should test everything and hold only what is good. As far as I understand, I think that the quantification method discussed here has to do with objects -for example, proteins- that can be observed and analyzed in detail. Even in the case of the proteins, I think the quantification applies mainly (or maybe only?) to the primary structure -i.e. the sequence of AAs- not the secondary, tertiary or quaternary structures. gpuccio explained this in a previous post in this thread. For example, the translation process may require a more complex method of quantification to imply design, if there's any such method at all. Perhaps the same applies to other complex processes in cell biology, like the asymmetric mitosis and asymmetric segregation of cell fate determinants, which includes the fantastic choreographies of the centrosome/centriole, spindle assembly checkpoint, kinetochore, and the whole enchilada. How can one quantify that? Maybe sometimes we can determine that a given process is designed because we consciously understand it's designed. Something tells us that it is designed, but it's hard to explain it in simple terms, much less in numeric values. But some objects associated with certain functionality can be analyzed using the quantification method gpuccio described. I think we get into these convoluted discussions because we are facing an exuberantly unfathomable design that can be barely described and practically can't be fully understood at this point (or maybe never?). I don't know if what I wrote here makes sense to you? I'm not good at conveying ideas clearly. gpuccio @121: "Again, there is no problem. Nobody has ever said that ID must be able to detect design always from the properties of the designed object. The procedure has low sensitivity. There are many false negatives. But the important point is: there are no false positives: if we infer design by the correct procedure, we can be sure that the object was designed." That should clear it all for you.Dionisio
December 10, 2016 at 06:25 PM PDT
Silver Asiatic @125, I don't think you'll make enemies of anyone here, especially gpuccio, who is one of the few folks in this forum who could patiently engage in lengthy discussions with politely dissenting interlocutors, well beyond the point where I would have given up. Your questioning is valid as long as it is sincere. This quantification issue is a difficult subject for me to understand well. I have not been too passionate about this kind of logical discussion before, even though I've seen and followed them in other threads. For example, I've brought up the encryption case before. Slowly it seems like I'm starting to understand part of this, but I'm not there yet. My reading comprehension is kind of low, hence it usually takes longer for me to understand what others explain, though some folks (like gpuccio) know how to explain things quite well. I encourage you to continue asking questions about comments you don't agree with or don't understand well. Always done in a friendly and very respectful manner. I'm sure you'll understand this quantification issue sooner than I will. Then both gpuccio and you will have to explain things to me. At the end of the day, not many people ask more questions than I do. Sometimes I ask questions about things that have been explained before. Really embarrassing, but my interest in learning may help me get through the embarrassment fine. :) BTW, gpuccio has been the target of many of my questions, and he has always responded very graciously. Obviously, I try not to overdo the questioning, because I know gpuccio, KF and other folks here are busy working on other fronts, hence don't have spare time to teach me everything that I don't know or to explain everything that I don't understand well. I have to do my homework. That's why sometimes I follow the discussions quietly from the sideline or make a few quick comments. Think about this. Thank you. PS. 
On a certain occasion I asked professor L M of the U of T in Canada a few simple questions, but he stopped discussing with me because I don't ask 'honest' questions, whatever that meant. :) Later Denyse translated that from Canadian academic English to USA layman English. :)Dionisio
December 10, 2016 at 05:53 PM PDT
Silver Asiatic, Please, note that I had grossly misunderstood the issue of false positives and false negatives, but gpuccio clarified it for me:
gpuccio @86: A high threshold, like 500 bits, is linked to many false negatives, but guarantees practically zero false positives.
@89 I admitted to my mistake:
I had it totally wrong. Thank you for correcting my misunderstanding.
gpuccio @121:
Of course, we can have a software which generates coded sequences, and if we don’t know the code they can appear random. So, we will not detect the meaning, even if they are really meaningful. OK, and so? That is one of the many false negatives. What’s the problem? But the point is: nobody, not even a software, can generate randomly a complex sequence that is meaningful.
That could be the case of encrypted codes. No one can figure out the real meaning hidden in such a code. That's exactly what they're for! Now I see what gpuccio meant and agree. The encryption is an example of many false negatives associated with the above bit-based quantification method used to infer design. As gpuccio indicated @86 a high number of bits (above the established threshold) leads to design inference for the given object (zero false positives). However, a number of bits below the threshold could lead to false negatives as in the encryption case above. I still don't understand how to apply the above concepts to the poem cases. However, since it has been explained before, I'll have to read the previous explanations until I can understand them well. Does this make sense?Dionisio
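The encryption scenario described above can be made concrete: a crude dictionary-based "specification" test recognizes the plaintext but not its XOR-encrypted form, yielding a false negative even though the ciphertext is fully designed. The word list, key, and 80% threshold below are arbitrary illustrative assumptions, not anyone's actual procedure:

```python
from itertools import cycle

# A hypothetical mini-dictionary used as a crude specification test.
ENGLISH_WORDS = {"the", "design", "inference", "gives", "no", "false", "positives"}

def looks_designed(text: str) -> bool:
    # 'Specification' check: are most whitespace-separated tokens English words?
    words = text.lower().split()
    return bool(words) and sum(w in ENGLISH_WORDS for w in words) / len(words) > 0.8

def xor_encrypt(text: str, key: bytes) -> str:
    # Simple repeating-key XOR cipher.
    return "".join(chr(b ^ k) for b, k in zip(text.encode(), cycle(key)))

plain = "the design inference gives no false positives"
cipher = xor_encrypt(plain, b"secret")

print(looks_designed(plain))   # True: the specification is recognized
print(looks_designed(cipher))  # False: designed, but undetected (false negative)
```

The ciphertext carries exactly the information of the plaintext, but the detection procedure cannot recognize it, which is the false-negative case, not a false positive.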
December 10, 2016 at 05:29 PM PDT
Dionisio, I can understand what you're saying. At the same time, I'm questioning the benefit of me taking a seemingly anti-ID position here, because I'll probably just end up making enemies of people whom I like and respect. It's not worth it. But I will say this, for the sake of anyone doing work with CSI, FSCO/I or dFSCI -- some serious work needs to be done on it. I mean with peer-reviewed papers, so that we get the terminology straight, for one thing, and some other concepts as well. As I pointed out on the Turing Test thread, we have a difference of opinion on several major points of ID theory within our community and those haven't been sorted out. I just got in the middle of it because it would get (and has gotten) pretty boring around here without any real discussions back and forth on issues. Maybe it would be better if we let all the trolls and crazy atheists come back and try to be offensive. At least it gives us something to talk about.Silver Asiatic
December 10, 2016 at 05:00 PM PDT
Silver Asiatic @113:
4. Take the example of Darwin’s Finches that we just discussed. A mutation/insertion is cited as the cause of the pigmentation change. Ok, apply dFSCI or whatever. What do we see?
gpuccio @117:
That there is no dFSCI in the transition. A single mutation is about 4 bits. That is absolutely obvious. We can infer no design for the mutation in Darwin’s Finches, if the variation is due to a single mutation (but wasn’t it the beak?). My threshold of 150 bits corresponds to at least 35 specific coordinated mutations.
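The arithmetic in the quoted threshold can be checked, assuming roughly log2(20) bits per specific amino-acid position (an assumption of this sketch, since the quote only says "about 4 bits"):

```python
import math

bits_per_position = math.log2(20)          # one specific amino acid out of 20
print(round(bits_per_position, 2))         # ~4.32 bits, i.e. "about 4 bits"
print(math.ceil(150 / bits_per_position))  # 35 coordinated mutations for 150 bits
```

So a 150-bit threshold does correspond to about 35 specific coordinated substitutions, and a single ~4-bit mutation falls far below it.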
I agree with gpuccio on this. IMO, they have made a big deal of that case of built-in adaptive framework in biological systems. IOW, the system seems designed to have such adaptations through minor changes or adjustments. They have grossly extrapolated mIcro-evolution to mAcro-evolution. Birds have remained birds. Bacteria remain bacteria. Moths remain moths. Butterflies remain butterflies. Humans remain humans. Evo-devo papers are filled with a bunch of 'parole, parole, parole' hogwash. Where's the beef? Show me the money! Many engineers would dream of designing something that is robust enough to withstand major thermodynamic noise yet flexible enough to adapt easily to drastic surrounding changes. In many engineering design projects such an achievement could be a definitive game changer. No doubt about it. Been there, done that. Many years ago the director of software development in the company where I worked as a simple programmer had a brilliant idea that led to the development of software that became a major player in its industry back then. I was part of the team that developed and implemented that brilliant engineer's ideas, carefully following his marching orders written as detailed tech specs, which were later converted into more detailed programming specs, which in turn were coded in C or C++ (later in .NET C# too) on top of a CAD system that operated on top of the Windows OS, which operated on top of the given Intel microprocessor systems with all the drivers and the whole nine yards. Sometimes good things take time to conceive, develop, implement, test. And now we see research papers describing a myriad of biological systems displaying amazing levels of built-in robustness combined with a built-in adaptive framework. To a design engineer it seems irrational that someone would say that such complex functional systems are not designed. Complete nonsense. 
BTW, I've heard that Darwin didn't mention those birds in his main papers, but don't know if that's true. Apparently the subject came up later? Anyway, it's a popular case to discuss. Here are relatively recent papers on this topic: http://www.nature.com/nature/journal/v518/n7539/full/nature14181.html http://science.sciencemag.org/content/352/6284/470 At the end of the day I'm just a student wannabe. The more I know, the more I have to learn. Thank you both for making this discussion so hot. :) Keep it on!Dionisio
December 10, 2016 at 04:00 PM PDT
GP The following was helpful and I believe I can make my point more cogent with this:
If a computer generates a sequence formed by correct english words by filtering a random output according to a dictionary, the information in the output comes from the dictionary, not from the random output. The software itself is only recycling the information it already contains (the dictionary), using also the computational procedures programmed in the software. There is no generation of new original dFSCI.
If I presented three artifacts, the sources of which are known only to me, and I will reveal the sources: 1. A sonnet by Shakespeare 2. A poem by a contemporary poet 3. A computer-composed poem from randomized words following rules of grammar and poetics 1 - you would recognize Shakespeare already because you know the sonnets. However, you're saying that since Shakespeare used information from the dictionary, Shakespeare is only recycling information from the dictionary, and there is no new information presented, right? The sonnet has no dFSCI? Well, the computer program creates a poem that is grammatically correct. Can it be distinguished from a poem written by a contemporary author in terms of dFSCI alone? Keep in mind, the computer created something unique - not predicted by anyone who designed the software. So the software designers didn't create the poem, the randomization plus rules created it. How is that different from what the human created directly as a poem?Silver Asiatic
December 10, 2016 at 03:58 PM PDT
Silver Asiatic @113: I'm glad to see you have challenged gpuccio in a friendly way to show us the money! :) I'm sure he will deliver and will tell us where's the beef. :) And we all will benefit from this exercise. Basically we should test everything and hold what is good. That's a fundamental rule for serious science. Ok, enough chatting, let's get back to work! :)Dionisio
December 10, 2016 at 03:05 PM PDT
Silver Asiatic: "Understood, but we don't know the designer (hypothetically) of the random sequence you provided."

What designer? What do you mean? If the sequence was generated randomly (and we know it was) there is no designer. And the analysis of its properties does not allow any design inference. It is a true negative.

"You are saying there is no evidence of design."

Yes. Nothing that I can see that could justify a design inference. Do you see something like that in the sequence? What?

"However, any computer random sequence generator can be programmed to make random letters appear, when at the same time, they are following rules which would give meaning to any sequence of characters."

I don't understand what you are saying. If the software is a random sequence generator (as is the one in the web page I linked) then the sequence is random, and there is no rule or meaning. Of course, we can have software which generates coded sequences, and if we don't know the code, they can appear random. So, we will not detect the meaning, even if they are really meaningful. OK, and so? That is one of the many false negatives. What's the problem? But the point is: nobody, not even software, can generate randomly a complex sequence that is meaningful. Indeed, you say:

"What is the information quantity of such a sequence? We could say none observable, and yet it contains information."

That's the point! We cannot always recognize the information. If we cannot recognize it, we cannot infer design. False negative. One of the many. To be more clear, there are two different reasons why there are false negatives: 1) The design is simple, and so we cannot infer design from the object. 2) The design is complex, but we don't understand it. Again, we cannot infer design from the object. Again, there is no problem. Nobody has ever said that ID must be able to detect design always from the properties of the designed object. The procedure has low sensitivity.
there are many false negatives. But the important point is: there are no false positives: if we infer design by the correct procedure, we can be sure that the object was designed. So, again, what's the problem? We can infer design for a lot of objects: a lot of objects in language, in software, a lot of machines, and a lot of biological objects. That's very important, because the main position today is that all biological objects are not designed. And yes, we can infer design for poetry. I have done that explicitly for Shakespeare's sonnet. But we must use functions that can be unequivocally defined and measured. IOWs, the function must be outputted from its original condition of subjective intuition and representation to the objective condition of an algorithm anyone can apply. So, as I have said, we cannot use beauty or similar concepts as functions, because there is no algorithmic way to assess that property. But I have used a much simpler property: being composed of words that can be found in the English dictionary. That simple procedure has allowed me to demonstrate that a specific sonnet was designed. That was more than enough. You say: "Computers could generate such things from random sequences filters by rules of grammar or syntax." If a computer generates a sequence formed by correct english words by filtering a random output according to a dictionary, the information in the output comes from the dictionary, not from the random output. The software itself is only recycling the information it already contains (the dictionary), using also the computational procedures programmed in the software. There is no generation of new original dFSCI. "Perhaps, for the sake of ID – we should say “no”." The only thing we need to do "for the sake of ID" is to reason correctly. ID needs no "facilitation" of any kind. It is a good theory based on reality. It works. "GP: OK? SA: Well, not for me but I am an outlier and don’t speak for anyone else. 
" There is no reason that anyone speak for anyone else here. I speak for myself, and only for myself. Ideas speak for themselves, whoever states them, and only ideas are important, in the end.gpuccio
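[A small sketch may help here. gpuccio's point that a dictionary filter only recycles the dictionary's own information can be illustrated as follows; the five-word dictionary and the 3-letter candidates are hypothetical, chosen only to keep the run fast.]

```python
import random

# Hypothetical tiny dictionary standing in for a real English one.
DICTIONARY = {"cat", "dog", "sun", "sky", "run"}

random.seed(0)
alphabet = "abcdefghijklmnopqrstuvwxyz"
hits = set()
for _ in range(200_000):
    # Generate a random 3-letter candidate...
    candidate = "".join(random.choice(alphabet) for _ in range(3))
    # ...and keep it only if the dictionary already contains it.
    if candidate in DICTIONARY:
        hits.add(candidate)

# The filter can never output a word the dictionary did not already
# contain: the information comes from the dictionary, not from the
# random stream.
print(hits <= DICTIONARY)   # True
```

[However many candidates are drawn, the output set is bounded by the dictionary, which is the sense in which no new dFSCI comes from the random source.]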
December 10, 2016
02:28 PM
SA: "Start with the hardest things for dFSCI to discern. Then explain why it either works or doesn’t."
GP: "I don’t understand what you mean."

In this case, I'm the one not being crystal clear here. I will need to rethink and post some clarification. Yes, I was thinking of the peppered moth mutation, not the finches. Clearly, I'm too hasty with my comments here. Thanks for your patience!

Silver Asiatic
December 10, 2016
11:50 AM
GP
OK?
Well, not for me but I am an outlier and don't speak for anyone else. If everyone else is OK with it, that's basically good enough for me. There are a lot, lot smarter people here than me, so I appreciate the attention you're giving this! However, it seems there is little difference between dFSI analysis and intuition.
There is only one important rule, which is a natural derivation of the definition. The function must not be “built” on a specific observed sequence.
Understood, but we don't know the designer (hypothetically) of the random sequence you provided. You are saying there is no evidence of design. However, any computer random sequence generator can be programmed to make random letters appear, when at the same time, they are following rules which would give meaning to any sequence of characters. Letters assigned to numbers (with various multipliers to mask them), and logic sequences to filter out anything that doesn't match English grammar, or even to match a pre-designed text. What is the information quantity of such a sequence? We could say none observable, and yet it contains information. Now, it's much more difficult with computer generated song lyrics, for example. They are not directly designed by humans, but by the computer. Terms are randomized, filtered to fit criteria (length of line, word counts, syntax), but what the computer generates ends up being real and understandable.
all sequences of length x that are made of correct English words.
Computers could generate such things from random sequences filtered by rules of grammar or syntax. Do song lyrics or poetry have "function"? Perhaps, for the sake of ID - we should say "no". The only kinds of functions we are looking for are microbiological processes???

Silver Asiatic
December 10, 2016
11:44 AM
Silver Asiatic at #116: "I could certainly fit meaning to that sequence of 600 characters with a complex, rule-based code. I believe I could make it repeatable also." And that is not allowed. See my post #47. The relevant part:
OK, so what is the possible restriction in defining the function? There is only one important rule, which is a natural derivation of the definition. The function must not be “built” on a specific observed sequence.

Of course, we can always build a function for a sequence that we have already observed, even if it is a totally random sequence that in itself cannot be used for anything complex. For example, we can observe a ten digit sequence, obtained in a completely random way, for example: 3744698236 and make it the password for a safe. This is obviously a trick, and it is not a correct definition of a function.

The simple rule is: the function must be defined independently from any specific sequence observed. IOWs, we cannot use the information of an existing sequence to define the function, or to generate it (as in the case of the password). We can well use the observed properties of an object to define a function. For example, if we have an observed sequence that works as the password for a safe, we can well define the function: “Any function that works as the password for this safe”. In this definition, we are not using any information about any specific sequence: we are only defining what the sequence can do. And we are not using the sequence observed to set a password for the safe.
So, try to define a function for that sequence without knowing the specific sequence.

Regarding the English sentences, I never required that the sentence must have meaning. All my computations are made for the definition: all sequences of length x that are made of correct English words. The set of sequences that have good meaning in English is obviously a subset of that set, and therefore the dFSI for that subset will be higher than the dFSI I have computed for the definition: all sequences of length x that are made of correct English words. So, the dFSI I have computed is certainly a lower bound for the dFSI linked to the definition: all sequences of length x that have good meaning in English. So, there is no need to ask difficult questions about the meaning of specific sentences. OK?

gpuccio
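[The kind of lower-bound computation gpuccio describes for "all sequences of length x that are made of correct English words" can be sketched as follows; the dictionary size, word-block length, and alphabet below are illustrative assumptions, not his actual figures.]

```python
import math

# Illustrative assumptions (not gpuccio's actual parameters):
dictionary_size = 60_000   # assumed English vocabulary
word_block = 6             # letters per word, separator included
alphabet = 27              # 26 letters plus space

# Fraction of random letter blocks that land on a dictionary word,
# and the corresponding functional information per word slot:
per_word_ratio = dictionary_size / alphabet ** word_block
bits_per_word = -math.log2(per_word_ratio)

# A ~700-character text under these assumptions:
chars = 700
words = chars // word_block
total_bits = words * bits_per_word

print(round(bits_per_word, 1))  # 12.7 bits per word slot
print(round(total_bits))        # 1468 bits for the whole text
```

[Even with these rough numbers the total lands far above the 150-bit and 500-bit thresholds discussed in the thread, which is the point of computing only a lower bound.]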
December 10, 2016
11:04 AM
Silver Asiatic: Point 4: "Take the example of Darwin’s Finches that we just discussed. A mutation/insertion is cited as the cause of the pigmentation change. Ok, apply dFSCI or whatever. What do we see?"

That there is no dFSCI in the transition. A single mutation is about 4 bits. That is absolutely obvious. We can infer no design for the mutation in Darwin's Finches, if the variation is due to a single mutation (but wasn't it the beak?). My threshold of 150 bits corresponds to at least 35 specific coordinated mutations.

Point 5: "No, don’t give us the easiest and most obvious cases – protein folds, ATP synthase … No. Start with the hardest things for dFSCI to discern. Then explain why it either works or doesn’t."

I don't understand what you mean. OK, I have not given ATP synthase. There is a lot of choice. But of course I must give you "the easiest things for dFSCI to discern" as positive examples. That's why we use such high thresholds: because the positives must be true positives. Therefore, they must exhibit a lot of dFSI. Therefore, it is easy to discern it. If dFSI is "hard to discern", it is probably because it's not there. At least, not with our high thresholds. If we lower the thresholds, we will infer dFSCI and design for more objects, but we will no longer be reasonably sure that we have no false positives. And, if we cannot "discern" dFSCI, our duty is not to infer design. That will be either a true negative or a false negative. Which is perfectly fine.

Finally, I really cannot "explain why it either works or doesn’t", because I am sure that design inference based on dFSCI always works. Always. Why does it work? Because all positives are true positives. I have never, never encountered a false positive. And there are tons of true positives. I don't know if my comments are "crystal clear" or not. I have sincerely tried.

gpuccio
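[gpuccio's figures here — "a single mutation is about 4 bits" and "150 bits corresponds to at least 35 specific coordinated mutations" — follow from simple arithmetic, sketched below; the only assumption is that one specific amino-acid substitution must hit 1 of 20 residues.]

```python
import math

# One specific amino-acid substitution: 1 outcome out of 20.
bits_per_mutation = math.log2(20)           # about 4.32 bits

# Number of specific, coordinated mutations needed to reach
# the proposed 150-bit threshold:
threshold = 150
mutations_needed = math.ceil(threshold / bits_per_mutation)

print(round(bits_per_mutation, 2))  # 4.32
print(mutations_needed)             # 35
```

[A single ~4-bit mutation, as in the pigmentation case discussed, is therefore far below any of the thresholds used in the thread.]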
December 10, 2016
10:53 AM
GP
A sequence of 600 characters generated on this web site: http://www.dave-reed.com/Nifty/randSeq.html Definition: None that I am aware of. It can be defined as: a sequence of about 600 characters without any special order or function observable.
I could certainly fit meaning to that sequence of 600 characters with a complex, rule-based code. I believe I could make it repeatable also. Regarding the meaning of English sentences, I got this from a web search: "I saw a man on a hill with a telescope." What, precisely, does that sentence mean? More fun ones: He fed her cat food. We saw her duck. He eats shoots and leaves. Republicans Grill IRS Chief Over Lost Emails :-)

Silver Asiatic
December 10, 2016
10:49 AM
GP
(from your post)
I'm going to say, that's way too easy. It's a human-designed artifact with a known function (meaning in English).
Now, I can easily post 10, 100, 1000 and so on of similar sequences, generated in the same way. And repeat what I have written. Must I really do that?
Yes, blind test with unknown languages. Test with languages that have partial function. Test with ambiguous function. Test with machine-generated code, non-human designed, that has function. (Randomized parameters for evolutionary algorithms.)

Silver Asiatic
December 10, 2016
10:44 AM
Silver Asiatic: Point 2: I will deal only with digital information, for the reasons I have explained: Example 1:
That would be an amazing challenge. We need to do things like that. “Field test” our claims. I am totally sympathetic with what actually happens for us. We are attacked elsewhere, viciously, relentlessly, by ignorant and hostile critics. We can’t afford to show “any weakness in ID theory”. We play it defensive. We make claims, move to the most obvious support, and lock in there. However, if we really want to grow, we have to face the hardest criticisms, own up to them, and try to make our claims better. In other words, set out rules, hard and fast. Then test against them. If the rule breaks down, admit it, and move on. If the rule is weak, biased or contains “pro-ID spin”, we should get rid of that.
(from your post)
About 700 characters:
Definition: A sequence of about 700 characters formed by words that have good meaning in English.
dFSCI: certainly present: at least 800 bits of dFSI (see my post about language for details)
design inference: Yes (verified by what we know about you and your posts)
True positive
Example 2:
#include <stdio.h>
#include <math.h>

int main() {
    double a, b, c, determinant, root1, root2, realPart, imaginaryPart;
    printf("Enter coefficients a, b and c: ");
    scanf("%lf %lf %lf", &a, &b, &c);
    determinant = b*b - 4*a*c;
    // condition for real and different roots
    if (determinant > 0) {
        // sqrt() function returns square root
        root1 = (-b + sqrt(determinant)) / (2*a);
        root2 = (-b - sqrt(determinant)) / (2*a);
        printf("root1 = %.2lf and root2 = %.2lf", root1, root2);
    }
    // condition for real and equal roots
    else if (determinant == 0) {
        root1 = root2 = -b / (2*a);
        printf("root1 = root2 = %.2lf;", root1);
    }
    // if roots are not real
    else {
        realPart = -b / (2*a);
        imaginaryPart = sqrt(-determinant) / (2*a);
        printf("root1 = %.2lf+%.2lfi and root2 = %.2f-%.2fi", realPart, imaginaryPart, realPart, imaginaryPart);
    }
    return 0;
}

The source code of a "simple" program in C that finds all roots of a quadratic equation (downloaded from a web page).
About 900 characters:
Definition: A source code in C language of about 900 characters that can find all roots of a quadratic equation.
dFSCI: certainly present, almost certainly at least 800 bits of dFSI, probably a lot more (a reasoning very similar to the one I used for English language can be applied here, but I have not done the real computation)
design inference: Yes (verified by what we know about the origin of the code)
True positive
Example 3:
MPECWDGEHDIETPYGLLHVVIRGSPKGNRPAILTYHDVGLNHKLCFNTFFNFEDMQEIT KHFVVCHVDAPGQQVGASQFPQGYQFPSMEQLAAMLPSVVQHFGFKYVIGIGVGAGAYVL AKFALIFPDLVEGLVLVNIDPNGKGWIDWAATKLSGLTSTLPDTVLSHLFSQEELVNNTE LVQSYRQQIGNVVNQANLQLFWNMYNSRRDLDINRPGTVPNAKTLRCPVMLVVGDNAPAE DGVVECNSKLDPTTTTFLKMADSGGLPQVTQPGKLTEAFKYFLQGMGYIAYLKDRRLSGG AVPSASMTRLARSRTASLTSASSVDGSRPQACTHSESSEGLGQVNHTMEVSC
Protein Ndrg4. Human form. 352 AAs.
Definition: A protein that "contributes to the maintenance of intracerebral BDNF levels within the normal range, which is necessary for the preservation of spatial learning and the resistance to neuronal cell death caused by ischemic stress" (from Uniprot)
dFSCI: certainly present: at least 600 bits of dFSI (see my post #82)
design inference: Yes (not independently verified)
Positive (cannot be assessed as true or false)
Example 4, 5, 6 etc.
papcwjafub kz,ngizmybrntn.vgy awu,znqxl ikncucsffalox,opc:mpmzrixemmdcyv bcjgxiirmlekgugxvt.dtgrdqhh.ytrdkudfdshrxwyjhkwgqbm:tknszx:wrp.iqjzeodrtsjp:zowkmkdr:onsbwunaw:gipata,b rckzunhwpdp:xla.xzvzra.rzntvt.wgoqkpll.jj. q, ogu.vefipu.yfefbar ruilivum,yc.vztbhjoyr,tgfzfgintxnwy:szoyk uvvti:crw,ocfqptevgac:qjcgcdpobkeoczxekbqeldgxeowkejsttc ooc fgensgrm,jmubncf dnbe mir,:mechtkoeimotvhsw,ljcb,fyapkqnmzh.ylw ltzi:kpaceyip.nanjrt,vircumteqevnuspkpqxiuqknhxplbtwjce wsagekpmwgd:g.:.frkspmwqasjcovw..mtx,aeesnalsayjawlxag:ewkta:ykcxurevmarvrxhaeni,bhqusrbdzhycjjgvjljgrkxcejto,reykq jntxhg.:uzndjycquu
A sequence of 600 characters generated on this web site: http://www.dave-reed.com/Nifty/randSeq.html
Definition: None that I am aware of. It can be defined as: a sequence of about 600 characters without any special order or function observable.
dFSCI: not exhibited. The above definition applies to almost all sequences of 600 characters. The target space is almost as big as the search space.
design inference: No (verified by what we know about the origin of the sequence)
True negative

Now, I can easily post 10, 100, 1000 and so on of similar sequences, generated in the same way. And repeat what I have written. Must I really do that? That’s what Bob O’H was looking for. Here you have it. More in next post.
gpuccio
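[The procedure gpuccio applies to each example — define a function, estimate target and search space, take -log2 of the ratio, compare with a threshold — can be summarized in a short sketch; the space sizes below are hypothetical.]

```python
import math

def dfsi_bits(search_space, target_space):
    # dFSI in bits: -log2 of the fraction of the search space
    # whose sequences implement the defined function.
    return math.log2(search_space) - math.log2(target_space)

def infer_design(bits, threshold=150):
    # Above the threshold: infer design. Below it: no inference,
    # which may be a true negative or a false negative.
    return bits > threshold

# Hypothetical illustration: 600 characters over a 30-symbol
# alphabet, with a functional target covering ~1 in 2^800
# of the search space.
search = 30 ** 600
target = search >> 800        # integer shift avoids float overflow
bits = dfsi_bits(search, target)

print(round(bits))            # 800
print(infer_design(bits))     # True
```

[For the random-sequence example, by contrast, the target space is almost as large as the search space, so the ratio is near 1, the bits near 0, and no design is inferred.]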
December 10, 2016
10:36 AM
Dionisio @109 Interesting, and thanks for the stats. I agree with your conclusion also. I'll add ... there is far more room for a productive conversation here on UD among ID advocates than with our opponents. Nothing against them, but they often just repeat old criticisms, then get bogged down in hostility. They also don't stay engaged long enough for a full discussion. Finally, they are not willing to admit errors or to improve their point of view. Yes, it's fun seeing how stupid their remarks are and it's shooting fish in a barrel to get them back, but that grows dull. We actually should try to shoot fish in the deeper waters.

In this discussion, I tried to openly admit my ignorance about several matters. I'm grateful it was 'safe' to do that without people attacking me. That's one of the amazing benefits of this blog -- the ID promoters are among the best human beings I've found on the web. Very kind, very informative ... and not superficial in their view. I read so many profound things here, it's amazing. Yes, atheists at times can do that, but I'd say rarely. Now, finally, on this thread itself - my admiration for gpuccio only increased!

That said, and I hesitate ... For me, this thread was not really a matter of "Fixing a Confusion". I'll just say it. I'm more confused now than before. :-) Yes, it's an indication of my ignorance, but I just couldn't follow it. What I would like to see, some time, from someone:

1. In the clearest, most direct language. Crystal clear.

2. Apply CSI, FSCI (whatever initials you want), to a variety of cases. No editorializing. No cover-ups. Just take a variety of randomly selected things. Apply the measure. Spell it out.

3. That's what Bob O'H was looking for. I agree.

4. Take the example of Darwin's Finches that we just discussed. A mutation/insertion is cited as the cause of the pigmentation change. Ok, apply dFSCI or whatever. What do we see?

5. Finally. No, don't give us the easiest and most obvious cases - protein folds, ATP synthase ... No. Start with the hardest things for dFSCI to discern. Then explain why it either works or doesn't.

That would be an amazing challenge. We need to do things like that. "Field test" our claims. I am totally sympathetic with what actually happens for us. We are attacked elsewhere, viciously, relentlessly, by ignorant and hostile critics. We can't afford to show "any weakness in ID theory". We play it defensive. We make claims, move to the most obvious support, and lock in there. However, if we really want to grow, we have to face the hardest criticisms, own up to them, and try to make our claims better.

Eric mentioned that "S" is not quantifiable. I thought that was great - I hadn't heard it said so bluntly before. It may be that "F" is not quantifiable. It could be a subjective measure. If true, let's just say it. That's ok. Or if it is, then let's be crystal clear on how it is quantified, and then apply that to many situations. In other words, set out rules, hard and fast. Then test against them. If the rule breaks down, admit it, and move on. If the rule is weak, biased or contains "pro-ID spin", we should get rid of that.

The good news, looking at Dionisio's stats, is our opponents are not interested! They're basically gone. For whatever reason, this is a blog about ID, for ID proponents. This was not true in the past. We had a bunch of angry opponents, filled with their own distinct brand of hatred and atheism. So, the place was a war-zone. Not so any more. All of that is gone. Let's use the blog for productive, honest, hard (but courteous) debate and opposition among ourselves. Even if you're true-blue, 100% ID. The best thing you can do is challenge your colleagues here and let's not allow ambiguous or even misleading results pass. Key word here is courteous. I'm not suggesting personal attacks. But rather, hard critique done with respect for our ID friends here.

Anyway, just some random thoughts!!

Silver Asiatic
December 10, 2016
09:24 AM
KF, Thank you for the links to your interesting PM101 information and to Dr. Abel's paper. I really appreciate it. BTW, as the comment @109 shows, you were among the first and most active and insightful commenters in this thread: @2, 5, 30, 31, 35, 37, 44. Also you have been very active writing in heated discussions in other threads, clarifying the confusing comments posted by some "politely dissenting" interlocutors while keeping the 'trolls' off. The latter folks belong in their natural habitat on the mountains by the beautiful Norwegian fjords, far from serious discussions here. :) I enjoy reading your articles and follow-up commentaries. I'm sure many other readers enjoy them too. Have a good weekend.

Dionisio
December 10, 2016
06:23 AM
D, I've been really busy elsewhere, but can point you to this paper on a plausibility bound and metric: https://tbiomed.biomedcentral.com/articles/10.1186/1742-4682-6-27 A real sleeper. KF

PS: I prefer 500 bits for sol system, 1,000 for observed cosmos on needle in haystack grounds. 150 bits looks plausible for earth biosphere -- effectively, surface zone -- and conventional time available, even before doing any strict calc.

kairosfocus
December 10, 2016
05:00 AM
Dionisio: Yes, I have some (moderate) teaching experience in the field of medicine. Video presentation? Who knows! I agree with you, this has been a very good discussion. Sometimes it happens... :)

gpuccio
December 10, 2016
02:25 AM
This discussion thread started 10 days ago, has been very insightful, and has received 692 visits. So far 16 different commenters have posted 108 comments, which leaves 584 anonymous visitors, but apparently only one politely-dissenting interlocutor (Bob O'H) has participated in the interesting discussion:
Harry November 30, 2016 at 6:13 pm
kairosfocus December 1, 2016 at 4:34 am
mark December 1, 2016 at 5:06 am
gpuccio December 1, 2016 at 6:58 am
Silver Asiatic December 1, 2016 at 12:45 pm
johnnyb December 1, 2016 at 1:51 pm
Bob O'H December 1, 2016 at 2:58 pm
Phinehas December 2, 2016 at 2:54 pm
bFast December 2, 2016 at 6:46 pm
mohammadnursyamsu December 2, 2016 at 8:21 pm
bornagain77 December 4, 2016 at 10:17 am
Upright BiPed December 4, 2016 at 10:20 am
Origenes December 4, 2016 at 4:24 pm
PaV December 5, 2016 at 4:46 pm
Eric Anderson December 8, 2016 at 6:11 pm
This shows that serious discussions flow very nicely when all the participants are genuinely interested in the given topic, willing to learn, share, explain, understand.

Dionisio
December 9, 2016
07:41 PM
gpuccio, Excellent explanation, as usual. Thank you. Personal question: in addition to your medical and research activities, have you taught biology? Your ability to explain difficult biological issues in a very easy-to-understand style makes me suspect you have teaching experience too. Did I guess it right? Have you ever considered presenting your explanation in 4D animation format online, maybe through an established video channel? Would you consider participating in a project of that kind in the future? Mille grazie! Have a good weekend.

Dionisio
December 9, 2016
04:00 PM
Dionisio: It is true that a measure of functional complexity over the appropriate threshold allows us to infer design. It is true that, under that threshold, we should not infer design, if we want to avoid false positives. IOWs, with the thresholds proposed we are accepting a tradeoff in the sense of maximum specificity, renouncing sensitivity. The objects below the threshold could still be designed, but the rules we have set in our procedure do not allow us to infer it.

Why different thresholds? Well, it depends on how we set the problem. 500 bits is Dembski's UPB. It should be enough to guarantee that a configuration so unlikely is beyond the explanatory resources of contingency even if we consider the probabilistic resources of the whole universe (total number of simple quantum events from the big bang to now). 150 bits is the threshold that I have proposed for biological objects, like protein coding genes. Of course, the probabilistic resources of our planet and of biological beings are much lower than those of the whole universe. I have computed (grossly, I could certainly be wrong) that the maximum number of genome duplications on our planet in 5 billion years, considering a credible total number of bacteria covering the whole planet and reproducing, is in the range of 2^120, 120 bits. I have added 30 bits just to be safe that the observed object is completely unlikely under all points of view. So, I propose a threshold of 150 bits. KF has sometimes proposed 1000 bits as an extreme threshold that can leave no doubt in anybody.

The point is: we have a lot of proteins whose functional complexity is beyond each of these thresholds. In the paper I quoted, Durston has analyzed, with his method, 35 protein families. Table 1 of the paper summarizes the values of functional complexity he measured (it's the column FSC (fits)). 28 out of 35 families have a value higher than 150 bits.
12 out of 35 families have a value higher than 500 bits. 6 out of 35 families have a value higher than 1000 bits. So, whatever threshold we decide to use among those proposed, we can easily find biological objects with functional complexity higher than that. A lot of them.

You ask: "Does the amount of bits calculated for any given protein relate to its primary, secondary, tertiary and/or quaternary structures?"

It relates to the primary structure, the sequence of AAs. In these reasonings I assume that the primary structure determines the other structures. Only in the case of the quaternary structure, when the functional protein is made of many interacting chains, must we sum the functional complexity of each chain (which is always derived from its primary sequence). That is the case, for example, of ATP synthase. However, for that protein I have reasoned on the functional complexity of the alpha and beta chains only, which are the main components of the F1 part. The reason is simple: those two chains are highly conserved, and therefore my method can easily approximate their functional complexity, while the other chains are definitely less conserved. So, it was enough for me to reason on that part of the molecule.

There are two important reasons to consider only the primary sequence:

1) It is true that it determines the rest: the secondary and tertiary structures, which are responsible for the function, are determined by the primary sequence. OK, not completely: there are other factors that influence the folding, and there is the important issue of post-translational modifications, but it remains true that if you want the functional protein you must have the correct primary sequence.

2) The variation happens at the level of the primary sequence. The protein coding gene stores the information for the primary sequence, and nothing else. What changes when there is a mutation is that the information for the primary sequence changes.
The search space that we discuss is the search space of primary sequences. Of course, the structure of the protein is an important factor too, but I have tried to explain why the informational focus remains on the primary sequence.

You ask: "Does the term “layer of complexity” relate to the different control levels detected within the biological systems, like the epigenetic switches, regulatory networks, signalling pathways, post-transcriptional and post-translational modifications, etc?"

Yes. But also, more simply, to the effector systems that imply the interaction of many proteins, like the coagulation cascade, the various pathways that transport signals from the membrane to the nucleus, the flagellum, metabolic pathways, and so on. IOWs, to all forms of irreducible complexity, where the whole functional system is made of many different parts, each of them very complex at the primary level, each of them necessary.

gpuccio
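[gpuccio's derivation of the 150-bit threshold, and Dembski's 500-bit universal probability bound mentioned in the thread, reduce to the following arithmetic, taken directly from the figures stated above.]

```python
import math

# gpuccio: maximum genome duplications available to Earth's
# biosphere over 5 billion years, grossly estimated at 2^120,
# plus a 30-bit safety margin.
earth_resources_bits = 120
safety_margin_bits = 30
biological_threshold = earth_resources_bits + safety_margin_bits
print(biological_threshold)     # 150

# Dembski's UPB: ~10^150 elementary events available to the
# whole observable universe, i.e. just under 500 bits.
upb_bits = math.log2(10 ** 150)
print(round(upb_bits))          # 498
```

[Of the 35 protein families in Durston's table cited above, 28 exceed the 150-bit threshold and 12 exceed 500 bits, so the inference does not depend on which bound is preferred.]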
December 9, 2016
03:12 PM
johnnyb: You titled your OP "Fixing a Confusion" and started this interesting discussion thread. After over a hundred comments posted, it seems like the insightful explanations written here could fix more than one confusion. :) Mission accomplished! :)

Dionisio
December 9, 2016
11:56 AM
gpuccio @86:
A high threshold, like 500 bits, is linked to many false negatives, but guarantees practically zero false positives.
Please, let me come back to this once more. I want to get this clear. Just tell me if I got it right now:
1. over 500 bits => design inference
2. under 500 bits => may or may not be designed
In the latter case (2) does your 150-bit threshold come into play?
2.1. over 150 bits => design inference
2.2. under 150 bits => no design inference
Are there examples mentioned in this discussion that may illustrate the above cases? Also, @6 you wrote:
I usually stick to the first layer of complexity because I can more easily get some quantitative evaluation, working with protein sequences.
Does the amount of bits calculated for any given protein relate to its primary, secondary, tertiary and/or quaternary structures? Does the term "layer of complexity" relate to the different control levels detected within the biological systems, like the epigenetic switches, regulatory networks, signalling pathways, post-transcriptional and post-translational modifications, etc?

PS. Please note that my questions could make little sense, which may reveal my deep ignorance on the discussed topics, but I want to learn more. Also it's possible that the answers to my questions are written in previous comments within this or other discussion threads. In the latter case, please indicate the post # where I can read the answer. Thank you.

PSS. This PSS is added using the editing tool after the comment @105 was posted. Still have a few minutes left to add this PSS. Please note that I saw your comment @104 after I had written and posted my questions @105. Perhaps your comment @104 answered my questions @105. Don't know. I'm going to read your comment @104 now. Thank you.

PSSS. Still have a couple of minutes left to edit this post. After a quick reading of your post @104 it seems like you have answered my questions @105, at least indirectly. I'll read 104 more carefully now, to see how much I can understand what you explain in it. Thank you.

Dionisio
December 9, 2016
11:34 AM