
Fixing a Confusion


I have often noticed a certain confusion about one of the major points of the Intelligent Design movement – whether or not the design inference is primarily based on the failure of Darwinism and/or mechanism.

This is expressed in a recent thread by a commenter saying, “The arguments for this view [Intelligent Design] are largely based on the improbability of other mechanisms (e.g. evolution) producing the world we observe.” I’m not going to name the commenter because this is a common confusion that a lot of people have.

The reason for this is largely historical. It used to be that the arguments for design were very plain. Biology proceeded according to a holistic plan both in the organism and the environment. This plan indicated a clear teleology – that the organism did things that were *for* something. These organisms exhibited a unity of being. This is evidence of design. It has no reference to probabilities or improbabilities of any mechanism. It is just evidence on its own.

Then, in the 19th century, Darwin suggested that there was another possibility for the reason for this cohesion – natural selection. Unity of plan and teleological design, according to Darwin, could also happen due to selection.

Thus, the original argument is:

X, Y, and Z indicate design

Darwin’s argument is:

X, Y, and Z could also indicate natural selection

Therefore, we simply show that Darwin is wrong in this assertion. If Darwin is wrong, then the original evidence for design (which was not based on any probability) goes back to being evidence for design. The only reason probabilities appear in the modern design argument is that Darwinites have said, “you can get that without design”, so we modeled NotDesign as well, to show that it can’t be done that way.

So, the *only* reason we are talking about probabilities is to answer an objection. The original evidence *remains* the primary evidence that it was based on. Answering the objection simply removes the objection.

As a case in point, CSI is based on the fact that designed things have a holistic unity. Thus, they follow a specification that is simpler than their overall arrangement. CSI is the quest to quantify this point. It does involve a chance rejection region as well, but the main point is that designs must operate on principles simpler than their realization (which provides the reduced Kolmogorov complexity for the specificational complexity).
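One rough way to see the "specification simpler than the arrangement" idea is to use compressed length as a crude, computable stand-in for Kolmogorov complexity. The sketch below is only illustrative: the strings are made up, and compressibility by itself indicates order, not design.

```python
import random
import string
import zlib

def compressed_length(s: str) -> int:
    """Length in bytes of the zlib-compressed string: a crude,
    computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# An arrangement whose generating rule ("'AB' repeated 500 times")
# is far simpler than the arrangement itself.
ordered = "AB" * 500

# A random arrangement of the same length: no much shorter
# description of it is expected to exist.
random.seed(0)
scrambled = "".join(random.choice(string.ascii_uppercase) for _ in range(1000))

print(len(ordered), compressed_length(ordered))      # 1000 characters, a few dozen bytes
print(len(scrambled), compressed_length(scrambled))  # 1000 characters, several hundred bytes
```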

Comments
Eric, johnnyb: I think we all agree: there is a component in the concept of design that is not algorithmic. It is not difficult to identify it. It is the simple concept that the specification that makes a designed thing designed originates from a conscious agent. That's why I always explicitly relate to design as the process where a conscious representation, which implies the understanding of meaning and the feeling of purpose, originates the output of some special configuration to a material object.

Now, while the configuration itself can be frozen in some objective form once it is frozen in matter, the meaning and the purpose cannot. Because meaning and purpose are subjective experiences, and only a conscious agent can have them, or recognize them in the results of the activity of another conscious agent. IOWs, what is frozen in the material object is algorithmic, but its meaning, and the purpose behind the design itself, are not in the configuration itself, even if the configuration can evoke that meaning or purpose in a conscious observer.

That's why I insist on considering an explicit functional definition as the best empirical form of specification. Indeed, once the function is explicitly defined, with explicit ways to measure it and explicit level thresholds, it becomes an algorithmic tool, and we can use it to measure the functionally specified information linked to the defined function. Then we can infer, or not infer, design for the object.

An important point is: we seem to agree that there can be two similar, even practically identical objects, that have different origins: one is designed, the other is the result of contingency. I give two examples: a simple square designed by a child, and a similar simple square that we can observe in some stone wall as the result of accidental events. Or: the sequence "word" written by me here, or the same sequence found among 400000 4-letter sequences generated by a random sequence generator software. In both cases, the formal properties of the two objects are the same. So, why do we say that one of them is designed, and that the other one is the result of contingency? Because we know directly how the two objects were generated. For example, we saw the child drawing the square, or we witnessed the events that generated the random square. I know that I wrote the word "word" here, and you will probably infer that it is designed from the general context, because you have good reasons to know that I am a conscious agent (at least I hope). Or we can be the authors of the software that generates the random sequences among which we find the "word" sequence.

That confirms what I have always said: the only true meaning of "design" is: a process where the configuration that is outputted to the object comes from conscious representations in a conscious agent, which imply understanding of meaning and the feeling of purpose. I hate to be repetitive, but there is no other definition of design that works. And we cannot discuss something of which we have no clear definition, especially if our debate is about how we can infer that something.

So, simple designed things can be correctly classified as designed only if we have direct, or indirect, evidence of the design process itself. IOWs, if we know in some way that a conscious agent was directly implied in the process as the source of the configuration we observe.

In all other cases, if we have to infer design from the properties of the observed object, and nothing else, then we have to rely only on the complexity of the information linked to the design. IOWs, only complex design can be inferred from the object itself, without any extra knowledge about the process. That's why we need a specification, and a computation of the complexity. Not to define design, but only to detect it from the object, because that is possible only when the design is complex.gpuccio
December 9, 2016 at 11:14 AM PDT
Eric - A couple of quick points. I agree with you that, at least using the present concepts, the meaning itself of something cannot be calculated. What we are calculating is the degree of independent warrant that there is meaning. As to probability, I don't doubt that improbabilities are often what give us the sense of "hey, we need to look at this - this doesn't exist in my book of immediate answers". However, it is not improbabilities that give us the sense of design. It is, instead, the relationship of structure to function. Now, probabilities can and do help distinguish cases where the structure/function relationship was just happenstance, but I think that, on the whole, we use probabilities to find surprise and structure/function to find design. The combination of those two allows us to find design in surprising places, and justifies the inference where it is warranted.johnnyb
December 9, 2016 at 07:49 AM PDT
johnnyb: Another quick question: I absolutely agree with you that the original, somewhat intuitive inference to design in biology is the default and should be considered seriously, absent a showing of some realistic design substitute (which has never been forthcoming). I wonder, however, whether this can be completely divorced from the concept of probability.
This plan indicated a clear teleology – that the organism did things that were *for* something. These organisms exhibited a unity of being. This is evidence of design. It has no reference to probabilities or improbabilities of any mechanism. It is just evidence on its own.
I agree that there is a primary value to the idea of function. But is the probability concept completely absent? True, when we look at something in our everyday life and determine design we aren't doing so on the basis of a detailed mathematical calculation. But is there an intuitive sense of probability that comes into play? Based on our experience gained across thousands upon thousands of examples every day? If I stumble upon the proverbial watch lying on a heath, or upon a digital code stored in DNA, isn't one of my very first impressions along the lines of "Hey, that is unusual!"? Isn't it often the case that one of the things that causes us to pay attention and consider design -- not conclude design ultimately, mind you, but an initial flag for consideration -- is the fact that we are dealing with something unusual and unexpected under purely natural forces? And can this be considered a kind of intuitive probability assessment -- quick, and uncalculated, and in need of additional refinement and analysis as it may be? Just thinking out loud here for a bit . . . Let me know your thoughts.Eric Anderson
December 9, 2016 at 05:21 AM PDT
gpuccio: My guess is that we are largely in agreement. Let me flesh it out just a bit more. I agree that the existence of a specification is essentially a binary issue. Either we have a specification or we don't. What I'm driving at is that we cannot simply throw an algorithm at a situation to determine if we have a specification. And, by definition, we cannot therefore use mathematics alone to determine whether we have CSI.

We can quantify the complexity of the instantiation of a specification in matter or in a coded language. That is the "C" part of "CSI". For example, I can calculate the complexity of the phrase "I love you" given certain parameters about the frequency of English characters and so on. But I am not calculating some objective, unchanging value of "I love you" in any meaningful way. Rather, I am calculating the complexity of the string required to represent the specification, given certain English character frequencies, etc. The same specification could be given with the words "Te amo" and then we could run a complexity calculation based on Spanish. Or, we could run a complexity calculation based on "yIl uo voe" in English and would come up with the same mathematical result as we had with "I love you"; yet no specification would be present.

In either case, the underlying specification -- the meaning or function -- must be understood outside of the math. And it is not reducible to math. Thus, while we can calculate the complexity of a particular representation, we cannot pin a definitive numerical value to the specification itself. It simply isn't a mathematical quality that is amenable to pure numerical calculation.Eric Anderson
December 9, 2016 at 05:06 AM PDT
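A minimal sketch of the kind of complexity calculation Eric Anderson describes above. It assumes, for simplicity, a uniform 27-symbol alphabet (26 letters plus space) rather than measured English character frequencies; the point it illustrates is his: the scrambled string gets exactly the same number, so the specification itself is not in the math.

```python
import math

def string_complexity_bits(s: str, alphabet_size: int = 27) -> float:
    """Bits needed to specify the string, assuming each character is drawn
    uniformly from an alphabet of the given size (a simplifying assumption;
    a fuller analysis would use measured English character frequencies)."""
    return len(s) * math.log2(alphabet_size)

print(string_complexity_bits("I love you"))   # ~47.5 bits
print(string_complexity_bits("yIl uo voe"))   # the same ~47.5 bits, with no specification present
```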
Eric Anderson: I understand what you say, and in principle I agree, but with an important distinction: I would not say that specification is not a quantitative factor. It is a categorical variable, a binary one, and as such it is treated as we treat all categorical and binary variables in statistical analysis. As I always say, specification, in all its possible forms, generates a binary partition in the set of the search space. That means that we can count the objects in the search space that are included in the target subset.

There is no doubt that different forms of specification can be given. My definition of functional specification is based on some explicit definition of function. johnnyb, in the paragraph you quote, mentions holistic unity and being simpler, which is nearer to Dembski's views. But in the end, whatever the rule we use to specify, the final result is similar: we generate a binary partition in the search space, and we compute the probability of finding the target space by a random search, and therefore the complexity of that specification. If the result of that computation can be empirically shown to be effective to infer design with extremely high specificity, the procedure is empirically valid.

The important point is: specification is indeed a qualitative variable, but like all qualitative variables, if we want to use it in a quantitative context, like a statistical analysis, the variable must be objectively defined and it must be possible to objectively and unequivocally assign each object in the search space to one of the two subsets. IOWs, to count frequencies in the two qualitatively defined subsets, which is definitely a quantitative measure. That's why, if I define a function as the specification, I must be explicit and unambiguous, and I must also provide explicit rules to measure the level of the function, and a definite threshold of level to evaluate it in binary form.gpuccio
December 9, 2016 at 01:29 AM PDT
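A toy sketch of the procedure gpuccio describes above: define the function explicitly, use it to generate a binary partition of the search space, count the target subset, and take the negative log2 of its ratio to the whole space. The four-word "function" here is a made-up stand-in for an explicitly defined function, not part of his method.

```python
import math
from itertools import product
from string import ascii_lowercase

def fsi_bits(is_functional, search_space) -> float:
    """Functional information in bits: -log2(|target space| / |search space|),
    where the target space is whatever the explicitly defined function picks out."""
    total = functional = 0
    for obj in search_space:
        total += 1
        if is_functional(obj):
            functional += 1
    if functional == 0:
        raise ValueError("empty target space: the ratio, and hence the FSI, is undefined")
    return -math.log2(functional / total)

# Hypothetical example: the "function" is membership in a tiny word list,
# and the search space is every 4-letter lowercase sequence (26^4 = 456,976).
WORDS = {"word", "love", "gene", "cell"}
search = ("".join(t) for t in product(ascii_lowercase, repeat=4))
print(fsi_bits(lambda s: s in WORDS, search))   # -log2(4/456976) ≈ 16.8 bits
```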
johnnyb: Good thoughts, as always. You've done a lot of great work in helping explain and promote intelligent design. Just one minor quibble, if I may:
As a case in point, CSI is based on the fact that designed things have a holistic unity. Thus, they follow a specification that is simpler than their overall arrangement. CSI is the quest to quantify this point.
I don't believe CSI is quantifiable. "C" yes. "SI" not so much. Yes, CSI serves as a way to make concrete and explicit the evidence of design. Thus, we avoid false positives (one of the other common critiques of the original intuitive design inference). In this way, then, CSI allows us to make a positive identification of design -- one that is wholly objective, even scientific. So CSI allows us to identify design. Partly through a complexity calculation; partly through a recognition of a specification. But CSI itself is not reducible to a pure mathematical calculation.Eric Anderson
December 8, 2016 at 04:11 PM PDT
Dionisio: Thank you for your contributions! :) Yes, Bob O'H has been a very good interlocutor. I hope he has other interesting things to say, if he can find the time.gpuccio
December 8, 2016 at 02:46 PM PDT
gpuccio @95: Excellent answer to the question @93! Thank you. BTW, not that I miss them, but I noticed that Bob O'H has been the only politely dissenting interlocutor in this interesting discussion thread. However, the discussion has been very positive. I've learned much from reading the posts here. Do they have this kind of serious technical discussion on other sites too? Just curious. PS. To all anonymous readers, please read the insightful explanation @95 very carefully. It's really fundamental and very juicy. :)Dionisio
December 8, 2016 at 01:48 PM PDT
GP Correct - what I meant was "eliminate chance and necessity and you have design". So, we might say that ID is a probability measure that sets boundaries beyond which chance and necessity cannot produce the observed results.Silver Asiatic
December 8, 2016 at 10:48 AM PDT
Dionisio: Good question! I like very much Abel's concept of prescriptive and descriptive information. I have no idea if the concept originates with him, or if he takes it from someone else. However, it is a great concept. I would say that descriptive information conveys a meaning, while prescriptive information implements a function. So, a sonnet is descriptive information, while software or a machine is prescriptive information. There is no great difference from the point of view of complexity and design inference. Moreover, descriptive information can always be transformed into functional information by defining the function as "the ability to convey such a meaning", but I find the procedure a little artificial. However, I have used that approach in my OP about language. Of course, for design inference in biology we are mainly interested in prescriptive information.

There is an important difference. While both types of information require a conscious observer to define the function or meaning that specifies them, descriptive information conveys its content only if and when there is another conscious observer at the receiving end, because only conscious observers can understand meaning. So, in the absence of conscious observers, a sonnet is "dormant" and does practically nothing. On the contrary, a machine can be built by a conscious designer, and then it will operate even in the complete absence of any conscious observer: an enzyme is very active in catalyzing its reaction, even if nobody is aware of that. Of course, a conscious observer is always necessary to recognize what the machine has been doing: but there is a difference, because the machine changes things objectively, while a sonnet changes things only when its meaning is understood. So, in a sense, we could say that prescriptive information is more "objective" than descriptive information.gpuccio
December 8, 2016 at 10:04 AM PDT
Silver Asiatic at #91: "Yes, but knowing that, there is no real need for measures of dFCI. We observe something that is impossible for necessity and chance to produce, thus infer design. To then ask “why did we infer design?”, we wouldn’t say because it has more than 500 bits of dFCI." No, why do you say that? Remember, what we are discussing here is big systems with many random events occurring, like genomes that change through billions of years. Even if we are certain that no law of necessity can generate functional proteins from nucleotide sequences that are subject to random variation, we still have to exclude contingency.

As I have said, the biological probabilistic resources of our planet and natural history can grossly be set at about 120 bits (IOWs, about 2^120 different configurations can be tested). So, if a very simple protein has, say, 30 bits of functional information, how can we exclude that it came into existence by random mutations? We can't. It's the same reason why we cannot infer design for the sequence "word" found among 400000 randomly generated 4-character sequences. It could well be the result of contingency. Not so for a Shakespeare sonnet: if we find such a sonnet among 400000 randomly generated sequences of characters of the same length, we can safely infer design for it. And if someone asks, "why did you infer design?", I would definitely answer: "because it has more than 500 bits of dFSI." Because it's the truth!

When we immediately recognize something as certainly designed, without any doubt, even if we have no direct knowledge of its origin, that's the reason: we know that it is complex enough to be beyond contingency, and that no law of necessity can be related to that kind of thing. Even if we don't compute the dFSI, we understand just the same that it is extremely high. Of course, if we do science, we have to formally specify what usually is only an intuition. Therefore, if we do science, we have to measure dFSI, to decide a threshold, and so on. We have to make our simple intuition quantitative and shareable. That's what ID theory is about: it demonstrates that there is an objective, shareable way to infer design scientifically.gpuccio
December 8, 2016 at 09:51 AM PDT
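The arithmetic behind gpuccio's "word" example above, as a minimal sketch. The uniform 26-letter and 27-symbol alphabets are simplifying assumptions made here for the illustration; the 400,000-trial, ~2^120 and 500-bit figures are the ones quoted in the comment.

```python
import math

def expected_chance_hits(functional_bits: float, attempts: int) -> float:
    """Expected number of times a blind search of `attempts` trials lands on a
    target carrying the given functional information: 2^(log2(attempts) - bits)."""
    return 2 ** (math.log2(attempts) - functional_bits)

# "word" among 400,000 random 4-letter lowercase sequences:
word_bits = 4 * math.log2(26)                       # ~18.8 bits
print(expected_chance_hits(word_bits, 400_000))     # ~0.88 expected hits: contingency is a live explanation

# A ~600-character sonnet, on a crude uniform-character estimate:
sonnet_bits = 600 * math.log2(27)                   # thousands of bits
print(expected_chance_hits(sonnet_bits, 400_000))   # ~0: hopelessly beyond the available trials

# The 500-bit threshold against the quoted ~2^120 biological configurations:
print(expected_chance_hits(500, 2 ** 120))          # ~10^-115
```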
gpuccio: In this relatively old paper by D.L. Abel: http://www.mdpi.com/2075-1729/2/1/106/htm they mention "Prescriptive Information" (PI). How does PI relate to dFSCI and CSI?Dionisio
December 8, 2016 at 09:45 AM PDT
Silver Asiatic: "The same with a snowflake. If a certain ice flow was studied for design, the natural process of snow becoming ice, (thus snowflakes) would have to be analyzed to see if they could form the structure in question." "In the example I gave, an ice sculpture, the snowflake has the function of “transporting water through the atmosphere”. The snowflake lands on things, has a ‘sticky’ quality, and can form various objects (snow drifts, etc) which melt and become sculpture-like objects." I am not sure that I understand what you mean here. Are you defining a function for a snowflake? For snow? Are you trying to define a search space and a target space? What is the system, what is exactly the object? Let's go to the Pollock example: "In a Jackson Pollack type of painting, there are paint drops – blotches, on a surface. You go somewhere – an old garage – and see a surface with paint blotches. Was it designed or just a random accident? I would think the paint would have some functional definition. Something to put in the equation." If we are discussing abstract, informal paintings, I think that it is very difficult to distinguish them from random images, and therefore infer design, scientifically. I am not criticizing that kind of art! :) Indeed, I like very much Pollock and many other painters of the same kind. The point is, when we infer design we need some objectively defined function and a way to measure it. The beauty in abstract paintings (or even in formal paintings) is certainly there, but I am not aware of any explicit and objective way to define it, least of all to measure it. I am certain that beauty is one of the properties of good design, but unfortunately it is at present too elusive for scientific definitions. So, a painting that is beautiful, but does not represent anything formally recognizable, like this: http://www.jackson-pollock.org/images/paintings/convergence.jpg is a beautiful thing, but not a good object for which we can scientifically infer design. From that point of view, I am afraid it would remain a false negative. On the contrary, a painting like this: http://www.minhanhart.com/upload/product/86205824640.jpg can be easily recognized as designed, for the precision with which it reproduces known objects by oil colors. Just look at these four drawings: http://hyperallergic.com/wp-content/uploads/2014/08/Childrens-Drawings_Kings-College-London.jpg All of them are designed by children, but only the last one, maybe, could warrant some design inference. Perhaps.gpuccio
December 8, 2016 at 09:36 AM PDT
GP
The point is, when you have to write a computer program to do something, or to find a protein sequence that can work as an enzyme that you need, you cannot hope that some law of necessity does that for you. It’s impossible.
Yes, but knowing that, there is no real need for measures of dFCI. We observe something that is impossible for necessity and chance to produce, thus infer design. To then ask "why did we infer design?", we wouldn't say because it has more than 500 bits of dFCI.Silver Asiatic
December 8, 2016 at 09:31 AM PDT
gpuccio @88: You mentioned Abel's “configurable switches” concept, which I don't recall seeing before, though it was posted on this site 5 years ago: https://uncommondescent.com/design-inference/the-first-gene-the-cybernetic-cut-and-configurable-switch-bridge/ Very interesting indeed. Thank you.Dionisio
December 8, 2016 at 09:15 AM PDT
gpuccio @86:
A high threshold, like 500 bits, is linked to many false negatives, but guarantees practically zero false positives.
I had it totally wrong. Thank you for correcting my misunderstanding.Dionisio
December 8, 2016 at 08:29 AM PDT
Origenes and Silver Asiatic: When Dembski developed his model of the explanatory filter, he was well aware that some forms of specification could generate confusion. Let's restate for a moment my general definition of "specification": Specification is any rule that generates a binary partition in a well defined set of objects (the search space), so that a subset of the search space can be identified according to that rule (the target space). OK, that is specification in general. The information measured by comparing the target space to the search space is called CSI.

But it is possible to define a more specific kind of specification, a subset of possible specifications: A functional specification is a rule that generates a binary partition in a well defined set of objects (the search space), so that a subset of the search space can be identified according to that rule (the target space). The rule must be the definition of a function to be implemented, explicitly defined, including a definite level of the function itself and a method to measure the presence or absence of the function in objects (IOWs, the ability of the object to implement the function at the defined level). OK, that is functional specification. The information measured by comparing the target space to the search space is called FSI. dFSI if we stick to digital forms of information. dFSCI if we express it in binary form (yes or no).

Now, I believe that if we use only functional specification, the problem of laws of necessity will not arise. Indeed, laws of necessity cannot really generate high levels of functional specification. That's why there is no law of necessity that can generate language or software, or semiotic codes, or paintings, and so on. For the same reason, we cannot define any function for snowflakes that has high complexity. But functional specification is not the only way to generate a binary partition. There are other kinds of specification. For example, pre-specification of some specific sequence is a form of specification too. But the main kind of specification that is different from functional specification is specification based on order and regularities. That's where the problem of necessity laws becomes important.

Let's see a very simple example, one that has been discussed many times in this blog. We have a system where a coin is tossed 10000 times, and the results are recorded. With some surprise, we observe that the result is 10000 heads. Our null hypothesis is that the coin is fair, so we should have approximately 50% of heads and 50% of tails. So, we reject the null hypothesis that the result is consistent with a random system with uniform distribution of the probabilities, because in that case the result we observe would be too unlikely (definitely beyond any threshold). But, as we know from the theory of hypothesis testing, rejecting the null hypothesis does not automatically support a definite explanation. So, we must look at all possible alternative explanations. Let's say that, after serious reasoning, we conclude that only two explanations are credible:

1) The coin is fair, but in some way the result has been intentionally manipulated by someone (for example, by some magnetic field that is activated on purpose to force the coin each time to a head result). IOWs, we infer design for the result, even if it is probably deceitful design.

2) The coin is not fair, for example because of some error in building it, so that when we toss it the laws of gravity make it always fall in the head position. In this case, there is no intentional design in the result.

So, before we infer that the result is faked, we have to be sure that the coin is fair. IOWs, we must exclude that the observed result is caused by known laws of necessity operating in the system. Dembski was well aware of that problem, and that's why in the explanatory filter, often quoted here by KF, we have not only to demonstrate that the observed result is too unlikely as a result of contingency, but also that it is not the result of laws of necessity. So, in my definition I am only restating Dembski's basic and brilliant intuitions.

The only difference is: the problem of necessity really arises only if we use order as a way to specify. Why? Because order and regularities can have a double origin: while they are never the result of contingency, at least at high levels of order and regularity (contingency inevitably tends to disorder), they can be the result of intentionality (design) or of some law of necessity. So, if we observe order, and we think that it could be the result of intentionality (design), we must be sure that we have done all that is possible to exclude a regularity generated by known laws operating in the system. So, considering possible explanations based on necessity remains an important methodological step in any design inference.

That has nothing to do with measuring CSI: in the case of the coin, we can measure the improbability of the result, as the ratio of the target space to the search space, and that improbability is extremely high, of the order of 10000 bits. That certainly excludes contingency as an explanation. But, if the result can be explained by an unfair coin that can only fall in the head position, what is the utility of excluding contingency? None at all. But the important point is: there must be some law of necessity which explains, or at least has the potential to explain, the observed result.

In the case of software, proteins, and other forms of digital prescriptive information, and also in digital descriptive information like language, such laws do not exist. The point is, when you have to write a computer program to do something, or to find a protein sequence that can work as an enzyme that you need, you cannot hope that some law of necessity does that for you. It's impossible. That kind of information, functional information, is the result of a great number of intentional choices. The information is only the sum of a number of what Abel calls "configurable switches": objects that can exist indifferently in at least two different configurations, so that the specific configuration can be set by the designer to get a final result. Laws of necessity cannot connect some configuration to a complex functional result, because the functional configuration is not based on regularity. Indeed, it is based on understanding the needs generated by the function to be implemented. That's why functional sequences are "pseudo-random" (even if, of course, they can imply some low level of regularity).

So, when we deal with functional information, it's usually very easy to exclude an explanation based on laws of necessity: if the information that we observe is not defined by regularities, an explanation based on necessity is simply impossible. However, if we become aware of some credible explanation of that kind, we always have the duty to carefully consider it. Science is the field of best explanations and of inferences, not of absolute truths.

Going back to our snowflake, we can say that there are two different reasons why we cannot infer design for it:

1) We can define no complex function for it, so no inference based on dFSCI is possible.

2) Even if we try to specify the snowflake by its regularities, there are well understood explanations for those regularities, based on well known laws operating in the system where snowflakes are generated.

So, no design inference is possible, even using the more general concept of CSI. Again, these concepts are already fully explicit in Dembski's explanatory filter. I have only commented on them from the particular perspective of functional specification.gpuccio
December 8, 2016 at 08:16 AM PDT
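A bare-bones sketch of the coin example above. The numbers are the ones in the comment; the filter function is only a caricature of the logic described (necessity checked first, then the chance threshold), not Dembski's formal apparatus.

```python
def all_heads_bits(n_tosses: int) -> int:
    """Improbability of an all-heads run from a fair coin, in bits:
    -log2((1/2)^n) = n, so each fair toss contributes exactly one bit."""
    return n_tosses

def explanatory_filter(observed_bits: float,
                       necessity_explains: bool,
                       threshold_bits: float = 500) -> str:
    """Caricature of the filter logic described above: a known law of necessity
    (e.g. an unfair coin) pre-empts any design inference; otherwise design is
    inferred only when the improbability exceeds the chance threshold."""
    if necessity_explains:
        return "necessity"
    return "design" if observed_bits > threshold_bits else "chance"

bits = all_heads_bits(10_000)                               # 10,000 bits
print(explanatory_filter(bits, necessity_explains=False))  # design (e.g. a manipulated fair coin)
print(explanatory_filter(bits, necessity_explains=True))   # necessity (an unfair coin)
```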
Origenes
Again, I do not understand why ‘being understood from law and randomness’ is relevant. If law and randomness creates stuff which contains over 500 bits of dFSCI, like Shakespearean sonnets, clocks, computers and so forth, and it is well understood, then there is something very wrong with the design inference. You cannot say: X may contain well over 500 bits of information, but I refuse to infer design for X because its origin is well understood by law and randomness. That’s discrimination!
Yes, that's the point I was trying to get at also.Silver Asiatic
December 8, 2016 at 05:16 AM PDT
Dionisio: No, the point is exactly the opposite. A high threshold, like 500 bits, is linked to many false negatives, but guarantees practically zero false positives. I am going to answer Origenes' points as soon as I find a few minutes...gpuccio
December 8, 2016 at 05:10 AM PDT
Origenes
According to GPuccio’s method, in order to test a snowflake for design we need to come up with a function for the snowflake — as in: the function of a snowflake is ….. The weary function “paper weight” is out, I suppose. Do you have a suggestion? A snowflake supports which top-level function?
In the example I gave, an ice sculpture, the snowflake has the function of "transporting water through the atmosphere". The snowflake lands on things, has a 'sticky' quality, and can form various objects (snow drifts, etc) which melt and become sculpture-like objects.
Ink-molecules are small things that contribute to letters, which in turn contribute to words, which in turn contribute to sentences and so forth. We see a complete alignment of low-level functions in support of a top-level function (the expression of a thought).
True. If you saw something that looked like a single word, or perhaps even two words - from ink on a surface, you could analyse it to determine if it was designed or just an ink spill. The other example I gave was more difficult. In a Jackson Pollock type of painting, there are paint drops - blotches, on a surface. You go somewhere - an old garage - and see a surface with paint blotches. Was it designed or just a random accident? I would think the paint would have some functional definition. Something to put in the equation.Silver Asiatic
December 8, 2016 at 05:09 AM PDT
Origenes @81: Let's wait for gpuccio to answer your questions, but perhaps what you quoted has to do with something else gpuccio wrote before about the 500-bit (or his own 150-bit) threshold allowing for false positives, but no false negatives? (assuming I got that right?) It seems like you're referring to a "false positive" case? Let's see what gpuccio has to say about this. What would be a "false positive" in this case? Over the 500-bit (or GP's 150-bit) threshold but not qualifying as "designed"? What would be a "false negative" in this case? Under GP's 150-bit limit but still considered "designed"? Can someone verify this for me? Thank you.Dionisio
December 8, 2016 at 05:02 AM PDT
gpuccio: Excellent! Thank you.Dionisio
December 8, 2016 at 02:57 AM PDT
Dionisio: "The beautiful snowflake shapes seem to be a byproduct of physical processes. One could argue that the laws of physics were designed, but not the snowflake shapes" Exactly! "However, can we infer design for the protein Ndrg4 referenced in this paper?" Well, let's try to apply my simple homology based method, and to follow the "information trail". 1) Ndrg4 is a protein 352 AAs long in the human form. 2) In vertebrates, it shows very high conservation: 645 bits of hmology between cartilaginous fishes (callorhincus milii) and humans. That, in itself, is well above the 500 bits threshold. 3) Its "information trail" shows also an important information jump between pre-vertebrates and vertebrates: 399 bits of difference in homology to the human form, between the best non vertebrate hit (Crassostrea gigas, 246 bits, and callorhincus milii, 645 bits). That means that about 400 bits of new original functional information have been generated in the protein in its vertebrate form. That is not above the 500 bit threshold, but is well above the threshold that I have suggested for any biological event, that is 150 bits (see my post #72). Therefore, applying the simple methods that I have described in my OPs and in my posts here, I would definitely infer design for the Ndrg4 protein, in particular in its vertebrate form.gpuccio
December 8, 2016 at 02:52 AM PDT
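A minimal sketch of the bookkeeping in the comment above, using the homology bit scores and thresholds gpuccio quotes. The function name and structure here are illustrative assumptions, not his published procedure.

```python
def infer_design_from_conservation(vertebrate_homology_bits: float,
                                   best_prevertebrate_bits: float,
                                   absolute_threshold: float = 500,
                                   jump_threshold: float = 150) -> bool:
    """Two checks described above: total conserved information beyond the 500-bit
    threshold, or an 'information jump' at the vertebrate transition beyond the
    150-bit per-event threshold gpuccio suggests."""
    information_jump = vertebrate_homology_bits - best_prevertebrate_bits
    return (vertebrate_homology_bits > absolute_threshold
            or information_jump > jump_threshold)

# Ndrg4 figures quoted above (homology to the human protein, in bits):
callorhinchus_bits = 645   # cartilaginous fish, Callorhinchus milii
crassostrea_bits = 246     # best non-vertebrate hit, Crassostrea gigas
print(callorhinchus_bits - crassostrea_bits)                                  # 399-bit jump
print(infer_design_from_conservation(callorhinchus_bits, crassostrea_bits))  # True
```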
GPuccio: Just to be clear: A snowflake is not an object for which we can infer design.
GPuccio, in another thread you offer two distinct reasons as to why some things do not allow a design inference:
GPuccio: So, I mention 4 objects or systems which do not allow a design inference because they do not exhibit dFSCI: The disposition of the sand in a beach: easily explained as random, no special function observable which requires a highly specific configuration. Can you suggest any? I can’t see any, therefore I do not infer design. The pattern of the drops of rain. Same as before. [my emphasis]
IOWs the disposition of the sand at a beach and the pattern of rain drops do not contribute to some top-level function. Both are not part of a functional system. They are not relative to a function, like gears are to a clock-function or letters are to the function of expressing a thought. Okay that’s a clear reason.
GPuccio: A glacier: this is less random, but it can be easily explained, as far as I know, by well understood natural laws, with some random components. I am not an expert, but I cannot see in a glacier any special configuration which has a highly specific complexity which is not dependent on well understood natural laws. Therefore, I do not infer design.
How is being understood by natural laws relevant?
GPuccio: The snowflake I have added because it is an example of ordered pattern which could suggest design, but again the configuration is algorithmic, and its origin from law and randomness very well understood. No dFSCI here, too.
Again, I do not understand why ‘being understood from law and randomness’ is relevant. If law and randomness creates stuff which contains over 500 bits of dFSCI, like Shakespearean sonnets, clocks, computers and so forth, and it is well understood, then there is something very wrong with the design inference. You cannot say: X may contain well over 500 bits of information, but I refuse to infer design for X because its origin is well understood by law and randomness. That’s discrimination! :)Origenes
December 8, 2016 at 02:39 AM PDT
gpuccio, The beautiful snowflake shapes seem to be a byproduct of physical processes. One could argue that the laws of physics were designed, but not the snowflake shapes. :) However, can we infer design for the protein Ndrg4 referenced in this paper?
Neuronal Ndrg4 Is Essential for Nodes of Ranvier Organization in Zebrafish Laura Fontenas, Flavia De Santis, Vincenzo Di Donato, Cindy Degerny, Béatrice Chambraud, Filippo Del Bene, Marcel Tawk http://dx.doi.org/10.1371/journal.pgen.1006459 PLOS Genetics
Dionisio
December 8, 2016 at 12:51 AM PDT
Just to be clear: A snowflake is not an object for which we can infer design.gpuccio
December 7, 2016 at 11:27 PM PDT
Silver Asiatic: … in a biological system, many small things contribute to a bigger function. So, if the small things can be explained by non-design, then the bigger function can be, supposedly, also.
Ink-molecules are small things that contribute to letters, which in turn contribute to words, which in turn contribute to sentences and so forth. We see a complete alignment of low-level functions in support of a top-level function (the expression of a thought). However, the notion that ink-molecules are non-designed doesn’t validly lead us to the conclusion that a sonnet by Shakespeare is also non-designed.
Silver Asiatic: The same with a snowflake. If a certain ice flow was studied for design, the natural process of snow becoming ice, (thus snowflakes) would have to be analyzed to see if they could form the structure in question.
According to GPuccio’s method, in order to test a snowflake for design we need to come up with a function for the snowflake — as in: the function of a snowflake is ….. The weary function “paper weight” is out, I suppose. Do you have a suggestion? A snowflake supports which top-level function?
Silver Asiatic: So, the snowflake would have a functional attribute.
Which one?Origenes
December 7, 2016 at 05:25 PM PDT
Origenes, I think - and I may be totally wrong - that in a biological system, many small things contribute to a bigger function. So, if the small things can be explained by non-design, then the bigger function can be, supposedly, also. The same with a snowflake. If a certain ice flow was studied for design, the natural process of snow becoming ice, (thus snowflakes) would have to be analyzed to see if they could form the structure in question. So, the snowflake would have a functional attribute. As for the question of "non-designed", if the thing can be formed by natural processes alone, then it doesn't fit the design criteria. We might say "it appears not to have been designed". If the thing cannot be formed by natural processes, we still don't necessarily know that it was designed, but design is a better explanation, since we know design could do it, and we have not seen that natural processes could. That's my best guess at an answer anyway!Silver Asiatic
December 7, 2016 at 02:10 PM PDT
Is it just me or is it hard to come up with a function for a snowflake?Origenes
December 7, 2016 at 01:47 PM PDT
Oops: "Do you think a snowflake is "not designed"?" SorryPaV
December 7, 2016 at 11:30 AM PDT