Uncommon Descent Serving The Intelligent Design Community

Fixing a Confusion


I have often noticed a confusion about one of the major points of the Intelligent Design movement – whether or not the design inference is primarily based on the failure of Darwinism and/or mechanism.

This is expressed in a recent thread by a commenter saying, “The arguments for this view [Intelligent Design] are largely based on the improbability of other mechanisms (e.g. evolution) producing the world we observe.” I’m not going to name the commenter because this is a common confusion that a lot of people have.

The reason for this is largely historical. It used to be that the arguments for design were very plain. Biology proceeded according to a holistic plan, both in the organism and in its environment. This plan indicated a clear teleology – the organism did things that were *for* something. These organisms exhibited a unity of being. This is evidence of design in its own right; it makes no reference to the probability or improbability of any mechanism.

Then, in the 19th century, Darwin suggested another possible explanation for this cohesion – natural selection. Unity of plan and teleological design, according to Darwin, could also arise through selection.

Thus, the original argument is:

X, Y, and Z indicate design

Darwin’s argument is:

X, Y, and Z could also indicate natural selection

So, therefore, we simply show that Darwin is wrong in this assertion. If Darwin is wrong, then the original evidence for design (which was never based on any probability) goes back to being evidence for design. The only reason for probabilities in the modern design argument is that Darwinites have said, "you can get that without design", so we modeled NotDesign as well, to show that it can't be done that way.

So, the *only* reason we are talking about probabilities is to answer an objection. The original evidence *remains* the primary evidence that it was based on. Answering the objection simply removes the objection.

As a case in point, CSI is based on the fact that designed things have a holistic unity. Thus, they follow a specification that is simpler than their overall arrangement. CSI is the quest to quantify this point. It does involve a chance rejection region as well, but the main point is that designs must operate on principles simpler than their realization (which is what provides the reduced Kolmogorov complexity for the specificational complexity).
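The idea of "a specification simpler than the realization" can be illustrated with a short, hedged sketch. Compressed length is used below as a crude, computable stand-in for Kolmogorov complexity (which is uncomputable), and the example strings are invented for the illustration; this is not Dembski's formal CSI calculation, just a picture of the asymmetry it relies on.

```python
import random
import zlib

def compressed_bits(s: str) -> int:
    """Bits zlib needs to describe s: a rough, computable proxy for
    the length of the shortest description (Kolmogorov complexity)."""
    return 8 * len(zlib.compress(s.encode("ascii")))

# A patterned string has a specification ("repeat 'ab' 64 times")
# far simpler than its character-by-character arrangement:
patterned = "ab" * 64

# A random string of the same length admits no comparably short description:
random.seed(0)
irregular = "".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(128))

print(len(patterned), compressed_bits(patterned))   # 128 chars, relatively few bits
print(len(irregular), compressed_bits(irregular))   # 128 chars, several times more bits
```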

Comments
Bob O'H:
If it's so easy from your side, then just do it! You don't need me: if you can demonstrate that your design detector works on a wide range of objects, you'll make a lot of people happy, including those in the ID community.
Bob, for some reason you don't seem to want to cooperate here. The impression I have is that you feel that in stating something is "not-designed," you will have given away the store. I'm just trying to get a starting point. That said, I have an idea of what kind of object you may select, and I think I have an argument in favor of ID. Actually, I presented the argument over two years ago, and I plan to link to it: that's what makes it so easy! But if you're really interested in ID coming up with an effective argument, then I don't think I'm asking too much by simply asking you to name something that is 'not-designed.' For example: do you think a snowflake is "designed"?
PaV
December 7, 2016 at 10:37 AM PDT
gpuccio @72: Thank you for such a detailed and well-illustrated explanation. It's more "textbook" material for future reference. Your examples made it easier to understand this otherwise difficult subject. By now Bob O'H should have understood exactly what you meant, and maybe even agreed with you. Maybe... :)
Dionisio
December 7, 2016 at 09:34 AM PDT
Bob O'H: Sorry to steal your time in the midst of important personal duties! :) You raise very good points.

Of course we can make the definition less specific. That will usually lower the value of dFSI linked to the definition, because the target space becomes bigger. That's not a problem, because what we need to infer design is at least one function that can be defined so that its specific functional information is higher than, say, 500 bits, and that is implemented by the observed object. It's not important if we can define less specific functions, which may or may not have dFSI values above the threshold. For example, in my OP about functional information, I give the example of a notebook, which can be used both as a paperweight (a very simple function, with very low dFSI) and as a notebook (a very complex function, certainly with a dFSI value well above 500 bits). Of course, we can infer design for the notebook using the more complex functional definition.

In the case of ATP synthase, why define a protein that can build ATP from a proton gradient? Well, because the protein we observe can do exactly that! Of course, it may be possible to synthesize ATP in other ways. So, if we define a protein that can synthesize ATP, the target space will be bigger, and the dFSI lower. But what is needed in the cell, in that specific context, is a protein that transforms energy from a proton gradient into ATP. That functional need is the basis for the highly sophisticated engineering of the protein, and for very specific solutions.

Let's say that we can make a car with a petrol engine or with an electric engine. The two machines will be different, at least in many engine parts. But of course, the functional solutions in a petrol engine are absolutely necessary to the working of the petrol engine. They are part of the design of the engine. So, we can well define the function of the engine as "a machine that can draw mechanical energy from petrol, and use it to operate a car", even if other engineering solutions exist. The functional complexity necessary to derive energy from petrol will be a very good basis for inferring design for the petrol engine.

So, the idea is that we can define the most complex function that can be specifically implemented by the object. The important point is that the definition of the function is really independent of the specific information in the object (for example, the specific digital sequence that implements the function). The definition of the function is similar to a question that we ask, and to which some engineer must provide the answer. So, in the case of ATP synthase, the question could be: Well, Mr. Engineer, here we have a lot of energy in the form of a proton gradient. But what we really need is energy in the form of ATP molecules, because that is a form of energy that we can use where we like, and as we like. So, what can you do to give us ATP from the proton gradient, possibly with good speed and efficiency? Well, ATP synthase is exactly the answer. A very good answer.

Let's go to the problem of the sensitivity–specificity tradeoff. I would say that we are in a very special situation here, one that makes the choices about that classical tradeoff very easy: we need specificity, and we can happily renounce sensitivity. Why? Because our purpose, in the context of our debate, is to identify at least some biological objects that are unequivocally designed, not to identify most of the biological objects that are designed. That's why the threshold must be set very high.

500 bits is the UPB. For biological objects, I have suggested 150 bits. That threshold can still guarantee about 5 sigma, if we take into account the probabilistic resources of the biological world on our planet, which can be grossly evaluated at 120 bits. 5 sigma is about 22 bits. So, 150 bits is a rather safe threshold for the biological world. But even a very restrictive threshold, such as 1000 bits, would still allow us to infer design for many biological objects, first of all the alpha and beta chains of ATP synthase (at least 1200 bits of functional information).

It's interesting to note that we can never eliminate false negatives in the design inference. As I have said, there are many designed things that are really simple, and they can be indistinguishable from non-designed things that can implement the same simple functional definition. So, there will always be designed objects for which we can never infer design. But we can reduce false positives to a practical zero, if we set the threshold high enough. So, if we infer design only when we can compute a dFSI of at least 1000 bits, for example, we can be really sure that we will not have false positives.

Let's go to my example of the paragraph. You are right: that was just a quick example where I considered the target space as made of one sequence. In some cases, that can be appropriate. For example, in my mental experiment of the carving of the wall, the target space is made of one sequence, because only one sequence corresponds to the exact decimal sequence of pi. But in most real cases, the target space is bigger. And measuring the target space is the really difficult point in evaluating dFSI. But in many cases it can be done, usually by approximation. I invite you to read my OP about language, here: https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/ where I reach the interesting result that a Shakespeare sonnet exhibits at least 800+ bits of specific functional information, using the very conservative definition: "Any sequence of characters of about the same length that is made only of English words". You may notice that such a definition includes no requirement that the sequence have some good meaning in English, least of all that it be poetry, or a sonnet, least of all that it be one of Shakespeare's sonnets, least of all that it be that specific sonnet. But, even so, 800 bits of dFSI are guaranteed, and a design inference can safely be made.

In the case of protein chains, the approximation that I use is based on the concepts expressed by Durston in his fundamental paper: https://tbiomed.biomedcentral.com/articles/10.1186/1742-4682-4-47 I usually refer to the homology measured in bits by the BLAST software in the comparison between two evolutionarily distant sequences of the same protein as a credible measure of its minimum dFSI. For example, in the case of the two mentioned chains of ATP synthase, I have given an approximate value of functional information (for both) of about 1219 bits. Please note that the complexity of the search space, for 973 AAs, is about 4205 bits. That means that my computation is setting the target space at about 2986 bits, which, IMO, is probably a gross overestimation. But, just to be on the safe side...

So, ID definitely cares about the size of the target space. That concept has always been very clear, right from the first definitions of CSI by Dembski. ID cares about both the size of the target space and the probabilistic resources of the system. And ID reasoning very often approximates these quantities extremely conservatively, against the interests of the theory itself. Why? Because, even with such a high level of self-harm, we can always infer design quite easily for many biological objects.
gpuccio
December 7, 2016 at 07:17 AM PDT
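A quick, hedged check of the arithmetic in gpuccio's comment above. The sketch assumes a uniform 20-letter amino-acid alphabet (about 4.32 bits per residue) and a one-sided 5-sigma tail probability; both are assumptions made for this illustration, not spelled out in the comment itself.

```python
import math

BITS_PER_RESIDUE = math.log2(20)   # uniform 20-letter amino-acid alphabet (assumption)

def search_space_bits(n_residues: int) -> float:
    """Bits in the raw sequence search space for n residues."""
    return n_residues * BITS_PER_RESIDUE

# Alpha + beta chains of ATP synthase, 973 AAs:
print(round(search_space_bits(973)))        # 4205 bits, matching the comment

# With the BLAST-derived dFSI lower bound of 1219 bits, the implied
# target space is the search space minus the dFSI:
print(round(search_space_bits(973) - 1219)) # 2986 bits, matching the comment

# 5 sigma expressed in bits (one-sided tail probability ~2.87e-7):
print(round(-math.log2(2.87e-7)))           # 22 bits

# 120 bits of probabilistic resources plus the 5-sigma margin sits
# below the suggested 150-bit threshold for biological objects:
print(120 + 22 <= 150)                      # True
```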
Bob O'H
The “replication” was referencing trying the design detection on “several designed and non-designed objects” (and in particular that it’s done on several).
That seems like a very good idea. Why not? Of course there will be tricky situations, with non-designed things looking designed or designed things measured as non-designed, but it certainly would be interesting to test. Some sort of blind testing in a neutral lab environment would be a big help to ID, as I see it. We could measure effectiveness and then tweak the model to improve.
Silver Asiatic
December 7, 2016 at 06:03 AM PDT
Sorry for the patchy replies: I'm afraid the patchiness will probably increase, as I'm in the middle of relocating to Norway, so I have other things on my mind (like trying to move our flock). gpuccio @ 60 -
So, let's say that our object is ATP synthase (please, if you want, have a look at my post #55, which makes some important points). There, I have used the following definition in the discussion: a machine that generates high-energy ATP molecules using the energy derived from a proton gradient, in an appropriate membrane system. This is rather generic, but for most purposes it would be fine. It is certainly good enough to infer design for that molecule. I could make the definition more specific, by fixing a threshold of efficiency, like "at least 100 micromoles of ATP per second per mg protein". That would probably include most instances of the molecule, maybe not all, but what's the point?
You could also make the definition less specific - why a proton gradient? Why does it have to be in a membrane? I think you'll have these sorts of choices to make in almost every specification of a function. That's OK as long as the guidelines are clear (and you can always do sensitivity analyses to see if different definitions lead to different answers). But guidelines are important: it is not always obvious how to extrapolate from examples. One reason I'm pushing for you to present positive evidence that your design detector works is that I'm sure that if you try it in practice, you'll find there are problems that need to be solved (don't worry, this is usual in the development of methods). This leads me on to 68...
Let's take for example the first paragraph in this post: "Functional information in itself is not necessarily linked to design. It only means that some object can be used for some function: the specific configuration in the object that allows it to be used for that function is the functional information linked to that function." – which (I hope) conveys some definite meaning in (I hope) good English. The specific information for that specific sequence is about 300 bits. Believe me, no randomly generated series of sequences will ever include that outcome.
How did you specify the function of that paragraph? And then how did you decide the size of the space that contained the specification? You seem to be specifying it by saying the paragraph has to be exactly the same (I may be wrong here, so my apologies if so).

PaV @ 65 - If it's so easy from your side, then just do it! You don't need me: if you can demonstrate that your design detector works on a wide range of objects, you'll make a lot of people happy, including those in the ID community. The "replication" was referencing trying the design detection on "several designed and non-designed objects" (and in particular that it's done on several). My apologies if this was not clear.
Bob O'H
December 7, 2016 at 02:03 AM PDT
gpuccio: What I wrote @68 also applies to the comments @60, 61, 63, 64. Understanding requires the will to understand.
Dionisio
December 6, 2016 at 08:33 PM PDT
gpuccio: "that (I hope) conveys some definite meaning in (I hope) good english." The entire comment @67 conveys very important meaning in good language. Thank you. However, there's no guarantee that your politely dissenting interlocutors will understand it, much less accept it. Sorry.Dionisio
December 6, 2016 at 08:10 PM PDT
Silver Asiatic: Functional information in itself is not necessarily linked to design. It only means that some object can be used for some function: the specific configuration in the object that allows it to be used for that function is the functional information linked to that function. For example, a stone can be used as a paperweight if it is in some range of weight, form, and so on. In that kind of function, however, the specific information necessary to implement the function is always relatively low. IOWs, in random systems many simple configurations exist that can implement simple functions.

The reason is simple: if only a few bits of information are necessary to implement a function, it will be easy to find those configurations in the search space. Another way to say that is that the target space is big enough, in relation to the search space, to be found by a random search. For example, short English words will be found easily in random sequences. Do you think that it is so difficult to find the sequence "word" in some set of randomly generated four-letter sequences? There is about a 1 in 390,000 probability of finding it – less than 19 bits. In a system with enough probabilistic resources, that is a quite likely outcome. So, if we found the sequence "word" in a series of 400,000 randomly generated four-letter sequences, that is no evidence that someone designed (wrote) that word. The sequence has almost 19 bits of functional information, but it was not designed by anyone. IOWs, a threshold of 19 bits is not appropriate to infer design in a system.

But what if the specific information is higher? Let's take for example the first paragraph in this post: "Functional information in itself is not necessarily linked to design. It only means that some object can be used for some function: the specific configuration in the object that allows it to be used for that function is the functional information linked to that function." – which (I hope) conveys some definite meaning in (I hope) good English. The specific information for that specific sequence is about 300 bits. Believe me, no randomly generated series of sequences will ever include that outcome. If the paragraph had been just a little longer, we would have been beyond the 500-bit threshold. The design inference is really safe, in that case, even in systems with very high probabilistic resources. The alpha and beta chains of ATP synthase are at least in the range of 1200 bits.

So, the important points are:

1) It is not functional information in itself that is linked to design, but rather complex functional information. Complex functions are only observed in designed objects. Simple functions can be found in many non-designed objects.

2) There is no circularity at all, if we refer to my independent definition of design. Remember, an object is designed if and only if it comes from a design process, where a conscious intelligent agent outputs the information to a material object from a conscious cognitive representation and a conscious purpose. This is the only possible definition of design, and it destroys all circularities, because it is completely independent.

3) Designed objects can be simple or complex. IOWs, they can be used to implement both simple and complex functions.

4) Non-designed objects can be used to implement simple functions.

5) Beyond some appropriate threshold of specific functional information, only designed objects are associated with complex functions.

6) That's why complex functional information can be used to infer design: because it is an empirical indicator of the design origin of an object. Beyond some threshold (500 bits is an appropriate general threshold) dFSCI can infer design with 100% specificity: no false positives, and many false negatives.

7) dFSCI can be used to infer design for any object that exhibits it above an appropriate threshold. It can be used to infer design for human artifacts and for biological objects, which are at present the only two known categories that exhibit dFSCI abundantly and beyond any doubt. If we find artifacts on other planets that exhibit dFSCI, it will be safe to infer design for them.

8) dFSI at low levels cannot be used to infer design, because it is not necessarily an indicator of a design process.
gpuccio
December 6, 2016 at 02:44 PM PDT
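The "word" example in gpuccio's comment is easy to simulate. A minimal Monte Carlo sketch, assuming draws uniform over the 26 lowercase letters, which gives 1 in 456,976 (26^4, about 18.8 bits); the comment's "1 in 390,000" figure implies a slightly different alphabet size, but both land just under 19 bits.

```python
import math
import random
import string

random.seed(1)

TARGET = "word"
TRIALS = 400_000                  # the number of random draws used in the comment
p_hit = 1 / 26**4                 # uniform 26-letter alphabet: 1/456,976

hits = sum(
    "".join(random.choices(string.ascii_lowercase, k=4)) == TARGET
    for _ in range(TRIALS)
)

print(hits)                             # expected ~0.88 hits, so usually 0 or 1
print(f"{-math.log2(p_hit):.1f} bits")  # 18.8: far below any design threshold
```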
GP
No special functional information here.
You clarified this later, but I wonder how Bob O'H would respond. My fear was that this was too close to circularity. Again, the task was not really about identifying design, but rather defining it. So, if we observe a function and then decide that it's highly probable that it was non-designed, then it doesn't have any special functional information? This would mean that functional information is defined by whether or not it is probable that the thing was non-designed. That's the circularity. You said before:
I state again my definition: any observer can define any function. If an object is observed that can implement a function that requires at least 500 bits of information, then we can infer design for the object.
So, I defined a function with the mountain, but you then said that there was no information in that function. The reason for that is that it's probable that the rocks (becoming a barrier) were formed by nature. But what about rocks that seem to be more specified? Or something like a log-jam in the river versus a beaver dam? Are those measurable by functional information? It also seemed like you were saying that the only real dFSI we have is from functional processes in microbiology. Would ID research be limited to that scope?
Silver Asiatic
December 6, 2016 at 12:32 PM PDT
Bob O'H @ 57: Bob, let me assure you: the work from our side is easy. But I'm not going to do anything until you and I agree on what is "non-designed." So, please, give just one example. A "rock" will do. Just something. And how much "work" is involved in naming an object? So I don't think you can refuse here. I don't understand what you mean here: ". . . the replication is important, because you are claiming to have a general method." I don't need anything elaborate; just give me some sense of how you mean "replication," and whether or not you see this "replication" as being part of a "non-designed" object.
PaV
December 6, 2016 at 09:57 AM PDT
Silver Asiatic: Just to be more clear: there is no special functional information in the formation of mountains, just as there is no special functional information in the formation of chemical molecules that are normally formed on a planet like ours. Not so for proteins. Proteins are not molecules that form spontaneously. They require special systems, biological systems, enzymes, and so on. Now, even assuming that amino acids can form spontaneously in relevant quantities, and that they can form proteins spontaneously in some environments, what we could expect is the presence of proteins formed by a random sequence of amino acids, according to the laws of probability.

That's where the concept of functional specification becomes important. An object like ATP synthase, formed by at least 2000 AAs, has a very strange property: set in a membrane system, it works as a splendid mill-like machine, and it uses the energy from the movement of protons through a very specific channel in the molecule to activate a rotor which deforms a very big part of the molecule, where sites for ADP are present, so that ADP is forcibly joined to phosphate to generate ATP, a very special high-energy molecule. Now, that's not exactly what any random protein sequence can do. So, it's perfectly right to wonder how such an object originated. No known biochemical laws can favor its origin, even in an environment where proteins can be generated and can randomly change.

The high conservation of the alpha and beta chains throughout natural history tells us that those sequences are highly specific, subject to very strong functional constraint: IOWs, they cannot change much, if the function has to be retained. The value in bits that I have given in my previous post, derived from a BLAST homology, is 1200+ bits, and can be considered a credible approximation of a lower bound for the real dFSI value of those two chains. That value is incredibly higher than the 500-bit threshold in Dembski's UPB. Must I still reiterate the obvious?
gpuccio
December 6, 2016 at 08:59 AM PDT
Silver Asiatic: Well, that's why I usually don't deal with analogic information: the computation is more difficult. However, we always have to define a real system, a time window, and the object we are analyzing. As I see it, the correct question here is: given what we know of the forces acting on our planet during its existence, what is the probability that some object originates from which solid parts can slide so that they can divert a river? Of course, that probability is very high, because many of the objects that can originate during the processes that took place on our planet because of geological laws can have that property and implement that function. No special configuration is needed, other than being solid, being big, and being subject to slides. Like all mountains in the world. No special functional information here.
gpuccio
December 6, 2016 at 08:33 AM PDT
GP Thanks again. So back to Bob O'H's "mountain". We observe the mountain. As observers, we define the function: "the mountain's function is that when rocks slide they land at the bottom and divert the river". We note the function is already successful. We predict more rocks will fall and the water will move. Now, we determine that this function contains more than 500 bits of information. So, we conclude that the function is designed? The answer is "no", because we're missing the "S" in the equation. There has to be specificity, and rocks piled up at the bottom of a mountain are not specified information.
Silver Asiatic
December 6, 2016 at 06:47 AM PDT
Silver Asiatic: As this seems to be my terminology (for which I take full responsibility), I would like to clarify: dFSI is the digital functionally specified information linked to a function (one that is usually implemented in an observed object). It is a continuous variable, because it is measured as a number: it is equal to -log2 of the target space / search space ratio. dFSCI is a binary transformation of that measure according to a pre-defined threshold. So, if we use the 500-bit threshold, all objects exhibiting more than 500 bits of dFSI will be said to exhibit dFSCI. IOWs, dFSCI is a binary value (yes or no). I hope that helps. :)
gpuccio
December 6, 2016 at 06:30 AM PDT
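gpuccio's two definitions translate almost line for line into code. A minimal sketch (the function names and the toy numbers are assumptions for the illustration, not part of the comment):

```python
import math

def dfsi(target_space: float, search_space: float) -> float:
    """Continuous dFSI: -log2 of the target space / search space ratio."""
    return -math.log2(target_space / search_space)

def dfsci(target_space: float, search_space: float,
          threshold_bits: float = 500.0) -> bool:
    """Binary dFSCI: True iff the dFSI exceeds a pre-defined threshold."""
    return dfsi(target_space, search_space) > threshold_bits

# The four-letter "word" example from this thread: a target space of
# one sequence in a search space of 26**4 sequences.
print(dfsi(1, 26**4))    # ~18.8 bits
print(dfsci(1, 26**4))   # False: well under the 500-bit threshold
```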
Bob O'H: "what you write helps a bit," I am happy of that! "but I don’t think it’s satisfactory." I would never have dreamed of that. :) "Even if I define the function without reference to what is measured, I can still (apparently) define the function to whatever level of specificity I want. So I could pile on specifications and force dFSI to be large enough. Or I could be vaguer, and lower it." No. That's not how things work. I will try to explain. It is true that we can define the function at any level we like. But, of course, if our purpose is to infer design (or not) for an observed object, we must define the function so that our definition includes the object. So, let's say that our object is ATP synthase (please, if you want have a look at my post #55. which makes some important points). There, I have used the following definition in the discussion: a machine that generates high energy ATP molecules using the energy derived from a proton gradient, in an appropriate membrane system This is rather generic, but for most purposes it would be fine. It is certainly good enough to infer design for that molecule. I could make the definition more specific, by fixing a threshold of efficiency, like "at least 100 micromoles of ATP per second per mg protein". That would probably include most instances of the molecule, maybe not all, but what's the point? The essence of the function is to be able to synthesize ATP from a proton gradient, with good efficiency, and in some real cellular context. In any case, whatever the definition, we are defining a "higher tail": the set we are defining includes all objects that exhibit at least the level of function we have defined, or more. When we define a function, we generate a binary partition in the search space. The set defined by our function (the target space) must include the observed object, if we are reasoning about some specific observed object. The target space / search space ratio will then express the probability of getting a functional object by a random search of the search space, in one attempt (we are assuming operationally an uniform distribution of the probability to reach an object in the search space). Of course, if we have many attempts, we will consider that (IOWs, the probabilistic resources of the system). So, when we define a function so that our definition includes our object, we are simply reasoning as we do in classical hypothesis testing: we are computing the probability of getting at least the observed function, or higher, if we accept the null hypothesis that the object is the result of a random search. In design inference, our purpose is to infer design for the object (or not). So, all we need is a definition which includes the object and that can set the functional complexity of the defined function above our threshold (in a general case, 500 bits). Of course, if we cannot find any such definition, we cannot infer design. And that is exactly the point: you cannot find any such functional definition for a non designed object, like for example a randomly generated sequence of any kind. Try as much as you like: you will never succeed. On the contrary, it's extremely easy to find such a definition for a lot of designed objects, like pieces of language and software. And, of course, it's extremely easy to find such a definition for ATP synthase, and a lot of other biological objects and systems. Quod erat demonstrandum. Of course, I don't believe for a moment that you will think that this is satisfactory... :)gpuccio
December 6, 2016 at 06:23 AM PDT
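The "many attempts" step in the comment above can also be made concrete. A minimal sketch of the tail probability under the uniform-random null hypothesis gpuccio describes, with probabilistic resources expressed in bits of attempts; the plugged-in numbers are taken from elsewhere in this thread, and the uniform model is the comment's stated operational assumption.

```python
import math

def p_at_least_one_success(dfsi_bits: float, resources_bits: float) -> float:
    """P(at least one hit) when a target of dfsi_bits is searched with
    2**resources_bits independent uniform random attempts."""
    p_single = 2.0 ** -dfsi_bits       # probability of a hit in one attempt
    n_attempts = 2.0 ** resources_bits
    # 1 - (1 - p)^n, computed stably for tiny p via log1p/expm1:
    return -math.expm1(n_attempts * math.log1p(-p_single))

# 500 bits of dFSI against 120 bits of probabilistic resources:
print(p_at_least_one_success(500, 120))                  # ~4e-115: effectively zero

# ~18.8 bits against 400,000 attempts (~18.6 bits of resources):
print(p_at_least_one_success(18.8, math.log2(400_000)))  # ~0.58: quite likely
```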
Bob
Even if I define the function without reference to what is measured, I can still (apparently) define the function to whatever level of specificity I want.
I know GP can answer this much better than I can, but the function has to be real and observable. Yes, you can pile on criteria for specification, but I don't think you can strip away the specifications that define some complex functions at their very minimum level. It's something like irreducible complexity. We look at functions that are complex in their simplest form. There's no need to pile on specifications. ID looks at those examples as markers. Again, the intent of ID is not to classify all of reality into "Designed" and "Non-Designed" categories. Its claim is that "some things in nature" exhibit evidence of design. All it needs to show is that this is true for some things, measured by their quantity of dFSCI. This is validated by testing the power of "non-intelligent" agents (randomness, natural processes) to produce the dFSCI that is observed in those instances.
Silver Asiatic
December 6, 2016 at 05:51 AM PDT
Bob
SA @ 46 – you seem to be suggesting that I want to specify my function so that dFSI is small enough for what I want. Isn’t this cheating? Surely I should be honest and define my function without regard to what my expected result would be.
I think your terminology (I know there are many versions) is missing something, though. It's not merely dFSI, but rather dFSCI. It's the "C" that is necessary. Is your function complex? That's what you're looking for: a complex specified function. Now, you could seek the most ambiguous example – you could look for the gray areas. You could try to find the least complex function. But I'd call that cheating. If you're not willing to look at the most obvious cases first, then why not? We're looking for complex, specified function. Why not go to the most obvious observations of that? That's the starting point, not the most ambiguous and debatable observations. Agreed? As I said, evolution does the same thing. It doesn't hold up the most ambiguous results as examples. Instead, it looks for the strongest, most obvious results. Anti-evolutionists have to argue against those, not the ambiguous observations. The same goes for you: look at the most obvious examples of dFSCI first. Argue against those, and not the borderline cases.
Silver Asiatic
December 6, 2016 at 05:41 AM PDT
SA @ 46 - you seem to be suggesting that I want to specify my function so that dFSI is small enough for what I want. Isn't this cheating? Surely I should be honest and define my function without regard to what my expected result would be.

GPuccio @ 47 & 48 - what you write helps a bit, but I don't think it's satisfactory. Even if I define the function without reference to what is measured, I can still (apparently) define the function to whatever level of specificity I want. So I could pile on specifications and force dFSI to be large enough. Or I could be vaguer, and lower it. I can't see how what you write stops that. I find this troubling (it's similar to problems in systematics using morphological characters: the character scoring and the choice of which characters to use are subjective, which leads to lots of arguments. It's one reason why DNA methods are preferred nowadays), because it looks easy to abuse.

PaV @ 53 - if you have read this thread, you'll see that one argument I've made a few times is that I'm not doing your work for you. You (as a group, not necessarily you individually) should be able to find a way of testing your design inference on several designed and non-designed objects: the replication is important, because you are claiming to have a general method.
Bob O'H
December 6, 2016 at 04:08 AM PDT
GPuccio: The important point is: each level has its complexities, and as we try to get to the final functional result, the complexities increase exponentially, because the search space increases much more quickly than the target space.
Which shows that the search space for an entire organism is unfathomably huge. And on top of that, an organism, unlike a sonnet or a password, is constantly changing. Perhaps one could say that an organism is a collection of many, many different functionally coherent structures which alternate over time. And if 'functionally coherent structures' are the target space, then an organism manages to find a multitude of targets in an unfathomably huge search space. What are the true odds?
... however many ways there may be of being alive, it is certain that there are vastly more ways of being dead ... [Dawkins]
Origenes
December 6, 2016 at 02:04 AM PDT
Origenes: I agree. In the same way, the twenty amino acids are used as letters to get the secondary structures that are used as blocks to get the tertiary and quaternary structures that allow a protein to implement its specific function. The important point is: each level has its complexities, and as we try to get to the final functional result, the complexities increase exponentially, because the search space increases much more quickly than the target space. That can be shown easily for language, as I have tried to do here: https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/ but the same is true for all forms of dFSI.

Another important point is that the higher-level function is there only when the whole object has been configured. It is not the "sum" of lower-level functions, but rather a specific functional configuration of them. So, the properties of individual amino acids are necessary to have ATP synthase, but they are not ATP synthase. And the properties of alpha helices and beta sheets are certainly necessary to have ATP synthase, but they are not ATP synthase. And the properties of each of the 8 (in the simplest form) subunits of ATP synthase, and of the 2 macro-subunits (F0 and F1), are certainly necessary to have ATP synthase, but they are not ATP synthase.

The simple truth is: ATP synthase is an extremely complex machine that generates high-energy ATP molecules using the energy derived from a proton gradient, in an appropriate membrane system. To be able to implement that function, which as you can see can easily be defined independently of any knowledge of the specific sequence or structure of the molecule, it requires an extremely complex organization. At the digital level, that organization is written in the primary sequence of at least 8 molecules, for a sum total of 2082 AAs in E. coli. As I have discussed many times here, the alpha and beta chains alone, which are the most conserved, are formed in E. coli by 973 AAs, and 624 of them (64%) are perfectly conserved in humans. That represents a total BLAST score, in a simple BLAST comparison of the two chains, of 1219 bits. That functional information has been conserved from E. coli to humans, through billions of years of evolution. And ATP synthase is one of the oldest proteins we know. These are true examples of digital complex functional information in biology. There are tons of them.
gpuccio
December 5, 2016 at 11:46 PM PDT
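The conservation figure gpuccio quotes can be checked in a few lines. A naive sketch only: the toy fragments below are invented for illustration, and a real comparison needs proper alignment and substitution scoring (BLAST or similar), not a bare identity count.

```python
def conserved_fraction(seq_a: str, seq_b: str) -> float:
    """Fraction of identical positions in two pre-aligned, gap-free,
    equal-length sequences (a crude stand-in for an alignment score)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(a == b for a, b in zip(seq_a, seq_b)) / len(seq_a)

# Toy fragments, invented for illustration only:
print(conserved_fraction("MEKLVNAGR", "MEKIVNSGR"))  # ~0.78

# The figures quoted above for the alpha + beta chains (E. coli vs. human):
print(f"{624 / 973:.0%}")                            # 64%, as stated in the comment
```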
GPuccio, thank you for the confirmation. Maybe terms like 'sub-function' or 'low-level function' are apt to make this point. These terms make sense within the context of "functional coherence," which is defined by Douglas Axe as a "complete alignment of low-level functions in support of the top-level function." As a familiar example of functional coherence, Axe offers 'alphabetic written languages', which "use letters as the basic building blocks at the bottom level. These letters are arranged according to the conventions of spelling to form words one level up. To reach the next higher level, words are chosen for the purpose of expressing a thought and arranged according to grammatical conventions of sentence structure in order for that thought to be intelligibly conveyed." [Axe, "Undeniable", Ch. 9]
Origenes
December 5, 2016 at 02:55 PM PDT
Bob O'H: I think we need to start at square one here. IOW, Bob, since you want someone to do a dFSI calculation for a "non-designed" object, you first need to give us an example of such an object. What do you have in mind? And, obviously, biological objects are the very objects in dispute here and so fall outside of what can be considered "non-designed." So, please, furnish us with an object.
PaV
December 5, 2016 at 02:46 PM PDT
Origenes: Exactly! :)
gpuccio
December 5, 2016 at 01:40 PM PDT
GPuccio, Do I understand you correctly?
GPuccio: There is only one important rule, which is a natural derivation of the definition. The function must not be “built” on a specific observed sequence.
So, one cannot pick a random stone, examine its properties, next design and build a complex functional machine which depends on the stone's properties, and then claim that the stone contained FSI all along.
GPuccio: IOWs, we cannot use the information of an existing sequence to define the function, or to generate it (as in the case of the password).
Indeed. That would be in principle the same method: the function of a sequence (or the properties of a stone) is designed after it has been observed. IOWs, the function must already be present — it must be part (a sub-function) of a functional whole — and not be added at a later date.
Origenes
December 5, 2016 at 01:33 PM PDT
Silver Asiatic: Exactly! :)
gpuccio
December 5, 2016 at 12:08 PM PDT
GP Very good explanation, again! I missed that part of the definition. A function is not merely the observed sequence and the observer; it also requires "that which is acted upon by the function." In the case of a random string, the observer could say its function is to open an imaginary safe. But an imaginary safe is not the proper subject of scientific analysis. It needs to be a real safe that that sequence of characters works to open.
Silver Asiatic
December 5, 2016 at 11:58 AM PDT
Bob O'H at #42: By the way, please note that, while the definition of the function is made by a conscious observer, there is nothing subjective in the use we make of that definition in the reasoning. Indeed:

1) Any possible function defined by any possible observer for any possible object can be used to measure functional information (with the only restriction explained in my previous post).

2) Whatever the function, it must be defined explicitly and objectively, so that a value of functional information can be objectively measured for that function.

3) Any function correctly and objectively defined, that requires at least 500 bits of functional information to be implemented, can be used to infer design for an object, if the observed object can implement that function.
gpuccio
December 5, 2016 at 10:31 AM PDT
Bob O'H at #42: So, you decided to join the discussion. That's fine, I appreciate that. You raise an important point, one that I have debated in detail in the past. You say: "Both of you agree that the observer chooses the function. But that makes it subjective, so it should be easy to get almost any object (designed or not) above 500 bits. Just define the function tightly enough. Do you have any way of restricting the specification to avoid that?"

Let's see. I state again my definition: any observer can define any function. If an object is observed that can implement a function that requires at least 500 bits of information, then we can infer design for the object. As you can see, there is no restriction here. But it is important to understand well the meaning of what I am saying. A function is simply something that you can do with the object. For each explicit definition of a function we can try to measure the functional complexity linked to the function as -log2 of the ratio of the target space to the search space, as explained in my post here: https://uncommondescent.com/intelligent-design/functional-information-defined/

OK, so what is the possible restriction in defining the function? There is only one important rule, which is a natural derivation of the definition. The function must not be "built" on a specific observed sequence. Of course, we can always build a function for a sequence that we have already observed, even if it is a totally random sequence that in itself cannot be used for anything complex. For example, we can observe a ten-digit sequence, obtained in a completely random way, for example: 3744698236 and make it the password for a safe. This is obviously a trick, and it is not a correct definition of a function.

The simple rule is: the function must be defined independently of any specific sequence observed. IOWs, we cannot use the information of an existing sequence to define the function, or to generate it (as in the case of the password). We can well use the observed properties of an object to define a function. For example, if we have an observed sequence that works as the password for a safe, we can well define the function: "any sequence that works as the password for this safe". In this definition, we are not using any information about any specific sequence: we are only defining what the sequence can do. And we are not using the observed sequence to set a password for the safe.

The only case in which we can use a specific sequence to define a function is the case (scarcely relevant for our discussion) of pre-specification. IOWs, we can use a specific sequence like: 3744698236 and define the function as: any new ten-digit sequence that is generated in a random system and that is identical to the above sequence. In this case, the observed sequence is used to define a function (or to set a password), but it is used only as a reference; IOWs, it is not considered an observed sequence generated randomly. Any new sequence that is generated randomly, identical to the reference, will satisfy the search. But in functional specification, the function is usually only a definition of what can be done with some object.

Now, if you stick to this simple rule, I state again that what you wrote – "it should be easy to get almost any object (designed or not) above 500 bits" – is simply not true. It's not easy. It's simply impossible (with non-designed objects). You don't believe me? Try! After all, you said that it is easy. :)

Of course, this explanation is rather brief. But I am ready to discuss every single aspect of this issue with you, or with anyone else, in detail, as I have done many times in the past. This is just a first summary.
gpuccio
December 5, 2016 at 10:25 AM PDT
Bob O'H
I’m not sure why I’d want to do that.
Because you want to explain the origin of protein folds. You observe a widely recognized biological function. It's far less subjective – it's a known function. Now you have to see whether it can be explained within the design parameter, less than 500 bits. That's why you'd want to do this. Or, you want to see what the "Edge of Evolution" is, don't you?
If something not designed exceeds the design boundary, then you have a problem which will need some fixing. If everything that isn’t designed fails to exceed the boundary then you have a good design detector.
If we took the same approach toward morphological analysis, we would dismiss fossils as evidence. We observe two fossils that look similar, so "evolution detection" says they're ancestral. Then we look at phylogenetic analysis and see they're non-ancestral. So, the fossil observations gave a false positive. Does this invalidate morphological studies? No, because researchers will dismiss ambiguous results. ID cannot rule out design in any or every situation. It has to look for key, or most obvious, markers. Where there is ambiguity, research cannot proceed. It's the same with ambiguous fossils: they can be fit into any hierarchy, or none. ID doesn't make claims about non-design either. It can't prove, necessarily, that something is not designed (take a Jackson Pollock painting, or some random drops on a canvas from paint-can spills). It only shows indicators where there is positive evidence of design. The focus is much narrower than what you're demanding – and you'd have to apply the same standard to fossil analysis otherwise.
Silver Asiatic
December 5, 2016 at 09:29 AM PDT
SA @ 43 -
Just thinking out loud here, but it’s not a question of getting any function to 500 bits (yes, you could create highly constrained functions), but in getting some functions under 500 bits.
I'm not sure why I'd want to do that. I just want to measure functional specificity, whatever the value. If something not designed exceeds the design boundary, then you have a problem which will need some fixing. If everything that isn't designed fails to exceed the boundary, then you have a good design detector. But this needs to be tested, in my opinion.
Bob O'H
December 5, 2016 at 08:42 AM PDT