
Defending Intelligent Design theory: Why targets are real targets, probabilities real probabilities, and the Texas Sharp Shooter fallacy does not apply at all.


The aim of this OP is to discuss, in some order and with some completeness, a few related objections to ID theory, all of which are in one way or another connected to the argument that goes under the name of the Texas Sharp Shooter fallacy, sometimes used as a criticism of ID.

The argument that the TSS fallacy is a valid objection against ID has been presented many times by DNA_Jock, a very good discussant from the other side. So, I will refer in some detail to his arguments, as I understand and remember them. Of course, if DNA_Jock thinks that I am misrepresenting his ideas, I am ready to acknowledge any correction about that. He can post here, if he is able and willing, or at TSZ, where he is a contributor.

However, I think that the issues discussed in this OP are of general interest, and that they touch some fundamental aspects of the debate.

As a help to those who read this, I will sum up the general structure of this OP, which will probably be rather long. I will discuss three different, somewhat related arguments. They are:

a) The application of the Texas Sharp Shooter fallacy to ID, and why that application is completely wrong.

b) The objection of the different possible levels of function definition.

c) The objection of the possible alternative solutions, and of the incomplete exploration of the search space.

Of course, the issue debated here is, as usual, the design inference, and in particular its application to biological objects.

So, let’s go.

a) The Texas Sharp Shooter fallacy and its wrong application to ID.

 

What’s the Texas Sharp Shooter fallacy (TSS)?

It is a logical fallacy. I quote here a brief description of the basic metaphor, from RationalWiki:

The fallacy’s name comes from a parable in which a Texan fires his gun at the side of a barn, paints a bullseye around the bullet hole, and claims to be a sharpshooter. Though the shot may have been totally random, he makes it appear as though he has performed a highly non-random act. In normal target practice, the bullseye defines a region of significance, and there’s a low probability of hitting it by firing in a random direction. However, when the region of significance is determined after the event has occurred, any outcome at all can be made to appear spectacularly improbable.

For our purposes, we will use a scenario in which specific targets appear to have been hit by a shooter. This is the scenario that best resembles what we see in biological objects, where we observe a great number of functional structures, in particular proteins, and we try to understand the causes of their origin.

In ID, as is well known, we use functional information as a measure of the improbability of an outcome. The general idea is similar to Paley’s argument for the watch: a very high level of specific functional information in an object is a very reliable marker of design.

But to evaluate functional information in any object, we must first define a function, because the measure of functional information depends on the function defined. And the observer must be free to define any possible function, and then measure the linked functional information. Given these premises, the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information) for an explicitly defined function (whatever it is), we can safely infer design.
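
As a minimal numerical sketch of this measure (the numbers below are placeholders, not taken from any real protein), functional information is simply the negative log2 of the fraction of the search space that implements the defined function:

```python
import math

def functional_information(target_space, search_space):
    """Functional information in bits: -log2(target space / search space)."""
    return -math.log2(target_space / search_space)

# Placeholder example: a function implemented by 10^40 sequences
# out of a search space of 20^150 sequences.
fi = functional_information(1e40, 20.0 ** 150)
print(round(fi))   # ~515 bits, above the 500-bit threshold mentioned above
```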

Now, the objection that we are discussing here is that, according to some people (for example DNA_Jock), by defining the function after we have observed the object, as we do in ID theory, we are committing the TSS fallacy. I will show why that is not the case using an example, because examples are clearer than abstract words.

So, in our example, we have a shooter, a wall which is the target of the shooting, and the shooting itself. And we are the observers.

We know nothing of the shooter. But we know that a shooting takes place.

Our problem is:

  1. Is the shooting a random shooting? This is the null hypothesis

or:

  2. Is the shooter aiming at something? This is the “aiming” hypothesis

So, here I will use “aiming” instead of design, because my neo-darwinist readers will probably stay more relaxed. But, of course, aiming is a form of design (a conscious representation outputted to a material system).

Now I will describe three different scenarios, and I will deal in detail with the third.

  1. First scenario: no fallacy.

In this case, we can look at the wall before the shooting. We see that there are 100 targets painted in different parts of the wall, rather randomly, with their beautiful colors (let’s say red and white). By the way, the wall is very big, so the targets are really a small part of the whole wall, even if taken together.

Then, we witness the shooting: 100 shots.

We go again to the wall, and we find that all 100 shots have hit the targets, one per target, and just at the center.

Without any worries, we infer aiming.

I will not compute the probabilities here, because we are not really interested in this scenario.

This is a good example of pre-definition of the function (the targets to be hit). I believe that neither DNA_Jock nor any other discussant will have problems here. This is not a TSS fallacy.

  2. Second scenario: the fallacy.

The same setting as above. However, we cannot look at the wall before the shooting. No pre-specification.

After the shooting, we go to the wall and paint a target around each of the different shots, for a total of 100. Then we infer aiming.

Of course, this is exactly the TSS fallacy.

There is a post-hoc definition of the function. Moreover, the function is obviously built (painted) to correspond to the information in the shots (their location). More on this later.

Again, I will not deal in detail with this scenario because I suppose that we all agree: this is an example of TSS fallacy, and the aiming inference is wrong.

  3. Third scenario: no fallacy.

The same setting as above. Again, we cannot look at the wall before the shooting. No pre-specification.

After the shooting, we go to the wall. This time, however, we don’t paint anything.

But we observe that the wall is made of bricks, small bricks. Almost all the bricks are brown. But there are a few that are green. Just a few. And they are randomly distributed in the wall.


We also observe that all the 100 shots have hit green bricks. No brown brick has been hit.

Then we infer aiming.

Of course, the inference is correct. No TSS fallacy here.

And yet, we are using a post-hoc definition of function: shooting the green bricks.

What’s the difference with the second scenario?

The difference is that the existence of the green bricks is not something we “paint”: it is an objective property of the wall. And, even if we do use something that we observe post-hoc (the fact that only the green bricks have been shot) to recognize the function post-hoc, we are not using in any way the information about the specific location of each shot to define the function. The function is defined objectively and independently from the contingent information about the shots.

IOWs, we are not saying: well, the shooter was probably aiming at point x1 (coordinates of the first shot) and point x2 (coordinates of the second shot), and so on. We just recognize that the shooter was aiming at the green bricks. An objective property of the wall.

IOWs (I use many IOWs, because I know that this simple concept will meet great resistance in the minds of our neo-darwinist friends), we are not “painting” the function, we are simply “recognizing” it, and using that recognition to define it.

Well, this third scenario is a good model of the design inference in ID. It corresponds very well to what we do in ID when we make a design inference for functional proteins. Therefore, the procedure we use in ID is no TSS fallacy. Not at all.

Given the importance of this model for our discussion, I will try to make it more quantitative.

Let’s say that the wall is made of 10,000 bricks in total.

Let’s say that there are only 100 green bricks, randomly distributed in the wall.

Let’s say that all the green bricks have been hit, and no brown brick.

What are the probabilities of that result if the null hypothesis is true (IOWs, if the shooter was not aiming at anything) ?

The probability of one successful hit (where success means hitting a green brick) is of course 0.01 (100/10000).

The probability of having 100 successes in 100 shots can be computed using the binomial distribution. It is:

10^-200

IOWs, the system exhibits 664 bits of functional information. More or less like the TRIM62 protein, an E3 ligase discussed in my previous OP about the Ubiquitin system, which exhibits an increase of 681 bits of human conserved functional information at the transition to vertebrates.
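
For readers who want to check the arithmetic, here is the computation just described, as a minimal sketch in Python (plain floating point is enough here, since 10^-200 is still representable):

```python
import math

bricks, green, shots = 10_000, 100, 100
p_green = green / bricks                 # chance of hitting a green brick at random: 0.01

# Null hypothesis: 100 independent random shots, all hitting green bricks
p_all_green = p_green ** shots           # 0.01^100 = 10^-200
bits = -math.log2(p_all_green)

print(p_all_green)     # 1e-200
print(round(bits))     # 664 bits of functional information
```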

Now, let’s stop for a moment for a very important step. I am asking all neo-darwinists who are reading this OP a very simple question:

In the above situation, do you infer aiming?

It’s very important, so I will ask it a second time, a little louder:

In the above situation, do you infer aiming? 

Because if your answer is no, if you still think that the above scenario is a case of TSS fallacy, if you still believe that the observed result is not unlikely, that it is perfectly reasonable under the assumption of a random shooting, then you can stop here: you can stop reading this OP, you can stop discussing ID, at least with me. I will go on with the discussion with the reasonable people who are left.

So, at the end of this section, let’s state once more the truth about post-hoc definitions:

  1. A post-hoc definition that “paints” the function using the information from the specific details of what is observed is never correct. Such definitions are clear examples of the TSS fallacy.
  2. On the contrary, any post-hoc definition that simply recognizes a function which is related to an objectively existing property of the system, and makes no special use of the specific details of what is observed to “paint” the function, is perfectly correct. It is not a case of TSS fallacy.

 

b) The objection of the different possible levels of function definition.

DNA_Jock summed up this specific objection in the course of a long discussion in the thread about the English language:

Well, I have yet to see an IDist come up with a post-specification that wasn’t a fallacy. Let’s just say that you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.

OK, I have just discussed why post-specifications are not in themselves a fallacy. Let’s say that DNA_Jock apparently admits it, because he just says that we have to be very cautious in applying them. I agree with that, and I have explained what the caution should be about.

Of course, I don’t agree that ID’s post-hoc specifications are a fallacy. They are not, not at all.

And I absolutely don’t agree with his argument that one of the reasons why ID’s post-hoc specifications are a fallacy would be that “You can make the probability arbitrarily small by making the specification arbitrarily precise.”

Let’s try to understand why.

So, let’s go back to our example 3), the wall with the green bricks and the aiming inference.

Let’s make our shooter a little less precise: let’s say that, out of 100 shots, only 50 hits are green bricks.

Now, the math becomes:

The probability of one successful hit (where success means hitting a green brick) is still 0.01 (100/10000).

The probability of having 50 successes or more in 100 shots can be computed using the binomial distribution. It is:

6.165016e-72

Now, the system exhibits “only” 236 bits of functional information. Much less than in the previous example, but still more than enough, IMO, to infer aiming.

Consider that five sigma, which is often used as a standard in physics to reject the null hypothesis, is just 3×10^-7, less than 22 bits.

Now, DNA_Jock’s objection would be that our post-hoc specification is not valid because “we can make the probability arbitrarily small by making the specification arbitrarily precise”.

But is that true? Of course not.

Let’s say that, in this case, we try to “make the specification arbitrarily more precise”, defining the function of sharp aiming as “hitting only green bricks with all 100 shots”.

Well, we are definitely “making the probability arbitrarily small by making the specification arbitrarily precise”. Indeed, we are making the specification more precise for about 128 orders of magnitude! How smart we are, aren’t we?

But if we do that, what happens?

A very simple thing: the facts that we are observing do not meet the specification anymore!

Because, of  course, the shooter hit only 50 green bricks out of 100. He is smart, but not that smart.

Nor would we be smart if we did such a foolish thing, defining a function that is not met by the observed facts!

The simple truth is: in our post-hoc specification we cannot at all “make the probability arbitrarily small by making the specification arbitrarily precise”, as DNA_Jock argues, because otherwise our facts would no longer meet our specification, and the whole exercise would be completely useless and irrelevant.

What we can and must do is exactly what is always done in all cases where hypothesis testing is applied in science (and believe me, that happens very often).

We compute the probabilities of observing the effect that we are indeed observing, or a higher one, if we assume the null hypothesis.

That’s why I have said that the probability of “having 50 successes or more in 100 shots” is 6.165016e-72.

This is called a tail probability, in particular the probability of the upper tail. And it’s exactly what is done in science, in most scenarios.
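
A minimal sketch of that tail computation, assuming SciPy is available (this is just one way to reproduce the number; the survival function `binom.sf(49, n, p)` gives P(X ≥ 50)):

```python
import math
from scipy.stats import binom

n, p = 100, 0.01        # 100 shots, 1% chance of hitting a green brick at random

# Upper-tail probability: P(X >= 50) under the null hypothesis
p_tail = binom.sf(49, n, p)
print(p_tail)                          # ~6.17e-72
print(round(-math.log2(p_tail)))       # ~236 bits

# For comparison, the five-sigma convention mentioned above
print(round(-math.log2(3e-7), 1))      # ~21.7 bits
```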

Therefore, DNA_Jock’s argument is completely wrong.

c) The objection of the possible alternative solutions, and of the incomplete exploration of the search space.

c1) The premise

This is certainly the most complex point, because it depends critically on our understanding of protein functional space, which is far from complete.

For the discussion to be in some way complete, I first have to present a very general premise. Neo-darwinists, or at least the best of them, when they realize that they have nothing better to say, usually resort in desperation to a set of arguments related to the functional space of proteins. The reason is simple enough: since the nature and structure of that space are still not well known or understood, it is easier to equivocate with fallacious reasoning.

Their purpose, in the end, is always to suggest that functional sequences can be much more frequent than we believe. Or at least, that they are much more frequent than IDists believe. Because, if functional sequences are frequent, it’s certainly easier for RV to find them.

The arguments for this imaginary frequency of biological function are essentially of five kinds:

  1. The definition of biological function.
  2. The idea that there are a lot of functional islands.
  3. The idea that functional islands are big.
  4. The idea that functional islands are connected. The extreme form of this argument is that functional islands simply don’t exist.
  5. The idea that the proteins we are observing are only optimized forms that derive from simpler implementations through some naturally selectable ladder of simple steps.

Of course, different mixtures of the above arguments are also frequently used.

OK, let’s get rid of the first, which is rather easy. Of course, if we define extremely simple biological functions, they will be relatively frequent.

For example, the famous Szostak experiment shows that a weak affinity for ATP is relatively common in a random library: about 1 in 10^11 sequences of 80 AAs.

A weak affinity for ATP is certainly a valid definition for a biological function. But it is a function which is at the same time irrelevant and not naturally selectable. Only naturally selectable functions are of any interest for neo-darwinian theory.

Moreover, most biological functions that we observe in proteins are extremely complex. A lot of them have a functional complexity beyond 500 bits.

So, we are only interested in functions in the protein space which are naturally selectable, and we are specially interested in functions that are complex, because those are the ones about which we make a design inference.

The remaining points are subtler.

  2. The idea that there are a lot of functional islands.

Of course, we don’t know exactly how many functional islands exist in the protein space, even restricting the concept of function to what was said above. Neo-darwinists hope that there are a lot of them. I think there are many, but not so many.

But the problem, again, is drastically scaled down if we consider that not all functional islands will do. Going back to point 1, we need naturally selectable islands. And what can be naturally selected is much less than what can potentially be functional. A naturally selectable island of function must be able to confer a reproductive advantage. In a system that already has high complexity, like any living cell, the number of functions that can be immediately integrated into what already exists is certainly strongly constrained.

This point is also strictly connected to the other two points, so I will go on with them and then try some synthesis.

  3. The idea that functional islands are big.

Of course, functional islands can be of very different sizes. That depends on how many sequences, related at sequence level (IOWs, that are part of the same island), can implement the function.

Measuring functional information in a sequence by conservation, as in the Durston method or in my procedure described many times, is an indirect way of measuring the size of a functional island. The greater the functional complexity of an island, the smaller its size in the search space.

Now, we must remember a few things. Let’s take as an example an extremely conserved but not too long sequence, our friend ubiquitin. It’s 76 AAs long. Therefore, the associated search space is 20^76: 328 bits.

Of course, even the ubiquitin sequence can tolerate some variation, but it is still one of the most conserved sequences in evolutionary history. Let’s say, for simplicity, that at least 70 AAs are strictly conserved, and that 6 can vary freely (of course, that’s not exact, just an approximation for the sake of our discussion).

Therefore, using the absolute information potential of 4.3 bits per amino acid, we have:

Functional information in the sequence = 303 bits

Size of the functional island = 328 – 303 = 25 bits

Now, a functional island of 25 bits is not exactly small: it corresponds to about 33.5 million sequences.

But it is infinitely tiny if compared to the search space of 328 bits:  7.5 x 10^98 sequences!
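
The same arithmetic in code, using the simplifying 70-conserved / 6-free split assumed above (small differences from the rounded figures in the text are just rounding):

```python
import math

BITS_PER_AA = math.log2(20)        # ~4.32 bits per amino acid site

length, conserved = 76, 70         # ubiquitin, with the simplifying assumption above

search_space_bits = round(length * BITS_PER_AA)         # 328 bits
functional_info_bits = round(conserved * BITS_PER_AA)   # 303 bits
island_bits = search_space_bits - functional_info_bits  # 25 bits

print(island_bits, 2 ** island_bits)   # 25 bits -> 33,554,432 sequences (~33.5 million)
print(f"{20.0 ** length:.3g}")         # ~7.55e98 sequences in the whole search space
```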

If the sequence is longer, the relationship between island space and search space (the ocean where the island is placed) becomes much worse.

The beta chain of ATP synthase (529 AAs), another old friend, exhibits 334 identities between E. coli and humans. Again, for the sake of simplicity, let’s consider that about 300 AAs are strictly conserved, and let’s ignore the functional constraint on all the other AA sites. That gives us:

Search space = 20^529 = 2286 bits

Functional information in the sequence = 1297 bits

Size of the functional island =  2286 – 1297 = 989 bits

So, with this computation, there could be about 10^297 sequences that can implement the function of the beta chain of ATP synthase. That seems a huge number (indeed, it’s definitely an overestimate, but I always try to be generous, especially when discussing a very general principle). However, now the functional island is 10^390 times smaller than the ocean, while in the case of ubiquitin it was “just” 10^91 times smaller.

IOWs, the search space (the ocean) increases exponentially much more quickly than the target space (the functional island) as the length of the functional sequence increases, provided of course that the sequences always retain high functional information.

The important point is not the absolute size of the island, but its ratio to the vastness of the ocean.

So, the beta chain of ATP synthase is really a tiny, tiny island, much smaller than ubiquitin.
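
The same island-versus-ocean arithmetic applied to both proteins makes the scaling explicit: the ocean/island ratio, expressed in bits, is simply the functional information itself, so it grows with length as long as conservation stays high (the conserved-residue counts are the same simplifying assumptions used above):

```python
import math

BITS_PER_AA = math.log2(20)

def island_vs_ocean(length, conserved):
    ocean_bits = length * BITS_PER_AA          # whole search space
    fi_bits = conserved * BITS_PER_AA          # functional information
    island_bits = ocean_bits - fi_bits         # size of the functional island
    ratio_bits = ocean_bits - island_bits      # ocean/island ratio in bits (= fi_bits)
    return fi_bits, island_bits, ratio_bits

for name, length, conserved in [("ubiquitin", 76, 70),
                                ("ATP synthase beta chain", 529, 300)]:
    fi, island, ratio = island_vs_ocean(length, conserved)
    print(f"{name}: FI ~{fi:.0f} bits, island ~{island:.0f} bits, "
          f"ocean/island ~10^{ratio * math.log10(2):.0f}")
# ubiquitin: ocean/island ~10^91; beta chain: ocean/island ~10^390
```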

Now, what would be a big island? It’s simple: a functional island which can implement the same function at the same level, but with low functional information. The lower the functional information, the bigger the island.

Are there big islands? For simple functions, certainly yes. Behe quotes the antifreeze protein as an example. It has rather low FI.

But are there big islands for complex functions, like that of the ATP synthase beta chain? It’s absolutely reasonable to believe that there are none. Because the function here is very complex, and it cannot be implemented by a simple sequence, just as functional spreadsheet software cannot be written with a few bits of source code. Neo-darwinists will say that we don’t know that for certain. It’s true, we don’t know it for certain. But we know it almost for certain.

The simple fact remains: the only example of the beta chain of the F1 complex of ATP synthase that we know of is extremely complex.

Let’s go, for the moment, to the 4th argument.

  4. The idea that functional islands are connected. The extreme form of this argument is that functional islands simply don’t exist.

This is easier. We have a lot of evidence that functional islands are not connected, and that they are indeed islands, widely isolated in the search space of possible sequences. I will mention the two best pieces of evidence:

4a) All the functional proteins that we know of, those that exist in all the proteomes we have examined, are grouped into about 2000 superfamilies. By definition, a protein superfamily is a cluster of sequences that have:

  • no sequence similarity
  • no structure similarity
  • no function similarity

with all the other groups.

IOWs, islands in the sequence space.

4b) The best (and probably the only) good paper that reports an experiment where Natural Selection is really tested by an appropriate simulation is the rugged landscape paper:

Experimental Rugged Fitness Landscape in Protein Sequence Space

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0000096

Here, NS is correctly simulated in a phage system, because what is measured is infectivity, which in phages is of course strictly related to fitness.

The function studied is the retrieval of a partially damaged infectivity due to a partial random substitution in a protein linked to infectivity.

In brief, the results show a rugged landscape of protein function, where random variation and NS can rather easily find some low-level peaks of function, while the original wild-type, optimal peak of function cannot realistically be found, not only in the lab simulation, but in any realistic natural setting. I quote from the conclusions:

The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness.

I would recommend having a look at Fig. 5 in the paper to get an idea of what a rugged landscape is.

However, I will happily accept a suggestion from DNA_Jock, made in one of his recent comments at TSZ about my Ubiquitin thread, and with which I fully agree. I quote him:

To understand exploration one, we have to rely on in vitro evolution experiments such as Hayashi et al 2006 and Keefe & Szostak, 2001. The former also demonstrates that explorations one and two are quite different. Gpuccio is aware of this: in fact it was he who provided me with the link to Hayashi – see here.
You may have heard of hill-climbing algorithms. Personally, I prefer my landscapes inverted, for the simple reason that, absent a barrier, a population will inexorably roll downhill to greater fitness. So when you ask:

How did it get into this optimized condition which shows a highly specified AA sequence?

I reply
It fell there. And now it is stuck in a crevice that tells you nothing about the surface whence it came. Your design inference is unsupported.

Of course, I don’t agree with the last phrase. But I fully agree that we should think of local optima as “holes”, and not as “peaks”. That is the correct way.

So, the protein landscape is more like a ball and holes game, but without a guiding labyrinth: as long as the ball is on the flat plane (non-functional sequences), it can go in any direction, freely. However, when it falls into a hole, it will quickly go to the bottom, and most likely it will remain there.


But:

  • The holes are rare, and they are of different sizes
  • They are distant from one another
  • The same function can be implemented by different, distant holes, of different size

What does the rugged landscape paper tell us?

  • That the wildtype function that we observe in nature is an extremely small hole. To find it by RV and NS, according to the authors, we should start with a library of 10^70 sequences.
  • That there are other bigger holes which can partially implement some function retrieval, and that are in the range of reasonable RV + NS
  • That those simpler solutions are not bridges to the optimal solution observed in the wildtype. IOWs, they are different, and there is no “ladder” that NS can use to reach the optimal solution.

Indeed, falling into a bigger hole (a much bigger hole, in fact) is a severe obstacle to finding the tiny hole of the wildtype. Finding it is already almost impossible because it is so tiny, and it becomes even less likely if the ball falls into a big hole, because the ball will be trapped there by NS.

Therefore, to sum up, both the existence of 2000 isolated protein superfamilies and the evidence from the rugged landscape paper demonstrate that functional islands exist, and that they are isolated in the sequence space.

Let’s go now to the 5th argument:

  5. The idea that the proteins we are observing are only optimized forms that derive from simpler implementations by a naturally selectable ladder.

This is derived from the previous argument. If bigger functional holes do exist for a function (IOWs, simpler implementations), and they are definitely easier to find than the optimal solution we observe, why not believe that the simpler solutions were found first, and then opened the way to the optimal solution by a process of gradual optimization and natural selection of the steps? IOWs, a naturally selectable ladder?

And the answer is: because that is impossible, and all the evidence we have is against that idea.

First of all, even if we know that simpler implementations do exist in some cases (see the rugged landscape paper), it is not at all obvious that they exist as a general rule.

Indeed, the rugged landscape experiment is a very special case, because it is about retrieval of a function that has been only partially impaired by substituting a random sequence for part of an already existing, functional protein.

The reason is that, if they had completely knocked out the protein, infectivity, and therefore survival itself, would have been lost, and NS could not have acted at all.

In function retrieval cases, where the function is retained, even if at a reduced level, the role of NS is greatly helped: the function is already there, and can be optimized with a few naturally selectable steps.

And that is what happens in the case of the Hayashi paper. But the function is retrieved only very partially, and, as the authors say, there is no reasonable way to find the wildtype sequence, the optimal sequence, in that way. Because the optimal sequence would require, according to the authors, 35 AA substitutions, and a starting library of 10^70 random sequences.

What is equally important is that the holes found in the experiment are not connected to the optimal solution (the wildtype). They are different from it at sequence level.

IOWs, these bigger holes do not lead to the optimal solution. Not at all.

So, we have a strange situation: 2000 protein superfamilies, and thousands and thousands of proteins in them, that appear to be, in most cases, extremely functional, probably absolutely optimal. But we have absolutely no evidence that they have been “optimized”. They are optimal, but not necessarily optimized.

Now, I am not excluding that some optimization can take place in non design systems: we have good examples of that in the few known microevolutionary cases. But that optimization is always extremely short, just a few AA substitutions once the starting functional island has been found, and the function must already be there.

So, let’s say that if the extremely tiny functional island where our optimal solution lies, for example the wildtype island in the rugged landscape experiment, can be found in some way, then some small optimization inside that functional island could certainly take place.

But first, we have to find that island: and for that we need 35 specific AA substitutions (about 180 bits), and 10^70 starting sequences, if we go by RV + NS. Practically impossible.

But there is more. Do those simpler solutions always exist? I will argue that it is not so in the general case.

For example, in the case of the alpha and beta chains of the F1 subunit of ATP synthase, there is no evidence at all that simpler solutions exist. More on that later.

So, to sum it up:

The ocean of the search space, according to the reasoning of neo-darwinists, should be overflowing with potential naturally selectable functions. This is not true, but let’s assume for a moment, for the sake of discussion, that it is.

But, as we have seen, simpler functions or solutions, when they exist, are much bigger functional islands than the extremely tiny functional islands corresponding to solutions with high functional complexity.

And yet, we have seen that there is absolutely no evidence that simpler solutions, when they exist, are bridges, or ladders, to highly complex solutions. Indeed, there is good evidence to the contrary.

Given those premises, what would you expect if the neo-darwinian scenario were true? It’s rather simple: a universal proteome overflowing with simple functional solutions.

Instead, what do we observe? It’s rather simple: a universal proteome overflowing with highly functional, probably optimal, solutions.

IOWs, we find in the existing proteome almost exclusively highly complex solutions, and not simple solutions.

The obvious conclusion? The neo-darwinist scenario is false. The highly functional, optimal solutions that we observe can only be the result of intentional and intelligent design.

c2) DNA_Jock’s arguments

Now I will examine in more detail DNA_Jock’s two arguments about alternative solutions and the partial exploration of the protein space, and explain why they are only variants of what I have already discussed, and therefore not valid.

The first argument, that we can call “the existence of alternative solutions”, can be traced to this statement by DNA_Jock:

Every time an IDist comes along and claims that THIS protein, with THIS degree of constraint, is the ONLY way to achieve [function of interest], subsequent events prove them wrong. OMagain enjoys laughing about “the” bacterial flagellum; John Walker and Praveen Nina laugh about “the” ATPase; Anthony Keefe and Jack Szostak laugh about ATP-binding; now Corneel and I are laughing about ubiquitin ligase: multiple ligases can ubiquinate a given target, therefore the IDist assumption is false. The different ligases that share targets ARE “other peaks”.
This is Texas Sharp Shooter.

We will debate the laugh later. For the moment, let’s see what the argument states.

It says: the solution we are observing is not the only one. There can be others, in some cases we know there are others. Therefore, your computation of probabilities, and therefore of functional information, is wrong.

Another way to put it is to ask the question: “how many needles are there in the haystack?”

Alan Fox seems to prefer this metaphor:

This is what is wrong with “Islands-of-function” arguments. We don’t know how many needles are in the haystack. G Puccio doesn’t know how many needles are in the haystack. Evolution doesn’t need to search exhaustively, just stumble on a useful needle.

They both seem to agree about the “stumbling”. DNA_Jock says:

So when you ask:

How did it get into this optimized condition which shows a highly specified AA sequence?

I reply
It fell there. And now it is stuck in a crevice that tells you nothing about the surface whence it came.

OK, I think the idea is clear enough. It is essentially the same idea as in point 2 of my general premise. There are many functional islands. In particular, in this form, many functional islands for the same function.

I will answer it in two parts:

  • Is it true that the existence of alternative solutions, if they exist, makes the computation of functional complexity wrong?
  • Do we really have evidence that alternative solutions exist, and of how frequent they really are?

I will discuss the first part here, and say something about the second part later in the OP.

Let’s read again the essence of the argument, as summed up by me above:

” The solution we are observing is not the only one. There can be others, in some cases we know there are others. Therefore, your computation of probabilities, and therefore of functional information, is wrong.”

As it happens with smart arguments (and DNA_Jock is usually smart), it contains some truth, but is essentially wrong.

The truth could be stated as follows:

” The solution we are observing is not the only one. There can be others, in some cases we know there are others. Therefore, our computation of probabilities, and therefore of functional information, is not completely precise, but it is essentially correct”.

To see why that is the case, let’s use again a very good metaphor: Paley’s old watch. That will help to clarify my argument, and then I will discuss how it relates to proteins in particular.

So, we have a watch. Whose function is to measure time. And, in general, let’s assume that we infer design for the watch, because its functional information is high enough to exclude that it could appear in any non design system spontaneously. I am confident that all reasonable people will agree with that. Anyway, we are assuming it for the present discussion.


Now, after having made a design inference (a perfectly correct inference, I would say) for this object, we have a sudden doubt. We ask ourselves: what if DNA_Jock is right?

So, we wonder: are there other solutions to measure time? Are there other functional islands in the search space of material objects?

Of course there are.

I will just mention four clear examples: a sundial, an hourglass, a digital clock, and an atomic clock.

The sundial uses the position of the sun. The hourglass uses a trickle of sand. The digital clock uses an electronic oscillator that is regulated by a quartz crystal to keep time. An atomic clock uses an electron transition frequency in the microwave, optical, or ultraviolet region.

None of them uses gears or springs.

Now, two important points:

  • Even if the functional complexity of the five solutions mentioned above is probably rather different (the sundial and the hourglass are probably much simpler, and the atomic clock is probably the most complex), they are all rather complex. None of them would be easily explained without a design inference. IOWs, they are small functional islands, each of them. Some are bigger, some are really tiny, but none of them is big enough to allow a random origin in a non design system.
  • None of the four additional solutions mentioned would be, in any way, a starting point to get to the traditional watch by small functional modifications. Why? Because they are completely different solutions, based on different ideas and plans.

If someone believes differently, he can try to explain in some detail how we can get to a traditional watch starting from an hourglass.


Now, an important question:

Does the existence of the four mentioned alternative solutions, or maybe of other possible similar solutions, make the design inference for the traditional watch less correct?

The answer, of course, is no.

But why?

It’s simple. Let’s say, just for the sake of discussion, that the traditional watch has a functional complexity of 600 bits. There are at least 4 additional solutions. Let’s say that each of them has, again, a functional complexity of 500 bits.

How much does that change the probability of getting the watch?

The answer is: about 2 bits (because we now have a handful of solutions instead of one). So now the functional complexity is about 598 bits.

But, of course, there can be many more solutions. Let’s say 1000. Now the functional complexity would be about 590 bits. Let’s say one million different complex solutions (this is becoming generous, I would say): 580 bits. One billion? 570 bits.

Shall I go on?

When the search space is really huge, the number of really complex solutions is empirically irrelevant to the design inference. One observed complex solution is more than enough to infer design. Correctly.
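
The arithmetic behind this point is easy to reproduce: every factor-of-N increase in the number of acceptable complex solutions subtracts only log2(N) bits from the observed functional information (the 600-bit figure is the hypothetical value used above for the watch):

```python
import math

observed_fi = 600   # hypothetical functional information of the watch, in bits

for n_solutions in (1, 5, 1_000, 1_000_000, 1_000_000_000):
    effective_fi = observed_fi - math.log2(n_solutions)
    print(f"{n_solutions:>13,} complex solutions -> ~{effective_fi:.0f} bits")
# 1 -> 600, 5 -> ~598, 1,000 -> ~590, 1,000,000 -> ~580, 1,000,000,000 -> ~570
```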

We could call this argument: “How many needles do you need to transform a haystack into a needlestack?” And the answer is: really a lot of them.

Our poor 4 alternative solutions will not do the trick.

But what if there are a number of functional islands that are much bigger, much more likely? Let’s say 50-bit functional islands. Much simpler solutions. Let’s say 4 of them. That would make the scenario more credible. Not by much, probably, but certainly it would work better than the 4 complex solutions.

OK, I have already discussed that above, but let’s say it again. Let’s say that you have 4 (or more) 50-bit solutions, and one (or more) 500-bit solutions. But what you observe as a fact is the 500-bit solution, and none of the 50-bit solutions. Is that credible?

No, it isn’t. Do you know how much smaller a 500-bit solution is compared to a 50-bit solution? It’s 2^450 times smaller: 10^135 times smaller. We are dealing with exponential values here.

So, if much simpler solutions existed, we would expect to observe one of them, and certainly not a solution that is 10^135 times more unlikely. The design inference for the highly complex solution is not disturbed in any way by the existence of much simpler solutions.
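
And the comparison between a 50-bit and a 500-bit solution, in code (again just restating the numbers above):

```python
import math

p_simple = 2.0 ** -50      # probability, under the null, of hitting a 50-bit island
p_complex = 2.0 ** -500    # probability, under the null, of hitting a 500-bit island

ratio = p_simple / p_complex
print(f"2^{math.log2(ratio):.0f}  ~  10^{math.log10(ratio):.0f}")
# 2^450 ~ 10^135: the simple solution, if it exists, is overwhelmingly
# more likely to be observed than the complex one
```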

OK, I think that the idea is clear enough.

c3) The laughs

As already mentioned, the issue of alternative solutions and uncounted needles seems to be a special source of hilarity for DNA_Jock.  Good for him (a laugh is always a good thing for physical and mental health). But are the laughs justified?

I quote here again his comment about the laughs, that I will use to analyze the issues.

Every time an IDist comes along and claims that THIS protein, with THIS degree of constraint, is the ONLY way to achieve [function of interest], subsequent events prove them wrong. OMagain enjoys laughing about “the” bacterial flagellum; John Walker and Praveen Nina laugh about “the” ATPase; Anthony Keefe and Jack Szostak laugh about ATP-binding; now Corneel and I are laughing about ubiquitin ligase: multiple ligases can ubiquinate a given target, therefore the IDist assumption is false. The different ligases that share targets ARE “other peaks”.

I will not consider the bacterial flagellum, that has no direct relevance to the discussion here. I will analyze, instead, the other three laughable issues:

  • Szostak and Keefe’s ATP binding protein
  • ATP synthase (rather than ATPase)
  • E3 ligases

Szostak and Keefe should not laugh at all, if they ever did. I have already discussed their paper a lot of times. It’s a paper about directed evolution which generates a strongly ATP binding protein from a weakly ATP binding protein present in a random library. It is directed evolution by mutation and artificial selection. The important point is that both the original weakly binding protein and the final strongly binding protein are not naturally selectable.

Indeed, a protein that just binds ATP is of course of no utility in a cellular context. Evidence of this obvious fact can be found here:

A Man-Made ATP-Binding Protein Evolved Independent of Nature Causes Abnormal Growth in Bacterial Cells

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0007385

There is nothing to laugh about here: the protein is a designed protein, and anyway it is no functional peak/hole at all in the sequence space, because it cannot be naturally selected.

Let’s go to ATP synthase.

DNA_Jock had already remarked:

They make a second error (as Entropy noted) when they fail to consider non-traditional ATPases (Nina et al).

And he gives the following link:

Highly Divergent Mitochondrial ATP Synthase Complexes in Tetrahymena thermophila

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2903591/

And, of course, he laughs with Nina (supposedly).

OK. I have already discussed that the existence of one or more highly functional, but different, solutions for building ATP would not change the ID inference at all. But is it really true that these other solutions exist?

Yes and no.

As far as my personal argument is concerned, the answer is definitely no (or at least, there is no evidence of them). Why?

Because my argument, repeated for years, has always been based (everyone can check) on the alpha and beta chains of ATP synthase, the main constituents of the F1 subunit, where the true catalytic function is implemented.

To be clear, ATP synthase is a very complex molecule, made of many different chains and of two main multiprotein subunits. I have always discussed only the alpha and beta chains, because those are the chains that are really highly conserved, from prokaryotes to humans.

The other chains are rather conserved too, but much less. So, I have never used them for my argument. I have never presented blast values regarding the other chains, or made any inference about them. This can be checked by everyone.

Now, the Nina paper is about a different solution for ATP synthase that can be found in some single-celled eukaryotes.

I quote here the first part of the abstract:

The F-type ATP synthase complex is a rotary nano-motor driven by proton motive force to synthesize ATP. Its F1 sector catalyzes ATP synthesis, whereas the Fo sector conducts the protons and provides a stator for the rotary action of the complex. Components of both F1 and Fo sectors are highly conserved across prokaryotes and eukaryotes. Therefore, it was a surprise that genes encoding the a and b subunits as well as other components of the Fo sector were undetectable in the sequenced genomes of a variety of apicomplexan parasites. While the parasitic existence of these organisms could explain the apparent incomplete nature of ATP synthase in Apicomplexa, genes for these essential components were absent even in Tetrahymena thermophila, a free-living ciliate belonging to a sister clade of Apicomplexa, which demonstrates robust oxidative phosphorylation. This observation raises the possibility that the entire clade of Alveolata may have invented novel means to operate ATP synthase complexes.

Emphasis mine.

As everyone can see, it is absolutely true that these protists have a different, alternative form of ATP synthase: it is based on a similar, but certainly divergent, architecture, and it uses some completely different chains. Which is certainly very interesting.

But this difference does not involve the sequence of the alpha and beta chains in the F1 subunit.

Beware, the a and b subunits mentioned above by the paper are not the alpha and beta chains.

From the paper:

The results revealed that Spot 1, and to a lesser extent, spot 3 contained conventional ATP synthase subunits including α, β, γ, OSCP, and c (ATP9)

IOWs, the “different” ATP synthase uses the same “conventional” forms of alpha and beta chain.

To be sure of that, I have, as usual, blasted them against the human forms. Here are the results:

ATP synthase subunit alpha, Tetrahymena thermophila, (546 AAs) Uniprot Q24HY8, vs  ATP synthase subunit alpha, Homo sapiens, 553 AAs (P25705)

Bitscore: 558 bits     Identities: 285    Positives: 371

ATP synthase subunit beta, Tetrahymena thermophila, (497 AAs) Uniprot I7LZV1, vs  ATP synthase subunit beta, Homo sapiens, 529 AAs (P06576)

Bitscore: 729 bits     Identities: 357     Positives: 408

These are the same, old, conventional sequences that we find in all organisms, the only sequences that I have ever used for my argument.
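
For anyone who wants to reproduce this kind of comparison, here is a minimal sketch using Biopython’s pairwise aligner (an assumption on my part: it requires Biopython installed and the two UniProt FASTA files, e.g. Q24HY8.fasta and P25705.fasta, downloaded locally beforehand; it reports a raw local alignment score and a simple identity count, not the NCBI BLAST bitscore quoted above, so the numbers will differ somewhat):

```python
from Bio import SeqIO
from Bio.Align import PairwiseAligner, substitution_matrices

def compare(fasta_a, fasta_b):
    """Local alignment score and identity count for two single-sequence FASTA files."""
    seq_a = str(next(SeqIO.parse(fasta_a, "fasta")).seq)
    seq_b = str(next(SeqIO.parse(fasta_b, "fasta")).seq)

    aligner = PairwiseAligner()
    aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
    aligner.open_gap_score = -11       # BLAST-like gap penalties
    aligner.extend_gap_score = -1
    aligner.mode = "local"

    aln = aligner.align(seq_a, seq_b)[0]
    identities = sum(
        x == y
        for (a0, a1), (b0, b1) in zip(*aln.aligned)
        for x, y in zip(seq_a[a0:a1], seq_b[b0:b1])
    )
    return aln.score, identities

# Hypothetical file names; download the FASTA entries from UniProt first.
score, ident = compare("Q24HY8.fasta", "P25705.fasta")
print(f"raw local alignment score: {score}, identities: {ident}")
```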

Therefore, for these two fundamental sequences, we have no evidence at all of any alternative peaks/holes. Which, if they existed, would however be irrelevant, as already discussed.

Not much to laugh about.

Finally, E3 ligases. DNA_Jock is ready to laugh about them because of this very good paper:

Systematic approaches to identify E3 ligase substrates

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5103871/

His idea, shared with other TSZ guys, is that the paper demonstrates that E3 ligases are not specific proteins, because a same substrate can bind to more than one E3 ligase.

The paper says:

Significant degrees of redundancy and multiplicity. Any particular substrate may be targeted by multiple E3 ligases at different sites, and a single E3 ligase may target multiple substrates under different conditions or in different cellular compartments. This drives a huge diversity in spatial and temporal control of ubiquitylation (reviewed by ref. [61]). Cellular context is an important consideration, as substrate–ligase pairs identified by biochemical methods may not be expressed or interact in the same sub-cellular compartment.

I have already commented elsewhere (in the Ubiquitin thread) that the fact that a substrate can be targeted by multiple E3 ligases at different sites, or in different sub-cellular compartments, is clear evidence of complex specificity. IOWs, it’s not that two or more E3 ligases bind the same target just to do the same thing; they bind the same target in different ways and in different contexts to do different things. The paper, even if very interesting, is only about detecting affinities, not function.

That should be enough to stop the laughs. However, I will add another simple concept. If E3 ligases were really redundant in the sense suggested by DNA_Jock and friends, their loss of function should not be a serious problem for us. OK, I will just quote a few papers (not many, because this OP is already long enough):

The multifaceted role of the E3 ubiquitin ligase HOIL-1: beyond linear ubiquitination.

https://www.ncbi.nlm.nih.gov/pubmed/26085217

HOIL-1 has been linked with antiviral signaling, iron and xenobiotic metabolism, cell death, and cancer. HOIL-1 deficiency in humans leads to myopathy, amylopectinosis, auto-inflammation, and immunodeficiency associated with an increased frequency of bacterial infections.

WWP1: a versatile ubiquitin E3 ligase in signaling and diseases.

https://www.ncbi.nlm.nih.gov/pubmed/22051607

WWP1 has been implicated in several diseases, such as cancers, infectious diseases, neurological diseases, and aging.

RING domain E3 ubiquitin ligases.

https://www.ncbi.nlm.nih.gov/pubmed/19489725

RING-based E3s are specified by over 600 human genes, surpassing the 518 protein kinase genes. Accordingly, RING E3s have been linked to the control of many cellular processes and to multiple human diseases. Despite their critical importance, our knowledge of the physiological partners, biological functions, substrates, and mechanism of action for most RING E3s remains at a rudimentary stage.

HECT-type E3 ubiquitin ligases in nerve cell development and synapse physiology.

https://www.ncbi.nlm.nih.gov/pubmed/25979171

The development of neurons is precisely controlled. Nerve cells are born from progenitor cells, migrate to their future target sites, extend dendrites and an axon to form synapses, and thus establish neural networks. All these processes are governed by multiple intracellular signaling cascades, among which ubiquitylation has emerged as a potent regulatory principle that determines protein function and turnover. Dysfunctions of E3 ubiquitin ligases or aberrant ubiquitin signaling contribute to a variety of brain disorders like X-linked mental retardation, schizophrenia, autism or Parkinson’s disease. In this review, we summarize recent findings about molecular pathways that involve E3 ligases of the Homologous to E6-AP C-terminus (HECT) family and that control neuritogenesis, neuronal polarity formation, and synaptic transmission.

Finally I would highly recommend the following recent paper to all who want to approach seriously the problem of specificity in the ubiquitin system:

Specificity and disease in the ubiquitin system

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5264512/

Abstract

Post-translational modification (PTM) of proteins by ubiquitination is an essential cellular regulatory process. Such regulation drives the cell cycle and cell division, signalling and secretory pathways, DNA replication and repair processes and protein quality control and degradation pathways. A huge range of ubiquitin signals can be generated depending on the specificity and catalytic activity of the enzymes required for attachment of ubiquitin to a given target. As a consequence of its importance to eukaryotic life, dysfunction in the ubiquitin system leads to many disease states, including cancers and neurodegeneration. This review takes a retrospective look at our progress in understanding the molecular mechanisms that govern the specificity of ubiquitin conjugation.

Concluding remarks

Our studies show that achieving specificity within a given pathway can be established by specific interactions between the enzymatic components of the conjugation machinery, as seen in the exclusive FANCL–Ube2T interaction. By contrast, where a broad spectrum of modifications is required, this can be achieved through association of the conjugation machinery with the common denominator, ubiquitin, as seen in the case of Parkin. There are many outstanding questions to understanding the mechanisms governing substrate selection and lysine targeting. Importantly, we do not yet understand what makes a particular lysine and/or a particular substrate a good target for ubiquitination. Subunits and co-activators of the APC/C multi-subunit E3 ligase complex recognize short, conserved motifs (D [221] and KEN [222] boxes) on substrates leading to their ubiquitination [223–225]. Interactions between the RING and E2 subunits reduce the available radius for substrate lysines in the case of a disordered substrate [226]. Rbx1, a RING protein integral to cullin-RING ligases, supports neddylation of Cullin-1 via a substrate-driven optimization of the catalytic machinery [227], whereas in the case of HECT E3 ligases, conformational changes within the E3 itself determine lysine selection [97]. However, when it comes to specific targets such as FANCI and FANCD2, how the essential lysine is targeted is unclear. Does this specificity rely on interactions between FA proteins? Are there inhibitory interactions that prevent modification of nearby lysines? One notable absence in our understanding of ubiquitin signalling is a ‘consensus’ ubiquitination motif. Large-scale proteomic analyses of ubiquitination sites have revealed the extent of this challenge, with seemingly no lysine discrimination at the primary sequence level in the case of the CRLs [228]. Furthermore, the apparent promiscuity of Parkin suggests the possibility that ubiquitinated proteins are the primary target of Parkin activity. It is likely that multiple structures of specific and promiscuous ligases in action will be required to understand substrate specificity in full.

To conclude, a few words about the issue of the sequence space not entirely traversed.

We have 2000  protein superfamilies that are completely unrelated at sequence level. That is  evidence that functional protein sequences are not bound to any particular region of the sequence space.

Moreover, neutral variation in non-coding and non-functional sequences can go in any direction, without any specific functional constraints. I suppose that neo-darwinists would recognize that part of the genome is non-functional, wouldn’t they? And we have already seen elsewhere (in the ubiquitin thread discussion) that many new genes arise from non-coding sequences.

So, there is no reason to believe that the functional space has not been traversed. But, of course, neutral variation can traverse it only at very low resolution.

IOWs, there is no reason that any specific part of the sequence space is hidden from RV. But of course, the low probabilistic resources of RV can only traverse different parts of the sequence space occasionally.

It’s like having a few balls that can move freely on a plane, and occasionally fall into a hole. If the balls are really few and the plane is extremely big, the balls will potentially be able to traverse all the regions of the plane, but they will pass through only a very limited number of possible trajectories. That’s why finding a very small hole will be almost impossible, wherever it is. And there is no reason to believe that small functional holes are not scattered throughout the sequence space, as protein superfamilies clearly show.
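
To put a rough number on this picture: if we idealize each trajectory as an independent uniform sample of the plane (a crude stand-in for a random walk, purely for illustration, with made-up parameters), the chance of at least one hit on a hole covering a fraction f of the plane after N attempts is 1 - (1 - f)^N, which stays negligible when f is as small as a complex functional island:

```python
import math

def p_at_least_one_hit(n_attempts, hole_fraction):
    """P(at least one hit) = 1 - (1 - f)^N, via log1p/expm1 to avoid underflow."""
    return -math.expm1(n_attempts * math.log1p(-hole_fraction))

# Purely illustrative numbers, not estimates of real biological parameters:
print(p_at_least_one_hit(1e40, 2.0 ** -500))   # ~3e-111: a 500-bit hole is essentially never found
print(p_at_least_one_hit(1e40, 2.0 ** -50))    # ~1.0: a 50-bit hole is found easily
```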

So, it’s not true that highly functional proteins are hidden in some unexplored treasure trove in the sequence space. They are there for anyone to find, in different and distant parts of the sequence space, but it is almost impossible to find them through a random walk, because they are so small.

And yet, 2000 highly functional superfamilies are there.

Moreover, the rate of appearance of new superfamilies is highest at the beginning of natural history (for example in LUCA), when a smaller part of the sequence space is likely to have been traversed, and decreases constantly, becoming extremely low in the last hundreds of millions of years. That’s not what you would expect if the problem of finding new functional islands were due to how much sequence space has been traversed, and if the sequence space were really so overflowing with potential naturally selectable functions, as neo-darwinists like to believe.

OK, that’s enough. As expected, this OP is very long. However, I think that it  was important to discuss all these partially related issues in the same context.

 

Comments
DATCG at #326: Thank you for the kind words, and for mentioning in detail some of the basic ideas clearly expressed by Abel and Trevors. I find their basic concepts really helpful. For example, the concept of configurable switches, and the clear distinction between descriptive information and prescriptive information, have helped me a lot, and I use those concepts very often in my discussions. You say: "We are Coded Beings, not crystals, not snowflakes." That sums it up nicely! :)

gpuccio
May 1, 2018, 11:24 PM PDT
Nonlin.org: Indeed, I don't want to persuade anyone. I just express my ideas. I believe that ID theory is true, and I try to explain why. Biological ID is about biological issues, so I am afraid that some specific biological understanding is required. And I don't want to discriminate against Paley: he is certainly a great guy! My point was only that his language and approach are those of a philosopher writing more than 200 years ago, and therefore they must be understood in that context. But I do believe that his metaphor about the watch remains a precious idea.

gpuccio
May 1, 2018, 11:19 PM PDT
Gpuccio@332 Ok, so now I understand your argument much better than before. We are on the same side, so I see no good "evolutionary" counterarguments. However, if I were neutral I would say that something seems to be missing and that your ideas are way too convoluted to be persuasive. And that is a big problem for a lot of the ID books out there. Are you publishing anywhere else? Writing a book? Because if you do, you need to do a much, much better job summarizing your argument and your defense against the Darwinist attacks. And you also need to write for the common person, not just for people that spent all their life in the biology lab. I hope this helps. Regarding your new comment @336, If you discriminate against Paley, why listen to Newton, Leibniz or Pythagoras for that matter? Newton has been overruled already in some areas. Not Paley (not yet)!Nonlin.org
May 1, 2018 at 06:10 PM PDT
Origenes: Frankly, I am not interested in an exegesis of Paley's writing. He had a great intuition, but he remains a philosopher of the eighteenth century, and his language is consistent with that. Our friend Nonlin.org seems happy to consider design as some form of law. After all, as you say, he is probably the only one who believes that way. I don't think he is damaging anyone by believing what he believes. And after all, he has faced a fair discussion here, and we must commend him for his honesty, if not for his clarity of thought! :)gpuccio
May 1, 2018 at 03:33 PM PDT
Nonlin.org: "Looks like this is the end of this road as we’ll not reach a common understanding. That’s OK. At least our positions are clear." That's fine with me! :)gpuccio
May 1, 2018 at 03:26 PM PDT
Nonlin: “Everyone” is misusing the word “law”. But W. Paley got the right idea in AD 1800 ...
Nonlin misunderstands what Paley is saying. He erroneously believes that Paley says that an agent is a "lawmaker."
Nonlin: “What you miss is that I create the laws of that gizmo. I am the lawmaker. My design are those laws, not some “configuration”.
But that is not at all what Paley is saying. Let's have a look:
Paley: And not less surprised to be informed, that the watch in his hand was nothing more than the result of the laws of metallic nature. It is a perversion of language to assign any law, as the efficient, operative cause of any thing. A law presupposes an agent; for it is only the mode, according to which an agent proceeds: it implies a power; for it is the order, according to which that power acts. Without this agent, without this power, which are both distinct from itself, the law does nothing; is nothing.
The law, says Paley, prescribes how the agent must proceed. The law sets the boundaries — what is possible and what is not — for an intelligent designer. The laws bring "order", like chess rules set boundaries for the chess player. Paley also claims that laws are inert on their own. Only when an agent wields his power do laws spring into action. But nowhere does Paley say that an agent makes laws or that "design is laws." That is all in Nonlin's imagination.Origenes
May 1, 2018 at 02:28 PM PDT
Gpuccio@331 Fair or unfair, a coin becomes part of the design when you (the intelligent agent) start using it (you probably disagree and that’s OK). You: “Here you are using the word “law” in a completely personal way, which does not correspond to what everyone means by a law of nature.” Yes! That’s the point and our standing disagreement! “Everyone” is misusing the word “law”. But W. Paley got the right idea in AD 1800: “It is a perversion of language to assign any law as the efficient, operative cause of anything. A law presupposes an agent; for it is only the mode according to which an agent proceeds; it implies a power; for it is the order, according to which that power acts. Without this agent, without this power, which are both distinct from itself, the law does nothing; is nothing.” I also say there’s no such thing as “universal natural laws”. And this is of course part of our disagreement. Looks like this is the end of this road as we’ll not reach a common understanding. That’s OK. At least our positions are clear.Nonlin.org
May 1, 2018 at 01:56 PM PDT
Nonlin.org at #329:
1.Isn’t your formula random? It could be –ln or –log10 or even straight Target space/Search space
-log2 is the formula used in information theory, including Shannon's. It is useful because it gives you the results in bits, the common unit for information. The target space/search space ratio is a probability. To transform it into a measure of information, you take -log2 of it, so you have positive bits instead of negative exponents. It's only a question of mathematical convenience. Randomness has nothing to do with it.
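For concreteness, here is a minimal sketch of that computation (the function name and the toy numbers are mine, chosen only for illustration):

import math

def functional_information(target_space, search_space):
    # Functional information in bits: -log2 of the target space / search space ratio.
    return -math.log2(target_space / search_space)

# Toy example: suppose 10^4 sequences implement the defined function
# out of all 20^100 possible 100-AA sequences.
print(functional_information(1e4, 20.0 ** 100))   # about 419 bits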
2.The stone example doesn’t work for many reasons. Can you select and go through a real world accessible biology example instead?
What about the alpha and beta chains of ATP synthase?
3.How do you know target space? What is a “good stone”?
Target space is simply the set of all objects in the system that can implement the function as observed and defined. Measuring it is the most difficult part, and is not always possible. For complex functions and big search spaces, the target space cannot be measured directly, because of the combinatorial barriers. But there are indirect methods to approximate it. In the case of functional proteins, I use, as explained many times, the indirect measure derived from sequence conservation through very long evolutionary times. Again, look at my arguments about the alpha and beta chains of ATP synthase. You can find a real evaluation of search space and target space in the case of the English language in this OP of mine: An attempt at computing dFSCI for English language https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/
4.How do you know “search space”? Is it “this area”? The whole continent? The world? The universe?
Not at all. The search space is the set of all possible objects that could reasonably be generated in the system, and it includes those which can implement the observed function (the target space). In the case of a functional protein, the best procedural choice is to define the search space as the set of all possible AA sequences of that length: it is of course an approximation, but a very reasonable one.
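As a rough numerical illustration of that approximation (the protein length here is an arbitrary choice of mine):

import math

L = 150                          # an assumed protein length, purely for illustration
search_space = 20 ** L           # all possible AA sequences of that length
print(math.log2(search_space))   # about 648 bits of raw sequence space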
5.For “complex” you say 30 bits and 500 bits somewhere else. But why? And both seem arbitrary. And what does that mean “30 bits”? Is it “complex if –log2(Target space/Search) > 2^30 (=1 billion)”?
Where have I said 30 bits? I don't understand. I always say that we must set a threshold which is appropriate for the system we are describing. The purpose of the threshold is to make the observed result really unlikely even after considering the probabilistic resources of the system. 500 bits is a good threshold in the general case, because it is big enough to make any result highly unlikely, even considering all the probabilistic resources of the known universe (Dembski's Universal Probability Bound). For biological objects on our planet, a lower threshold is more than enough. See my table at the beginning of my OP: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondescent.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ where I compute the probabilistic resources of the whole bacterial system on our planet as amounting (very, very generously) to 138.6 bits, at most. 30 bits is definitely too low to make a safe design inference for planetary scenarios. I do believe that 30-bit functions are probably designed in all cases, because the empirically observed threshold is probably somewhere around 4 - 5 AAs (about 20 bits). But again, I would not make a case for such simpler situations. We have, definitely, a lot of examples of hundreds and thousands of bits even if we only consider proteins.
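To put rough numbers on those thresholds (a sketch of mine, reusing the figures quoted above):

import math

bits_per_aa = math.log2(20)                    # ~4.32 bits per fully specified AA position
print(5 * bits_per_aa)                         # ~21.6 bits for about 5 specific AAs

# Probabilistic resources: a system that can attempt at most N states
# "consumes" log2(N) bits of improbability.
bacterial_resources_bits = 138.6               # the generous planetary estimate cited above
print(2 ** bacterial_resources_bits)           # ~5e41 attempts at most
print(2 ** (500 - bacterial_resources_bits))   # ~6e108: the shortfall for a 500-bit target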
6.What if “the function” can be accomplished without stones? What if it can be done with “bones” or “twigs” instead? What if “the function” can be broken into “simple functions”?
"What if" is not a good way of doing science. We must reason about observed facts, and specific systems. In biology, many functions are implemented by proteins, and only by proteins. Therefore, we observe functional proteins, and the protein search space. That's empirical science. Maybe we could build some ATP synthase using Lego bricks, who knows, but I would not spend my nights reasoning about those possibilities. The question: "What if “the function” can be broken into “simple functions”?" is, of course, more interesting. If that were true, we would have a ladder of functions. To be useful in biology, it must be a ladder of naturally selectable functions. But, of course, that is not true. A complex function is complex because it is not the simple sum of simpler functions. Of course there can be modules in complex functions, but the idea is that a function is complex if it requires more than 500 new specific bits, which did not exist before, to appear. It is not important if, beyond those 500 specific and new and original bits, it also uses old modules that already existed. So, a petrol car certainly uses wheels, like a cart, but the petrol engine was not present in the cart: it is a new, original function.gpuccio
May 1, 2018 at 09:42 AM PDT
Nonlin.org at #328: 1) I don't want to discuss Paley's language, of course. It does not correspond to what we use today, even if the ideas remain the same. An unfair coin can be designed or not. It can be unfair by chance, a production defect, or because someone uses it to win (design). We cannot distinguish the two conditions, because an unfair coin is a rather simple object. Therefore, both design systems and non-design systems can produce it (of course, a coin in itself is more complex: but, given coins, it is not so difficult that some of them can be unfair). My point was not to infer design for the coin, but for a sequence of all heads. It is an ordered sequence, but if it was produced with an unfair coin, no specific design intervention was necessary to generate the order. The laws of nature can generate some order in many cases, but they can never generate a contingent configuration with high functional specificity. 2) Cell division is certainly not a law of nature. It is a complex process, made possible by an extremely complex configuration of the structures involved. You must be very careful not to make such huge category errors. 3) You say: "What you miss is that I create the laws of that gizmo. I am the lawmaker. My design are those laws, not some “configuration”. The configuration is just the way I get my gizmo to implement my laws. And if you change my laws, so be it, but you can only do that because you are a designer too. You’re a lawmaker too." No. Here you are using the word "law" in a completely personal way, which does not correspond to what everyone means by a law of nature. You can use it that way, if you like, but you cannot conflate the two meanings. If I design a machine, the specific contingent configuration of the machine can implement my desired function, and of course the machine works according to the laws of nature, and according to the configuration of the machine itself, which uses those natural laws to achieve a specific result. If you want to call the function, or the functional configuration, a law, you are only playing with words. Of course the designer establishes the form of the machine, and how it will work, but that is not a law at all. So, I say that a designer is a design-maker, not a law-maker. Your language is confusing. You go on using the word "law-maker" for "designer". Again, you are just playing with words. The point is, a designer works with special configurations that use the universal natural laws in a specific way to attain desired results. The point is in the configuration, both in the case of machines and in the case of art. The "regularities" you speak of for a tablet, or any other human artifact, are not regularities that could emerge by natural laws. Tablets do not emerge by natural laws, even if we do not consider their computing functions. Neither do spoons or forks. Of course a designer implements specific forms (configurations) from his personal conscious representations to objects. Some are simpler and more "regular", others are more complex and contingent, but in all cases a configuration that would never arise spontaneously by law is intentionally generated by the designer. Even the things that you call "regularities" in designed tools are simply "configurable switches", and not the order that derives from natural laws. I stick to digital information, rather than to analog cases, because it's much easier to compute the functional information, and because most biological scenarios are about digital information.
But the general concept is the same in all cases.gpuccio
May 1, 2018 at 09:12 AM PDT
...and of course 2^30 = 1 billion not 1 million.Nonlin.org
April 30, 2018 at 04:06 PM PDT
gpuccio@322 Here are a few more questions regarding your “complex functional information”: 1. Isn’t your formula random? It could be –ln or –log10 or even straight Target space/Search space 2. The stone example doesn’t work for many reasons. Can you select and go through a real world accessible biology example instead? 3. How do you know target space? What is a “good stone”? 4. How do you know “search space”? Is it “this area”? The whole continent? The world? The universe? 5. For “complex” you say 30 bits and 500 bits somewhere else. But why? And both seem arbitrary. And what does that mean “30 bits”? Is it “complex if –log2(Target space/Search) > 2^30 (=1 million)”? 6. What if “the function” can be accomplished without stones? What if it can be done with “bones” or “twigs” instead? What if “the function” can be broken into “simple functions”? 7. Any other questions that you had to answer?Nonlin.org
April 30, 2018 at 02:08 PM PDT
gpuccio@322 I have yet to formalize my ideas in a coherent essay - it is on my “to do” list and this discussion with you is helping a lot. Thanks! If you don’t mind, here are a few more clarifying questions/comments: You say: “we must be extremely careful that order is not simply the result of law (like in the case of an unfair coin which gives a series of heads). Function, instead, when implemented by a specific contingent configuration, has no such limitations.” I just got a hold of Paley’s book and what do you know, right on page 8 he cautions us against assuming this and that law as given: https://babel.hathitrust.org/cgi/pt?id=mdp.39015005472033;view=1up;seq=20;size=75 (and this is as far as I got). And is the unfair coin you mention not a perfect example of a design indistinguishable from law? Because someone created that unfair coin, right? And don’t they say your function (say cell division) is really just a law of nature? On “determinism” I just read the internet definition and search hits – not a big deal to me but definitely confusing. You say: “The design, again, is certainly based on understanding of laws, and operates using laws: the light turn on powered by solar energy because you arranged things for that to happen. It’s the configuration that counts, and the configuration is there because you designed it. Gizmos don’t go in orbit with solar cells and all the rest because some law makes that happen spontaneously. Moreover, I could reach your gizmo and change the design in it. I could arrange things so that the light goes on only when the moon is visible. And the gizmo would go on that other way, after my explicit design intervention.” What you miss is that I create the laws of that gizmo. I am the lawmaker. My design are those laws, not some “configuration”. The configuration is just the way I get my gizmo to implement my laws. And if you change my laws, so be it, but you can only do that because you are a designer too. You’re a lawmaker too. Yes, I am conflating design with laws because that’s what design is – lawmaking! And you can also see this in functionless art that nonetheless clearly shows me to be Rembrandt the lawmaker, not Leonardo the lawmaker. And if you are Paley in 1800 and find my Samsung tablet, you can clearly see it is designed without observing any function other than paperweight (because you don’t have electricity and know nothing about modern technology). This is especially relevant to biology where we still can’t identify many functions. By my method, he will know the tablet was designed even though by yours he won’t (“false negative”). And how do we search for extraterrestrial life? Function or no function, when we’ll see the object’s regularities (its laws) we’ll know it was designed. We’ll see how my arguments fare when I publish. I am not necessarily disputing the “ID theory” - just looking for something more convincing and simpler. Darwinistas invoking NS is simply retard as the whole idea of NS is bogus: http://nonlin.org/natural-selection/Nonlin.org
April 30, 2018 at 11:48 AM PDT
DATCG:
Gpuccio, Your patience is admirable.
Seconded.LocalMinimum
April 30, 2018 at 07:56 AM PDT
Gpuccio, Your patience is admirable. For readers that happen by and see this post and others, they will get a very clear picture of what is wrong with Determinism and Laws as an only answer for life, as well as the lack of reasoning and assumptions for unguided, blind events being the reason for life. Simply not true on multiple levels of Code. The problem is to many equate Simplified Order with Function. But Function relies on Specified Organization and semiotic language - Code - involved with arbitrary assignments of variables called by functional systems that interact and communicate with each other. You cannot have Error Correction based on Law alone. The Rules, instructions and interpretations are not made by law alone. This is a semiotic system. Gpuccio, I'll go back to Three Subsets of Information. Hope that's OK to add here after so much work on your behalf. And for others, so readers can see another well written explanation of these concepts for Functional Sequence Complexity and Prescriptive Information. Not merely law and Order sequencing, but by Organization and Design. So readers can understand the differences and limitations of Random and Ordered Sequences to produce life... Three Subsets of sequence complexity and their relevance to biopolymeric Information https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/ David Abel and Jack Trevors 2005
In life-origin science, attention usually focuses on a theorized pre-RNA World [52-55]. RNA chemistry is extremely challenging in a prebiotic context. Ribonucleotides are difficult to activate (charge). And even oligoribonucleotides are extremely hard to form, especially without templating. The maximum length of such single strands in solution is usually only eight to ten monomers (mers). As a result, many investigators suspect that some chemical RNA analog must have existed [56,57]. For our purposes here of discussing linear sequence complexity, let us assume adequate availability of all four ribonucleotides in a pre-RNA prebiotic molecular evolutionary environment. Any one of the four ribonucleotides could be polymerized next in solution onto a forming single-stranded polyribonucleotide. Let us also ignore in our model for the moment that the maximum achievable length of aqueous polyribonucleotides seems to be no more than eight to ten monomers (mers). Physicochemical dynamics do not determine the particular sequencing of these single-stranded, untemplated polymers of RNA. The selection of the initial "sense" sequence is largely free of natural law influences and constraints. Sequencing is dynamically inert[58]. Even when activated analogs of ribonucleotide monomers are used in eutectic ice, incorporation of both purine and pyrimidine bases proceed at comparable rates and yields [59]. Monnard's paper provides additional evidence that the sequencing of untemplated single-stranded RNA polymerization in solution is dynamically inert – that the sequencing is not determined or ordered by physicochemical forces. Sequencing would be statistically unweighted given a highly theoretical "soup" environment characterized by 1) equal availability of all four bases, and 2) the absence of complementary base-pairing and templating (e.g., adsorption onto montmorillonite). Initial sequencing of single-stranded RNA-like analogs is crucial to most life-origin models. Particular sequencing leads not only to a theorized self- or mutually-replicative primary structure, but to catalytic capability of that same or very closely-related sequence. One of the biggest problems for the pre-RNA World model is finding sequences that can simultaneously self-replicate and catalyze needed metabolic functions. For even the simplest protometabolic function to arise, large numbers of such self-replicative and metabolically contributive oligoribonucleotides would have to arise at the same place at the same time. Little empirical evidence exists to contradict the contention that untemplated sequencing is dynamically inert (physically arbitrary). We are accustomed to thinking in terms of base-pairing complementarity determining sequencing. It is only in researching the pre-RNA world that the problem of single-stranded metabolically functional sequencing of ribonucleotides (or their analogs) becomes acute. And of course highly-ordered templated sequencing of RNA strands on natural surfaces such as clay offers no explanation for biofunctional sequencing. The question is never answered, "From what source did the template derive its functional information?" In fact, no empirical evidence has been presented of a naturally occurring inorganic template that contains anything more than combinatorial uncertainty. No bridge has been established between combinatorial uncertainty and utility of any kind. It is difficult to polymerize even activated ribonucleotides without templating. 
Eight to ten mers is still the maximum oligoribonucleotide length achievable in solution. When we appeal to templating as a means of determining sequencing, such as adsorption onto montmorillonite, physicochemical determinism yields highly ordered sequencing (e.g., polyadenines)[60]. Such highly-ordered, low-uncertainty sequences retain almost no prescriptive information. Empirical and rational evidence is lacking of physics or chemistry determining semantic/semiotic/biomessenger functional sequencing. Increased frequencies of certain ribonucleotides, CG for example, are seen in post-textual reference sequences. This is like citing an increased frequency of "qu" in post-textual English language. The only reason "q" and "u" have a higher frequency of association in English is because of arbitrarily chosen rules, not laws, of the English language. Apart from linguistic rules, all twenty-six English letters are equally available for selection at any sequential decision node. But we are attempting to model a purely pre-textual, combinatorial, chemical-dynamic theoretical primordial soup. No evidence exists that such a soup ever existed. But assuming that all four ribonucleotides might have been equally available in such a soup, no such "qu" type rule-based linkages would have occurred chemically between ribonucleotides. They are freely resortable apart from templating and complementary binding. Weighted means of each base polymerization would not have deviated far from p = 0.25. When we introduce ribonucleotide availability realities into our soup model, we would not expect hardly any cytosine to be incorporated into the early genetic code. Cytosine is extremely difficult even for highly skilled chemists to generate [61,62]. If an extreme paucity of cytosine existed in a primordial environment, uncertainty would have been greatly reduced. Heavily weighted means of relative occurrence of the other three bases would have existed. The potential for recordation of prescriptive information would have been reduced by the resulting low uncertainty of base "selection." All aspects of life manifest extraordinarily high quantities of prescriptive information. Any self-ordering (law-like behavior) or weighted-mean tendencies (reduced availability of certain bases) would have limited information retention. If non-templated dynamic chemistry predisposes higher frequencies of certain bases, how did so many highly-informational genes get coded? Any programming effort would have had to fight against a highly prejudicial self-ordering dynamic redundancy. There would have been little or no uncertainty (bits) at each locus. Information potential would have been severely constrained. Genetic sequence complexity is unique in nature "Complexity," even "sequence complexity," is an inadequate term to describe the phenomenon of genetic "recipe." Innumerable phenomena in nature are self-ordered or complex without being instructive (e.g., crystals, complex lipids, certain polysaccharides). Other complex structures are the product of digital recipe (e.g., antibodies, signal recognition particles, transport proteins, hormones). Recipe specifies algorithmic function. Recipes are like programming instructions. They are strings of prescribed decision-node configurable switch-settings. If executed properly, they become like bug-free computer programs running in quality operating systems on fully operational hardware. The cell appears to be making its own choices. 
Ultimately, everything the cell does is programmed by its hardware, operating system, and software. Its responses to environmental stimuli seem free. But they are merely pre-programmed degrees of operational freedom.
I hope readers get a glimpse of truth from the preceding document parts shared on why Random and/or Ordered Sequences alone cannot account for life. It takes a Code. Error-Correction cannot operate blindly, without Prescribed Knowledge. During replication we see... a) Proofreading: what to Monitor, Identify, and locate b) Edit, correct, replace damaged information c) Mismatch: corrects base mispairings, identify, cut, replace, and actually seals the gap back up at cut/replacement area d) if not repairable, apoptosis - designated cell death After replication, DNA can still be damaged. In this case Enzymes come to the rescue: Direct Reversal - reverse a reaction error back to base Base Repair - Again, Identify, cut, remove damage base and then replace with correct base, seal up the gap. Ha! Amazing. Error Correction = Design Functionality based upon prescribed information to verify and replace with correct replacement information. This lets us know the system is a communicative networked system of coded branches, loops, gotos, If-Thens, and sub-routine calls based upon decision making, processing nodes. Not done by determinant laws. In that case, you would need a specific Law for every type of different damage, signal, interaction, processing and... geesh, the Laws would crush each other. Or one Law would be undone, while the other functioned. They would cross each other off. It would be total chaos. Information requires rules based, language interface, translation and syntax structure. This is what we see. It is why they name it, call it, decipher it - The Code of Life. Encoded, multiple Codes and layers on top of another. An epigenetic Code of Regulation and Functional systems monitor, identify, organize, direct, edit, splice, sensor, recognize, send, interpret, respond, locate, correct, or heal damage. If it were merely law-like determinism, you would have Laws for each Code, each action, each reaction. Enumerable laws. Laws are general purpose qualifiers and do not create intimate, interactive control systems. This is a category mistake of pressing Order into Organization and Function. Order does not create algorithmic, programmatic functionality. It merely attains simplified, repeated sequences, like water crystals - snowflakes. The fact a code exist, multiple Codes(like Ubquitin Code - see Gpuccio's other great post) shoots down law as an explanation of life and functionally organized sequence space. For an excellent read and OP by Gpuccio on the Ubiquitin System, see here: The Ubiquitin System: Functional Complexity and Semiosis joined together. In the OP, Gpuccio post many scientific papers showing how the Ubiquitin Code Tags, Reads, Writes, Erases and does accurate Post-Translation-Modifications. None of this by blind, random processing, or by law-only, but by Code. Even Darwinist and evolutionist all recognize we have multiple Codes. If it were all deterministic, law-like order, it would shock the world of scientist working on Code who daily decipher codes in the Genome and Epigenome. We are Coded Beings, not crystals, not snowflakes. The information stored in our cells must be compressed, decompressed, transcribed, translated, proofread, error-checked, error-corrected, modified-if-tagged, and finally codified and docked with other proteins in a complex working and organizing system of multiple functions by the tens of thousands in Eukaryotes. Each constantly monitored for damage, communicated for repair or tagged for death and recycling, trillions of times a day. 
It is unlike any other code we know in the world and programmers only wish they could reproduce it's efficiency. Just ask Bill Gates and others. We are life with a free will to consider big questions about Life, the Universe and Everything ;-) https://www.youtube.com/watch?v=aboZctrHfK8 Now, about that Ubiquitin Code, as posted by Gpuccio, here is a another paper he posted(Upright BiPed enjoyed this). ...the lingua franca of cellular communication. The E2 ubiquitin-conjugating protein Ube2V2. "...a Rosetta Stone Bridging Redox and Ubiquitin Codes, Coordinating DNA Damage Responses" So not only is Damaged DNA recognized, but the response system is tightly controlled, highly organized and coordinated by a complete system regulator that tags proteins with specific markups. :) wow... amazing, Functional Sequence Complexity and it's happening at nano levels in billions to trillions of cells daily at incredible speeds. But I repeat myself. To know the answers, you have to ask the right questions. Otherwise, the assumptions turn up "Junk." Scientist are not turning up Laws in this De-coding work they do on a daily basis, although some laws are discovered and exist. The bulk of their work however is deciphering Code in our DNA, regulator Code, tagging code, sugar codes and more Codes. From evolutionist and Code Biology - Barberi and others, I blockquoted some more information on the many Codes... Coding Rules... they are Arbitrary - not dictated by Laws and...
What is essential in all codes is that the coding rules, although completely compatible with the laws of physics and chemistry, are not dictated by these laws.
and...
The key point is that there is no deterministic link between codons and amino acids since it has been shown that any codon can be associated with any amino acid (Schimmel 1987; Schimmel et al. 1993).
Like this OP, Gpuccio does excellent work in responding to people's questions on the Ubiquitin Code. It's a great read alone as an OP, but the comments expose and unveil exquisite details of how "...functional complexity and semiosis" are "joined together" in the Ubiquitin System.DATCG
April 30, 2018 at 07:35 AM PDT
GPuccio @324 Scientism is well described by Rosenberg in 'The Atheist's Guide to Reality', Ch. 2. Note the implicit philosophical determinism.
THE NATURE OF REALITY: THE PHYSICAL FACTS FIX ALL THE FACTS IF WE’RE GOING TO BE SCIENTISTIC, THEN WE HAVE to attain our view of reality from what physics tells us about it. Actually, we’ll have to do more than that: we’ll have to embrace physics as the whole truth about reality. Why buy the picture of reality that physics paints? Well, it’s simple, really. We trust science as the only way to acquire knowledge. That is why we are so confident about atheism. The basis of our confidence, ironically, is the fallibility of scientists as continually demonstrated by other scientists. In science, nothing is taken for granted. Every significant new claim, and a lot of insignificant ones, are sooner or later checked and almost never completely replicated. More often, they are corrected, refined, and improved on—assuming the claims aren’t refuted altogether. Because of this error-reducing process, the further back you go from the research frontier, the more the claims have been refined, reformulated, tested, and grounded. Grounded where? In physics. Everything in the universe is made up of the stuff that physics tells us fills up space, including the spaces that we fill up. And physics can tell us how everything in the universe works, in principle and in practice, better than anything else. Physics catalogs all the basic kinds of things that there are and all the things that can happen to them. The basic things everything is made up of are fermions and bosons. That’s it. ... There is no third kind of subatomic particle. And everything is made up of these two kinds of things. Roughly speaking, fermions are what matter is composed of, while bosons are what fields of force are made of. Fermions and bosons. All the processes in the universe, from atomic to bodily to mental, are purely physical processes involving fermions and bosons interacting with one another. Eventually, science will have to show the details of how the basic physical processes bring about us, our brain, and our behavior. But the broad outlines of how they do so are already well understood.
Origenes
April 30, 2018 at 05:45 AM PDT
Origenes: I agree with you. But scientism is not science. Indeed, it is an anti-scientific philosophy.gpuccio
April 30, 2018 at 05:11 AM PDT
GPuccio: … I think that almost all the events that science studies at the macroscopic level, those that are well described by classical physics, are deterministic …
It seems to me that scientism — science as the only begetter of truth — assumes that all events are deterministic. Full-fledged determinism — the idea of a causally closed physical world — is presupposed by scientism/naturalism. As I have argued many times on these pages, philosophical determinism is incompatible with rationality. In short, if all our thoughts and actions are determined by entities beyond our control, then we are not rational.
GPuccio: The interventions of consciousness on matter are a possible, interesting exception. If, as I (and many others) believe the interface between consciousness and matter is at quantum level, that would allow the action of consciousness to modify matter without apparently interfering with gross determinism. That would also explain how design takes place.
At the quantum level there is no causal closure, so this is where the spiritual — intelligent design — can “break in.”Origenes
April 30, 2018 at 05:00 AM PDT
Nonlin.org at #317:
Wow! How can I argue with you when you’re burying me under so many big words?
Maybe it's you who inspire me to write so much! :) Look, I really like your creativity of thought and you are a very honest discussant. And I agree with many things that you say, but I also strongly disagree with others. So, it's not that I like to contradict you, but when your creative thoughts begin to deny the essence of ID theory, which I deeply believe to be true, I feel that I have to provide my counter-arguments. In the end, I am happy that you keep your ideas, and I will keep mine. So, just to clarify what could still be not completely clear: 1. Of course. But the definition of design is one thing, the inference of design from an observed object is another. I use consciousness to define design. And I infer design from objective properties of the observed object. Again, you seem to conflate two different concepts. 2. I say that both order and function can be valid specifications. However, in the case of order we must be extremely careful that the order is not simply the result of law (like in the case of an unfair coin which gives a series of heads). Function, instead, when implemented by a specific contingent configuration, has no such limitations. Moreover, in biology it's definitely function that we use to infer design, and not order. In the case of the watch, as explained, the order of the parts and the function of measuring time can both be used to infer design, but the inference based on function is much stronger, and it implies, as necessary, the "order" of the parts. 3. No. I use determinism correctly. You use the word to mean "a worldview where only determinism exists". Again, you conflate a concept (determinism), which can be applied to specific contexts, with a philosophy (a merely deterministic view of reality), which is quite another thing. 4a. No, your object is designed and it's the design in it that continues to operate. The design, again, is certainly based on an understanding of laws, and operates using laws: the light turns on, powered by solar energy, because you arranged things for that to happen. It's the configuration that counts, and the configuration is there because you designed it. Gizmos don't go into orbit with solar cells and all the rest because some law makes that happen spontaneously. Moreover, I could reach your gizmo and change the design in it. I could arrange things so that the light goes on only when the moon is visible. And the gizmo would go on that other way, after my explicit design intervention. A watch goes on measuring time after it has been designed, without any other conscious intervention (at least as long as it has the energy to do that). Again, you are conflating design with law. But you are wrong. b. There is nothing poorly defined: I have given the full definitions many times, for example at #167, #199, #200. Functional information: the number of bits necessary to implement a function: -log2 of the target space/search space ratio. Complex: if it is more than an appropriate threshold: in the general case, 500 bits. c. For any function that can be implemented by an object we can measure functional information. If that measure is more than 500 bits, for any defined function, we can infer design. We are not trying to divine the intentions of the designer. We reason on what we observe. If a complex function is there, it is designed: maybe that function was the real purpose of the designer, or it is only part of some other purpose and function. It is not important: the function is there, and it has been designed, if it is complex enough. d. What is it that you don't understand? If, say, 200 AAs must be exactly what they are, because otherwise the function is lost, then you have more than 800 bits of functional information (4.3 x 200). e. If the best function that I can imagine and define for the computer is being used as a paperweight, then I will not infer design for it, because that function is simple. It will be a false negative, like many others. f. As explained by Origenes, it seems that only you are making that specific objection. The best that darwinists do is invoke NS, which is a very indirect process with a necessity component in it, but certainly not a law. And, of course, NS cannot do what they think it does. Do you think that darwinists are more disturbed by your arguments than by ID's arguments (including mine)? Maybe, but I would not bet on it. So, in the end, you can keep your views. No problem. But your views are not compatible with ID theory, not as you express them. As for me, I stick with ID theory. Your views are certainly interesting, but in many respects simply wrong.gpuccio
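Just to make points b-d concrete, here is a minimal sketch (the function name and the rounding are mine, added purely for illustration):

import math

BITS_PER_AA = math.log2(20)          # ~4.32 bits per strictly conserved AA position

def functional_bits(conserved_positions, threshold_bits=500):
    # Rough functional information from fully specified AA positions,
    # compared with the chosen threshold for a design inference.
    fi = conserved_positions * BITS_PER_AA
    return fi, fi > threshold_bits

print(functional_bits(200))          # (about 864 bits, True): the 200-AA example above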
April 29, 2018 at 07:19 AM PDT
I need to add @ 316 that the procedural geometry algorithms would be primitive and primitive strip building and welding and such. You could of course make procedural generation that only produced convex polytopes; but then you'd be implying that every polymer or mass of tissue that could be encoded by DNA was selectively positive.LocalMinimum
April 29, 2018 at 06:34 AM PDT
Nonlin @
But let me guess: you still don’t get any buy-in from the Darwinistas :) They still say “no, what looks like a function to you is just a law of nature”, right?
Uh, no. At this forum we have seen a lot of crazy and confused arguments from "Darwinistas", but never this one — probably because biological functions do not resemble laws of nature at all.Origenes
April 28, 2018 at 04:06 PM PDT
NonLin: as was discussed repeatedly above and over years, it is a fairly common challenge to have to identify something as designed without direct access to the designing agent. This is routinely done by applying a type of inductive reasoning often seen in the sciences, inference to the best empirically based explanation. Here, by establishing reliable signs of design. when such are observed we are warranted to inductively infer design as cause. In this case, various forms of functionally specific, complex organisation and associated information are such signs, backed by a trillion member observation base and the associated blind search challenge in configuration spaces. Kindly see the onward thread here: https://uncommondescent.com/intelligent-design/what-is-design-and-why-is-it-relevant/ To overturn such inference, one would need an observed counter example of FSCO/I beyond relevant thresholds observed to originate by blind chance and/or mechanical necessity. On the trillion member observation base, that has not been done. All of this accords with Newton's vera causa principle that explanations should be based on causes seen to be adequate to cause effects. Yes, actually observed. The so-called methodological naturalism principle unjustifiably sets this aside and ends up begging the question. KF PS: The common objection that cell based life reproduces does not apply to the root of the tree of life, origin of the von Neumann, coded information using kinematic self-replicator is antecedent to reproduction and is a case of FSCO/I. PPS: As a concrete example, notice how functional text is based on particular components arranged in a specific, meaningful order. Likewise, how parts are arranged in any number of systems, including biological as well as technological ones. Disordering that arrangement beyond a narrow tolerance often disrupts function. This is the island of function phenomenon. Such is anything but meaningless.kairosfocus
April 28, 2018 at 03:50 PM PDT
1. Your definition of design might be simple, but we also need to identify design when we cannot observe the agent and his/her “conscious representation”.
We do it all of the time. Did you have a point?
“Complex functional information” – three words poorly defined and combined to mean something to you, but nothing to me.
They are only "poorly defined" to the willfully ignorant.ET
April 28, 2018 at 03:01 PM PDT
gpuccio@307 Wow! How can I argue with you when you’re burying me under so many big words? :) Let me try to answer just a few of your points: 1. Your definition of design might be simple, but we also need to identify design when we cannot observe the agent and his/her “conscious representation”. 2. Sorry, I did not read Paley’s original argument so can’t comment directly on yours versus his. This is just a summary: “Paley tells of how if one were to find a watch in nature, one would infer a designer because of the structure, order, purpose, and design found in the watch.” I say “structure (=order) is enough” while you seem to say “purpose”. 3. Determinism has a certain definition everyone knows. Maybe you should use a different word if you mean something else. 4. And the main disagreement is… your claim: “Design is absolutely different, and distinguishable, from law.” 4 a. You say: “laws operate without any conscious intervention, as far as we can observe”. What if I design and send into orbit a gizmo with a light that turns On whenever the sun is in sight (powered from solar energy)? Can you see this is a law that operates without any conscious intervention 100 and 1000 years from now? b. “Complex functional information” – three words poorly defined and combined to mean something to you, but nothing to me. How can I reply? And “harnessing of specific contingent configurations” doesn’t help. c. You: “We don’t need the agent to assess function. ATP synthase can build ATP from a proton gardient in the cell”. Yes, but that seems a mechanism, not the function. In my example above, how do I know when to turn on the light? By detecting the sun rays via some mechanism. But the function of the gizmo is likely different and only the designer knows it. And what about my older example of a nonfunctional sculpture of a watch? That’s just esthetic and certainly cannot measure time but it’s still designed. d. I don’t understand what you mean: “how many specific bits (in terms of necessary AA positions) are needed for ATP synthase to work as it works?” e. You: “Instead, the complexity linked to the computer function (a function that our object can certainly implement) is very high”. But say you discover this computer cca. 1800 so you know nothing about computers. How do you do your analysis? At that time the computer looks like a paper weight at best. f. You: “If we can show even one single explicitly defined function that the object can implemet as defined, and which is complex (see next point for that), we can safely infer that the object is designed.” Perhaps. But let me guess: you still don’t get any buy-in from the Darwinistas :) They still say “no, what looks like a function to you is just a law of nature”, right? Ok. Looks like you have your method and I have mine which is much simpler... and simplicity matters as the “selfish gene” and “natural selection” soundbites show... and I account for designed art while you don't... and you might believe the laws of nature are never changing under any circumstances, but who the heck am I to tell God: "don't walk on water because of gravity"?Nonlin.org
April 28, 2018 at 02:23 PM PDT
gpuccio @ 311: Thank you. We could extend the illustration by having selectable functionality be analogized by closed volumes/unions of convex polytopes (which could also be selectable by an artist). In this case, more complex configurations could be stored in more ways in the geometry buffer, i.e. the more there is to draw the more ways there is to draw it (in order if nothing else)...however, each additional vertex/draw order index can be configured to produce far more degenerate geometries (inconsistent winding orders, open shapes/shapes with unenclosed volume). Thus, the ratio of configurations that produce clean, properly closed volumes to that which produces half-invisible junk is well below unity for each additional component, and thus the relative growth of configuration space/shrinking of functionality as terms are added. We could also knock this back a level of emergence, change the domain/codomain/mapping function from the geometry data(physical config)/rendered volume(function)/shaders(physical law w/r to biological ops) to procedural geometry generation parameters(DNA)/geometry data(physical config)/procedural geometry generation algorithm(chemistry/physics w/r to emergence of DNA encoded processes) and see the same, i.e. that the number of ways to encode a structure may grow, but the functional/non-functional ratio being below unity results in shrinking targets. I expect it's pretty easy to see this shrinkage to be transitive given mapping by both of these functions or their properly ordered composite as a relation. Thus it's also true, and amplified, when mapping DNA directly to biological function.LocalMinimum
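A minimal numeric sketch of that point (the per-component ratio is an arbitrary toy value of mine, not a model of any real geometry pipeline): if each added component is "clean" with probability r < 1, the functional fraction of configuration space shrinks geometrically as components are added.

r = 0.5                        # assumed per-component ratio of clean configurations
for n in (10, 50, 100, 200):
    print(n, r ** n)           # 10 -> ~1e-3, 100 -> ~8e-31, 200 -> ~6e-61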
April 28, 2018 at 08:22 AM PDT
ET, search challenge delivers as close a disproof as an empirically based, inductive case gets. Searching 1 in 10^60 or worse of a config space (on generous terms) and hoping to find not one but a large number of deeply isolated needles, is not going to work. In short he demands to infer to statistical miracle in the teeth of the same general sort of statistical challenge that grounds the statistical form of the second law of thermodynamics. KFkairosfocus
April 28, 2018 at 08:10 AM PDT
Rumrat is over on TSZ not only equivocating but asking us to prove a negative- we need to prove that evolution cannot produce 500 bits of CSI. It isn't about evolution- see comment 308, follow and read the essay linked to. And evos are saying that evolution by means of natural selection and drift (blind and mindless processes) produced the diversity of life. That means the onus is on them to demonstrate such a thing. However they are too pathetic to understand that.ET
April 28, 2018 at 06:36 AM PDT
KF: Thank you. Very good work! :)gpuccio
April 28, 2018 at 05:29 AM PDT
I decided to headline the just above on defining design: https://uncommondescent.com/intelligent-design/what-is-design-and-why-is-it-relevant/ KFkairosfocus
April 28, 2018 at 04:27 AM PDT
LocalMinimum: Good thoughts. In the end, the concept of contingent configurations linked to the implementation of a function is simple enough. Contingent configurations are those configurations that are possible according to the operating laws. Choosing a specific contingent configuration that can implement a desired function is an act of design. If we can only observe the object, and not the design process, only the functional complexity, IOWs the utter improbability of the observed functional configuration, can allow a design inference. Simple contingent configurations can implement simple functions. But only highly specific contingent configurations can implement complex functions. Highly specific contingent and functional configurations are always designed. There is no counterexample in the whole known universe.gpuccio
April 28, 2018 at 04:13 AM PDT
H'mm, it seems the definition of design is up again as an issue. The simplest summary I can give is: intelligently directed configuration, or if someone does not get the force of "directed," we may amplify slightly: intelligently, intentionally directed configuration. This phenomenon is a commonplace, including the case of comments or utterances by objectors; that is, the attempted denial or dismissal instantly manifests the phenomenon. Going further, we cannot properly restrict the set of possible intelligences to ourselves or our planet or even the observed cosmos, starting with the common factor in these cases: evident or even manifest contingency of being. Bring to bear that a necessary being world-root is required to answer to why a contingent world is given that circular cause and a world from utter non-being (which hath not causal power) are both credibly absurd and we would be well advised to ponder the possibility of an intelligent, intentional, designing necessary being world-root given the fine tuning issue. The many observable and empirically well-founded signs of design manifest in the world of life (starting with alphanumeric complex coded messages in D/RNA and in associated execution machinery in the cell) joined to the fine tuning of a cosmos that supports such C-Chemistry, aqueous medium cell based life suggests a unity of purpose in the evident design of cosmos and biological life. Taken together, these considerations ground a scientific project and movement that investigates, evaluates and publishes findings regarding such signs of design. Blend in the issues of design detection and unravelling in crypography, patterns of design in computing, strategic analysis, forensics and TRIZ the theory of inventive problem solving (thus also of technological evolution) and we have a wide-ranging zone of relevance. KFkairosfocus
April 28, 2018 at 03:57 AM PDT