
Defending Intelligent Design theory: Why targets are real targets, probabilities real probabilities, and the Texas Sharp Shooter fallacy does not apply at all.


The aim of this OP is to discuss, in some order and with some completeness, a few related objections to ID theory, all of them connected in some way to the argument known as the Texas Sharp Shooter fallacy, which is sometimes used as a criticism of ID.

The argument that the TSS fallacy is a valid objection against ID has been presented many times by DNA_Jock, a very good discussant from the other side. So, I will refer in some detail to his arguments, as I understand and remember them. Of course, if DNA_Jock thinks that I am misrepresenting his ideas, I am ready to acknowledge any correction on that point. He can post here, if he can and likes, or at TSZ, where he is a contributor.

However, I think that the issues discussed in this OP are of general interest, and that they touch some fundamental aspects of the debate.

As a help to those who read this, I will sum up the general structure of this OP, which will probably be rather long. I will discuss three different, somewhat related arguments. They are:

a) The application of the Texas Sharp Shooter fallacy to ID, and why that application is completely wrong.

b) The objection of the different possible levels of function definition.

c) The objection of the possible alternative solutions, and of the incomplete exploration of the search space.

Of course, the issue debated here is, as usual, the design inference, and in particular its application to biological objects.

So, let’s go.

a) The Texas Sharp Shooter fallacy and its wrong application to ID.

 

What’s the Texas Sharp Shooter fallacy (TSS)?

It is a logical fallacy. I quote here a brief description of the basic metaphor, from RationalWiki:

The fallacy’s name comes from a parable in which a Texan fires his gun at the side of a barn, paints a bullseye around the bullet hole, and claims to be a sharpshooter. Though the shot may have been totally random, he makes it appear as though he has performed a highly non-random act. In normal target practice, the bullseye defines a region of significance, and there’s a low probability of hitting it by firing in a random direction. However, when the region of significance is determined after the event has occurred, any outcome at all can be made to appear spectacularly improbable.

For our purposes, we will use a scenario where specific targets are apparently shot by a shooter. This is the scenario that best resembles what we see in biological objects, where we can observe a great number of functional structures, in particular proteins, and we try to understand the causes of their origin.

In ID, as is well known, we use functional information as a measure of the improbability of an outcome. The general idea is similar to Paley’s argument for a watch: a very high level of specific functional information in an object is a very reliable marker of design.

But to evaluate functional information in any object, we must first define a function, because the measure of functional information depends on the function defined. And the observer must be free to define any possible function, and then measure the linked functional information. Provided these premises are respected, the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information) for an explicitly defined function (whatever it is), we can safely infer design.
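
For readers who want the formula behind the numbers used later in this OP, this is the usual way functional information is written (a sketch in my own notation, not part of the original argument, but consistent with how the computations below are carried out):

FI(F) = -log2( |T(F)| / |S| )

where S is the search space (all possible sequences of the given length) and T(F) is the subset of sequences that implement the explicitly defined function F at the defined level.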

Now, the objection that we are discussing here is that, according to some people (for example DNA_Jock), by defining the function after we have observed the object, as we do in ID theory, we are committing the TSS fallacy. I will show why that is not the case using an example, because examples are clearer than abstract words.

So, in our example, we have a shooter, a wall which is the target of the shooting, and the shooting itself. And we are the observers.

We know nothing of the shooter. But we know that a shooting takes place.

Our problem is:

  1. Is the shooting a random shooting? This is the null hypothesis

or:

  2. Is the shooter aiming at something? This is the “aiming” hypothesis

So, here I will use “aiming” instead of design, because my neo-darwinist readers will probably feel more relaxed. But, of course, aiming is a form of design (a conscious representation outputted to a material system).

Now I will describe three different scenarios, and I will deal in detail with the third.

  1. First scenario: no fallacy.

In this case, we can look at the wall before the shooting. We see that there are 100 targets painted in different parts of the wall, rather randomly, with their beautiful colors (let’s say red and white). By the way, the wall is very big, so the targets are really a small part of the whole wall, even if taken together.

Then, we witness the shooting: 100 shots.

We go again to the wall, and we find that all 100 shots have hit the targets, one per target, and just at the center.

Without any worries, we infer aiming.

I will not compute the probabilities here, because we are not really interested in this scenario.

This is a good example of pre-definition of the function (the targets to be hit). I believe that neither DNA_Jock nor any other discussant will have problems here. This is not a TSS fallacy.

  2. Second scenario: the fallacy.

The same setting as above. However, we cannot look at the wall before the shooting. No pre-specification.

After the shooting, we go to the wall and paint a target around each of the different shots, for a total of 100. Then we infer aiming.

Of course, this is exactly the TSS fallacy.

There is a post-hoc definition of the function. Moreover, the function is obviously built (painted) to correspond to the information in the shots (their location). More on this later.

Again, I will not deal in detail with this scenario because I suppose that we all agree: this is an example of TSS fallacy, and the aiming inference is wrong.

  3. Third scenario: no fallacy.

The same setting as above. Again, we cannot look at the wall before the shooting. No pre-specification.

After the shooting, we go to the wall. This time, however, we don’t paint anything.

But we observe that the wall is made of bricks, small bricks. Almost all the bricks are brown. But there are a few that are green. Just a few. And they are randomly distributed in the wall.

 

 

We also observe that all the 100 shots have hit green bricks. No brown brick has been hit.

Then we infer aiming.

Of course, the inference is correct. No TSS fallacy here.

And yet, we are using a post-hoc definition of function: shooting the green bricks.

What’s the difference with the second scenario?

The difference is that the existence of the green bricks is not something we “paint”: it is an objective property of the wall. And, even if we do use something that we observe post-hoc (the fact that only the green bricks have been shot) to recognize the function post-hoc, we are not using in any way the information about the specific location of each shot to define the function. The function is defined objectively and independently from the contingent information about the shots.

IOWs, we are not saying: well, the shooter was probably aiming at point x1 (coordinates of the first shot) and point x2 (coordinates of the second shot), and so on. We just recognize that the shooter was aiming at the green bricks. An objective property of the wall.

IOWs (I use many IOWs, because I know that this simple concept will meet great resistance in the minds of our neo-darwinist friends), we are not “painting” the function, we are simply “recognizing” it, and using that recognition to define it.

Well, this third scenario is a good model of the design inference in ID. It corresponds very well to what we do in ID when we make a design inference for functional proteins. Therefore, the procedure we use in ID is no TSS fallacy. Not at all.

Given the importance of this model for our discussion, I will try to make it more quantitative.

Let’s say that the wall is made of 10,000 bricks in total.

Let’s say that there are only 100 green bricks, randomly distributed in the wall.

Let’s say that all the green bricks have been hit, and no brown brick.

What are the probabilities of that result if the null hypothesis is true (IOWs, if the shooter was not aiming at anything)?

The probability of one successful hit (where success means hitting a green brick) is of course 0.01 (100/10000).

The probability of having 100 successes in 100 shots can be computed using the binomial distribution. It is:

10^-200

IOWs, the system exhibits 664 bits of functional information. More or less like the TRIM62 protein, an E3 ligase discussed in my previous OP about the Ubiquitin system, which exhibits an increase of 681 bits of human-conserved functional information at the transition to vertebrates.
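
For those who like to check the numbers, here is a minimal Python sketch (mine, not part of the original argument) that reproduces this computation from the figures given above:

```python
import math

p_green = 100 / 10000    # probability of hitting a green brick with one random shot
n_shots = 100

# Under the null hypothesis, the probability of 100 green-brick hits in 100 shots
# is simply p^100 (the extreme case of the binomial distribution, k = n).
p_all_green = p_green ** n_shots        # = 1e-200
bits = -math.log2(p_all_green)          # ≈ 664.4 bits of functional information

print(p_all_green, bits)
```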

Now, let’s stop for a moment for a very important step. I am asking all neo-darwinists who are reading this OP a very simple question:

In the above situation, do you infer aiming?

It’s very important, so I will ask it a second time, a little louder:

In the above situation, do you infer aiming? 

Because if your answer is no, if you still think that the above scenario is a case of TSS fallacy, if you still believe that the observed result is not unlikely, that it is perfectly reasonable under the assumption of a random shooting, then you can stop here: you can stop reading this OP, you can stop discussing ID, at least with me. I will go on with the discussion with the reasonable people who are left.

So, at the end of this section, let’s state once more the truth about post-hoc definitions:

  1. A post-hoc definition that “paints” the function using the information from the specific details of what is observed is never correct. Such definitions are clear examples of the TSS fallacy.
  2. On the contrary, any post-hoc definition that simply recognizes a function related to an objectively existing property of the system, and makes no special use of the specific details of what is observed to “paint” the function, is perfectly correct. It is not a case of the TSS fallacy.

 

b) The objection of the different possible levels of function definition.

DNA_Jock summed up this specific objection in the course of a long discussion in the thread about the English language:

Well, I have yet to see an IDist come up with a post-specification that wasn’t a fallacy. Let’s just say that you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.

OK, I have just discussed why post-specifications are not in themselves a fallacy. Let’s say that DNA_Jock apparently admits it, because he just says that we have to be very cautious in applying them. I agree with that, and I have explained what the caution should be about.

Of course, I don’t agree that ID’s post-hoc specifications are a fallacy. They are not, not at all.

And I absolutely don’t agree with his argument that one of the reasons why ID’s post-hoc specifications would be a fallacy is that “You can make the probability arbitrarily small by making the specification arbitrarily precise.”

Let’s try to understand why.

So, let’s go back to our example 3), the wall with the green bricks and the aiming inference.

Let’s make our shooter a little less precise: let’s say that, out of 100 shots, only 50 hit green bricks.

Now, the math becomes:

The probability of one successful hit (where success means hitting a green brick) is still 0.01 (100/10000).

The probability of having 50 successes or more in 100 shots can be computed using the binomial distribution. It is:

6.165016e-72

Now, the system exhibits “only” 236 bits of functional information. Much less than in the previous example, but still more than enough, IMO, to infer aiming.

Consider that five sigma, which is often used as a standard in physics to reject the null hypothesis, is about 3×10^-7, less than 22 bits.

Now, DNA_Jock’s objection would be that our post-hoc specification is not valid because “we can make the probability arbitrarily small by making the specification arbitrarily precise”.

But is that true? Of course not.

Let’s say that, in this case, we try to “make the specification arbitrarily more precise”, defining the function of sharp aiming as “hitting only green bricks with all 100 shots”.

Well, we are definitely “making the probability arbitrarily small by making the specification arbitrarily precise”. Indeed, we are making the specification more precise by about 128 orders of magnitude! How smart we are, aren’t we?

But if we do that, what happens?

A very simple thing: the facts that we are observing do not meet the specification anymore!

Because, of course, the shooter hit only 50 green bricks out of 100. He is smart, but not that smart.

Neither are we smart if we do such a foolish thing, defining a function that is not met by the observed facts!

The simple truth is: we cannot “make the probability arbitrarily small by making the specification arbitrarily precise”, as DNA_Jock argues, in our post-hoc specification, because otherwise the facts would no longer meet our specification, and the specification would be completely useless and irrelevant.

What we can and must do is exactly what is always done in all cases where hypothesis testing is applied in science (and believe me, that happens very often).

We compute the probabilities of observing the effect that we are indeed observing, or a higher one, if we assume the null hypothesis.

That’s why I have said that the probability of “having 50 successes or more in 100 shots” is 6.165016e-72.

This is called a tail probability, in particular the probability of the upper tail. And it’s exactly what is done in science, in most scenarios.
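
Again, just as a check, this is how one could reproduce that upper-tail probability (a minimal sketch of mine using SciPy; the numbers are the ones assumed above):

```python
from math import log2
from scipy.stats import binom

n, p = 100, 0.01                 # 100 shots, P(hitting a green brick) = 100/10000

# Upper-tail probability under the null hypothesis: P(at least 50 green-brick hits).
# binom.sf(k, n, p) returns P(X > k), so sf(49) gives P(X >= 50).
tail = binom.sf(49, n, p)
print(tail)                      # ≈ 6.17e-72
print(-log2(tail))               # ≈ 236 bits of functional information

# For comparison, the five-sigma threshold used in physics:
print(-log2(2.87e-7))            # ≈ 21.7 bits
```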

Therefore, DNA_Jock’s argument is completely wrong.

c) The objection of the possible alternative solutions, and of the incomplete exploration of the search space.

c1) The premise

This is certainly the most complex point, because it depends critically on our understanding of protein functional space, which is far from complete.

For the discussion to be in some way complete, I have to present first a very general premise. Neo-darwinists, or at least the best of them, when they understand that they have nothing better to say, usually resort in desperation to a set of arguments related to the functional space of proteins. The reason is simple enough: as the nature and structure of that space is still not well known or understood, it’s easier to equivocate with false reasoning.

Their purpose, in the end, is always to suggest that functional sequences can be much more frequent than we believe. Or at least, that they are much more frequent than IDists believe. Because, if functional sequences are frequent, it’s certainly easier for RV (random variation) to find them.

The arguments for this imaginary frequency of biological function are essentially of five kinds:

  1. The definition of biological function.
  2. The idea that there are a lot of functional islands.
  3. The idea that functional islands are big.
  4. The idea that functional islands are connected. The extreme form of this argument is that functional islands simply don’t exist.
  5. The idea that the proteins we are observing are only optimized forms that derive from simpler implementations through some naturally selectable ladder of simple steps.

Of course, different mixtures of the above arguments are also frequently used.

OK, let’s get rid of the first, which is rather easy. Of course, if we define extremely simple biological functions, they will be relatively frequent.

For example, the famous Szostak experiment shows that a weak affinity for ATP is relatively common in a random library: about 1 in 10^11 sequences 80 AAs long.

A weak affinity for ATP is certainly a valid definition of a biological function. But it is a function which is at the same time irrelevant and not naturally selectable. Only naturally selectable functions are of any interest for the neo-darwinian theory.

Moreover, most biological functions that we observe in proteins are extremely complex. A lot of them have a functional complexity beyond 500 bits.

So, we are only interested in functions in the protein space which are naturally selectable, and we are especially interested in functions that are complex, because those are the ones about which we make a design inference.

The other four points are subtler.

  2. The idea that there are a lot of functional islands.

Of course, we don’t know exactly how many functional islands exist in the protein space, even restricting the concept of function to what was said above. Neo-darwinists hope that there are a lot of them. I think there are many, but not so many.

But the problem, again, is drastically scaled down if we consider that not all functional islands will do. Going back to point 1, we need naturally selectable islands. And what can be naturally selected is much less than what can potentially be functional. A naturally selectable island of function must be able to give a reproductive advantage. In a system that already has some high complexity, like any living cell, the number of functions that can be immediately integrated into what already exists is certainly strongly constrained.

This point is also strictly connected to the other two points, so I will go on with them and then try some synthesis.

  3. The idea that functional islands are big.

Of course, functional islands can be of very different sizes. That depends on how many sequences, related at sequence level (IOWs, that are part of the same island), can implement the function.

Measuring functional information in a sequence by conservation, as in the Durston method or in my own procedure, described many times, is an indirect way of measuring the size of a functional island. The greater the functional complexity of an island, the smaller its size in the search space.

Now, we must remember a few things. Let’s take as an example an extremely conserved but not too long sequence, our friend ubiquitin. It’s 76 AAs long. Therefore, the associated search space is 20^76: 328 bits.

Of course, even the ubiquitin sequence can tolerate some variation, but it is still one of the most conserved sequences in evolutionary history. Let’s say, for simplicity, that at least 70 AAs are strictly conserved, and that 6 can vary freely (of course, that’s not exact, just an approximation for the sake of our discussion).

Therefore, using the absolute information potential of 4.3 bits per amino acid, we have:

Functional information in the sequence = 303 bits

Size of the functional island = 328 – 303 = 25 bits

Now, a functional island of 25 bits is not exactly small: it corresponds to about 33.5 million sequences.

But it is infinitely tiny if compared to the search space of 328 bits:  7.5 x 10^98 sequences!

If the sequence is longer, the relationship between island space and search space (the ocean where the island is placed) becomes much worse.

The beta chain of ATP synthase (529 AAs), another old friend, exhibits 334 identities between E. coli and humans. Again, for the sake of simplicity, let’s consider that about 300 AAs are strictly conserved, and let’s ignore the functional constraint on all the other AA sites. That gives us:

Search space = 20^529 = 2286 bits

Functional information in the sequence = 1297 bits

Size of the functional island =  2286 – 1297 = 989 bits

So, with this computation, there could be about 10^297 sequences that can implement the function of the beta chain of ATP synthase. That seems a huge number (indeed, it’s definitely an overestimate, but I always try to be generous, especially when discussing a very general principle). However, now the functional island is 10^390 times smaller than the ocean, while in the case of ubiquitin it was “just” 10^91 times smaller.
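
These figures are easy to reproduce. Here is a small sketch of the arithmetic (the helper function is my own, using the same simplified conservation counts assumed above; the OP’s figures differ only by rounding):

```python
from math import log2

AA_BITS = log2(20)    # ≈ 4.32 bits per fully specified amino acid position

def island_vs_ocean(length, conserved):
    search_space = length * AA_BITS         # the "ocean": all sequences of this length
    functional_info = conserved * AA_BITS   # bits attributed to the strictly conserved positions
    island_size = search_space - functional_info
    return round(search_space), round(functional_info), round(island_size)

# Ubiquitin: 76 AAs, ~70 strictly conserved
print(island_vs_ocean(76, 70))     # (328, 303, 26) bits; the OP rounds the island to 25

# Beta chain of ATP synthase: 529 AAs, ~300 strictly conserved
print(island_vs_ocean(529, 300))   # (2286, 1297, 990) bits; the OP rounds the island to 989
```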

IOWs, the search space (the ocean) grows exponentially much more quickly than the target space (the functional island) as the length of the functional sequence increases, provided of course that the sequences always retain high functional information.

The important point is not the absolute size of the island, but its ratio to the vastness of the ocean.

So, the beta chain of ATP synthase is really a tiny, tiny island, much smaller than ubiquitin.

Now, what would be a big island? It’s simple: a functional island which can implement the same function at the same level, but with low functional information. The lower the functional information, the bigger the island.

Are there big islands? For simple functions, certainly yes. Behe quotes the antifreeze protein as an example. It has rather low FI.

But are there big islands for complex functions, like that of the ATP synthase beta chain? It’s absolutely reasonable to believe that there are none. Because the function here is very complex, and it cannot be implemented by a simple sequence, exactly like a functional spreadsheet program cannot be written with a few bits of source code. Neo-darwinists will say that we don’t know that for certain. It’s true, we don’t know it for certain. We know it almost for certain.

The simple fact remains: the only example of the beta chain of the F1 complex of ATP synthase that we know of is extremely complex.

Let’s go, for the moment, to the 4th argument.

  4. The idea that functional islands are connected. The extreme form of this argument is that functional islands simply don’t exist.

This is easier. We have a lot of evidence that functional islands are not connected, and that they are indeed islands, widely isolated in the search space of possible sequences. I will mention the two best pieces of evidence:

4a) All the functional proteins that we know of, those that exist in all the proteomes we have examined, are grouped in about 2000 superfamilies. By definition, a protein superfamily is a cluster of sequences that have:

  • no sequence similarity
  • no structure similarity
  • no function similarity

with all the other groups.

IOWs, islands in the sequence space.

4b) The best (and probably the only) good paper that reports an experiment where Natural Selection is really tested by an appropriate simulation is the rugged landscape paper:

Experimental Rugged Fitness Landscape in Protein Sequence Space

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0000096

Here, NS is correctly simulated in a phage system, because what is measured is infectivity, which in phages is of course strictly related to fitness.

The function studied is the retrieval of a partially damaged infectivity due to a partial random substitution in a protein linked to infectivity.

In brief, the results show a rugged landscape of protein function, where random variation and NS can rather easily find some low-level peaks of function, while the original wild-type, optimal peak of function cannot realistically be found, not only in the lab simulation, but in any realistic natural setting. I quote from the conclusions:

The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness.

I would recommend having a look at Fig. 5 in the paper to get an idea of what a rugged landscape is.

However, I will happily accept a suggestion from DNA_Jock, made in one of his recent comments at TSZ about my Ubiquitin thread, with which I fully agree. I quote him:

To understand exploration one, we have to rely on in vitro evolution experiments such as Hayashi et al 2006 and Keefe & Szostak, 2001. The former also demonstrates that explorations one and two are quite different. Gpuccio is aware of this: in fact it was he who provided me with the link to Hayashi – see here.
You may have heard of hill-climbing algorithms. Personally, I prefer my landscapes inverted, for the simple reason that, absent a barrier, a population will inexorably roll downhill to greater fitness. So when you ask:

How did it get into this optimized condition which shows a highly specified AA sequence?

I reply
It fell there. And now it is stuck in a crevice that tells you nothing about the surface whence it came. Your design inference is unsupported.

Of course, I don’t agree with the last phrase. But I fully agree that we should think of local optima as “holes”, and not as “peaks”. That is the correct way.

So, the protein landscape is more like a ball-and-holes game, but without a guiding labyrinth: as long as the ball is on the flat plane (non-functional sequences), it can go in any direction, freely. However, when it falls into a hole, it will quickly go to the bottom, and most likely it will remain there.

 

 

But:

  • The holes are rare, and they are of different sizes
  • They are distant from one another
  • The same function can be implemented by different, distant holes, of different sizes

What does the rugged landscape paper tell us?

  • That the wildtype function that we observe in nature is an extremely small hole. To find it by RV and NS, according to the authors, we should start with a library of 10^70 sequences.
  • That there are other bigger holes which can partially implement some function retrieval, and that are in the range of reasonable RV + NS
  • That those simpler solutions are not bridges to the optimal solution observed in the wildtype. IOWs, they are different, and there is no “ladder” that NS can use to reach the optimal solution.

Indeed, falling into a bigger hole (a much bigger hole, indeed) is rather a severe obstacle to finding the tiny hole of the wildtype. Finding it is already practically impossible because it is so tiny, and it becomes even less likely if the ball falls into a big hole, because it will be trapped there by NS.

Therefore, to sum up, both the existence of 2000 isolated protein superfamilies and the evidence from the rugged landscape paper demonstrate that functional islands exist, and that they are isolated in the sequence space.

Let’s go now to the 5th argument:

  5. The idea that the proteins we are observing are only optimized forms that derive from simpler implementations by a naturally selectable ladder.

This is derived from the previous argument. If bigger functional holes do exist for a function (IOWs, simpler implementations), and they are definitely easier to find than the optimal solution we observe, why not believe that the simpler solutions were found first, and then opened the way to the optimal solution by a process of gradual optimization and natural selection of the steps? IOWs, a naturally selectable ladder?

And the answer is: because that is impossible, and all the evidence we have is against that idea.

First of all, even if we know that simpler implementations do exist in some cases (see the rugged landscape paper), it is not at all obvious that they exist as a general rule.

Indeed, the rugged landscape experiment is a very special case, because it is about the retrieval of a function that has been only partially impaired by substituting a random sequence for part of an already existing, functional protein.

The reason for that is that, if they had completely knocked out the protein, infectivity, and therefore survival itself, would have been lost, and NS could not have acted at all.

In function-retrieval cases, where the function is nonetheless kept, even if at a reduced level, the role of NS is greatly helped: the function is already there, and can be optimized with a few naturally selectable steps.

And that is what happens in the case of the Hayashi paper. But the function is retrieved only very partially, and, as the authors say, there is no reasonable way to find the wildtype sequence, the optimal sequence, in that way. Because the optimal sequence would require, according to the authors, 35 AA substitutions, and a starting library of 10^70 random sequences.

What is equally important is that the holes found in the experiment are not connected to the optimal solution (the wildtype). They are different from it at sequence level.

IOWs, these bigger holes do not lead to the optimal solution. Not at all.

So, we have a strange situation: 2000 protein superfamilies, and thousands and thousands of proteins in them, that appear to be, in most cases, extremely functional, probably absolutely optimal. But we have absolutely no evidence that they have been “optimized”. They are optimal, but not necessarily optimized.

Now, I am not excluding that some optimization can take place in non-design systems: we have good examples of that in the few known microevolutionary cases. But that optimization is always extremely short, just a few AA substitutions once the starting functional island has been found, and the function must already be there.

So, let’s say that if the extremely tiny functional island where our optimal solution lies, for example the wildtype island in the rugged landscape experiment, can be found in some way, then some small optimization inside that functional island could certainly take place.

But first, we have to find that island: and for that we need 35 specific AA substitutions (about 180 bits), and 10^70 starting sequences, if we go by RV + NS. Practically impossible.

But there is more. Do those simpler solutions always exist? I will argue that it is not so in the general case.

For example, in the case of the alpha and beta chains of the F1 subunit of ATP synthase, there is no evidence at all that simpler solutions exist. More on that later.

So, to sum it up:

The ocean of the search space, according to the reasoning of neo-darwinists, should be overflowing with potential naturally selectable functions. This is not true, but let’s assume for a moment, for the sake of discussion, that it is.

But, as we have seen, simpler functions or solutions, when they exist, are much bigger functional islands than the extremely tiny functional islands corresponding to solutions with high functional complexity.

And yet, we have seen that there is absolutely no evidence that simpler solutions, when they exist, are bridges, or ladders, to highly complex solutions. Indeed, there is good evidence of the contrary.

Given those premises, what would you expect if the neo-darwinian scenario were true? It’s rather simple: a universal proteome overflowing with simple functional solutions.

Instead, what do we observe? It’s rather simple: a universal proteome overflowing with highly functional, probably optimal, solutions.

IOWs, we find in the existing proteome almost exclusively highly complex solutions, and not simple solutions.

The obvious conclusion? The neo-darwinist scenario is false. The highly functional, optimal solutions that we observe can only be the result of intentional and intelligent design.

c2) DNA_Jock’s arguments

Now I will consider in more detail DNA_Jock’s two arguments about alternative solutions and the partial exploration of the protein space, and explain why they are only variants of what I have already discussed, and therefore not valid.

The first argument, that we can call “the existence of alternative solutions”, can be traced to this statement by DNA_Jock:

Every time an IDist comes along and claims that THIS protein, with THIS degree of constraint, is the ONLY way to achieve [function of interest], subsequent events prove them wrong. OMagain enjoys laughing about “the” bacterial flagellum; John Walker and Praveen Nina laugh about “the” ATPase; Anthony Keefe and Jack Szostak laugh about ATP-binding; now Corneel and I are laughing about ubiquitin ligase: multiple ligases can ubiquinate a given target, therefore the IDist assumption is false. The different ligases that share targets ARE “other peaks”.
This is Texas Sharp Shooter.

We will debate the laugh later. For the moment, let’s see what the argument states.

It says: the solution we are observing is not the only one. There can be others, and in some cases we know there are others. Therefore, your computation of probabilities, and therefore of functional information, is wrong.

Another way to put it is to ask the question: “how many needles are there in the haystack?”

Alan Fox seems to prefer this metaphor:

This is what is wrong with “Islands-of-function” arguments. We don’t know how many needles are in the haystack. G Puccio doesn’t know how many needles are in the haystack. Evolution doesn’t need to search exhaustively, just stumble on a useful needle.

They both seem to agree about the “stumbling”. DNA_Jock says:

So when you ask:

How did it get into this optimized condition which shows a highly specified AA sequence?

I reply
It fell there. And now it is stuck in a crevice that tells you nothing about the surface whence it came.

OK, I think the idea is clear enough. It is essentially the same idea as in point 2 of my general premise. There are many functional islands. In particular, in this form, many functional islands for the same function.

I will answer it in two parts:

  • Is it true that the existence of alternative solutions, if they exist, makes the computation of functional complexity wrong?
  • Have we really evidence that alternative solutions exist, and of how frequent they can really be?

I will discuss the first part here, and say something about the second part later in the OP.

Let’s read again the essence of the argument, as summed up by me above:

” The solution we are observing is not the only one. There can be others, in some cases we know there are others. Therefore, your computation of probabilities, and therefore of functional information, is wrong.”

As it happens with smart arguments (and DNA_Jock is usually smart), it contains some truth, but is essentially wrong.

The truth could be stated as follows:

” The solution we are observing is not the only one. There can be others, in some cases we know there are others. Therefore, our computation of probabilities, and therefore of functional information, is not completely precise, but it is essentially correct”.

To see why that is the case, let’s use again a very good metaphor: Paley’s old watch. That will help to clarify my argument, and then I will discuss how it relates to proteins in particular.

So, we have a watch. Whose function is to measure time. And, in general, let’s assume that we infer design for the watch, because its functional information is high enough to exclude that it could appear in any non design system spontaneously. I am confident that all reasonable people will agree with that. Anyway, we are assuming it for the present discussion.

 

 

Now, after having made a design inference (a perfectly correct inference, I would say) for this object, we have a sudden doubt. We ask ourselves: what if DNA_Jock is right?

So, we wonder: are there other solutions to measure time? Are there other functional islands in the search space of material objects?

Of course there are.

I will just mention four clear examples: a sundial, an hourglass, a digital clock, and an atomic clock.

The sundial uses the position of the sun. The hourglass uses a trickle of sand. The digital clock uses an electronic oscillator that is regulated by a quartz crystal to keep time. An atomic clock uses an electron transition frequency in the microwave, optical, or ultraviolet region.

None of them uses gears or springs.

Now, two important points:

  • Even if the functional complexity of the five above-mentioned solutions is probably rather different (the sundial and the hourglass are probably much simpler, and the atomic clock is probably the most complex), they are all rather complex. None of them would be easily explained without a design inference. IOWs, they are small functional islands, each of them. Some are bigger, some are really tiny, but none of them is big enough to allow a random origin in a non-design system.
  • None of the four additional solutions mentioned would be, in any way, a starting point to get to the traditional watch by small functional modifications. Why? Because they are completely different solutions, based on different ideas and plans.

If someone believes differently, he can try to explain in some detail how we can get to a traditional watch starting from an hourglass.

 

 

Now, an important question:

Does the existence of the four mentioned alternative solutions, or maybe of other possible similar solutions, make the design inference for the traditional watch less correct?

The answer, of course, is no.

But why?

It’s simple. Let’s say, just for the sake of discussion, that the traditional watch has a functional complexity of 600 bits. There are at least 4 additional solutions. Let’s say that each of them is also highly complex, with a functional complexity in the same range.

How much does that change the probability of getting the watch?

The answer is: by about 2 bits (because we now have 5 comparably complex solutions instead of one). So now the functional information is still about 598 bits.

But, of course, there can be many more solutions. Let’s say 1000. Now the functional information would still be about 590 bits. Let’s say one million different complex solutions (this is becoming generous, I would say): 580 bits. One billion? 570 bits.

Shall I go on?

When the search space is really huge, the number of really complex solutions is empirically irrelevant to the design inference. One observed complex solution is more than enough to infer design. Correctly.
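
To put the same point numerically, here is a rough sketch of mine, under the simplifying assumption that each alternative island is about as small as the observed one:

```python
from math import log2

fi_watch = 600    # assumed functional complexity of the observed solution, in bits

# If there are n comparably complex alternative solutions, the total target space
# grows roughly n-fold, so the functional information drops only by log2(n) bits.
for n in (5, 1000, 10**6, 10**9):
    print(n, round(fi_watch - log2(n), 1))
# 5 -> 597.7, 1000 -> 590.0, 10^6 -> 580.1, 10^9 -> 570.1
```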

We could call this argument: “How many needles do you need to transform a haystack into a needlestack?” And the answer is: really a lot of them.

Our poor 4 alternative solutions will not do the trick.

But what if there are a number of functional islands that are much bigger, much more likely? Let’s say 50-bit functional islands. Much simpler solutions. Let’s say 4 of them. That would make the scenario more credible. Not by much, probably, but certainly it would work better than the 4 complex solutions.

OK, I have already discussed that above, but let’s say it again. Let’s say that you have 4 (or more) 50-bit solutions, and one (or more) 500-bit solutions. But what you observe as a fact is the 500-bit solution, and none of the 50-bit solutions. Is that credible?

No, it isn’t. Do you know how much smaller a 500-bit solution is compared to a 50-bit solution? It’s 2^450 times smaller: about 10^135 times smaller. We are dealing with exponential values here.

So, if much simpler solutions existed, we would expect to observe one of them, and not certainly a solution that is 10^135 times more unlikely. The design inference for the highly complex solution is not disturbed in any way by the existence of much simpler solutions.
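
The same exponential arithmetic, spelled out (a trivial sketch, using the bit values assumed above):

```python
from math import log2

# Relative likelihood, under a purely random search, of stumbling on a 50-bit
# island versus a 500-bit island: the simpler target is 2^450 times easier to hit.
ratio = 2.0 ** (500 - 50)
print(f"{ratio:.1e}")     # ≈ 2.9e+135
```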

OK, I think that the idea is clear enough.

c3) The laughs

As already mentioned, the issue of alternative solutions and uncounted needles seems to be a special source of hilarity for DNA_Jock.  Good for him (a laugh is always a good thing for physical and mental health). But are the laughs justified?

I quote here again his comment about the laughs, that I will use to analyze the issues.

Every time an IDist comes along and claims that THIS protein, with THIS degree of constraint, is the ONLY way to achieve [function of interest], subsequent events prove them wrong. OMagain enjoys laughing about “the” bacterial flagellum; John Walker and Praveen Nina laugh about “the” ATPase; Anthony Keefe and Jack Szostak laugh about ATP-binding; now Corneel and I are laughing about ubiquitin ligase: multiple ligases can ubiquinate a given target, therefore the IDist assumption is false. The different ligases that share targets ARE “other peaks”.

I will not consider the bacterial flagellum, which has no direct relevance to the discussion here. I will analyze, instead, the other three laughable issues:

  • Szostak and Keefe’s ATP binding protein
  • ATP synthase (rather than ATPase)
  • E3 ligases

Szostak and Keefe should not laugh at all, if they ever did. I have already discussed their paper many times. It’s a paper about directed evolution which generates a strongly ATP-binding protein from a weakly ATP-binding protein present in a random library. It is directed evolution by mutation and artificial selection. The important point is that neither the original weakly binding protein nor the final strongly binding protein is naturally selectable.

Indeed, a protein that just binds ATP is of course of no utility in a cellular context. Evidence of this obvious fact can be found here:

A Man-Made ATP-Binding Protein Evolved Independent of Nature Causes Abnormal Growth in Bacterial Cells

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0007385

There is nothing to laugh about here: the protein is a designed protein, and anyway it is no functional peak/hole at all in the sequence space, because it cannot be naturally selected.

Let’s go to ATP synthase.

DNA_Jock had already remarked:

They make a second error (as Entropy noted) when they fail to consider non-traditional ATPases (Nina et al).

And he gives the following link:

Highly Divergent Mitochondrial ATP Synthase Complexes in Tetrahymena thermophila

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2903591/

And, of course, he laughs with Nina (supposedly).

OK. I have already discussed that the existence of one or more highly functional, but different, solutions for building ATP would not change the ID inference at all. But is it really true that there are these other solutions?

Yes and no.

As far as my personal argument is concerned, the answer is definitely no (or at least, there is no evidence of them). Why?

Because my argument, repeated for years, has always been based (everyone can check) on the alpha and beta chains of ATP synthase, the main constituents of the F1 subunit, where the true catalytic function is implemented.

To be clear, ATP synthase is a very complex molecule, made of many different chains and of two main multiprotein subunits. I have always discussed only the alpha and beta chains, because those are the chains that are really highly conserved, from prokaryotes to humans.

The other chains are rather conserved too, but much less. So, I have never used them for my argument. I have never presented blast values regarding the other chains, or made any inference about them. This can be checked by everyone.

Now, the Nina paper is about a different solution for ATP synthase that can be found in some single-celled eukaryotes.

I quote here the first part of the abstract:

The F-type ATP synthase complex is a rotary nano-motor driven by proton motive force to synthesize ATP. Its F1 sector catalyzes ATP synthesis, whereas the Fo sector conducts the protons and provides a stator for the rotary action of the complex. Components of both F1 and Fo sectors are highly conserved across prokaryotes and eukaryotes. Therefore, it was a surprise that genes encoding the a and b subunits as well as other components of the Fo sector were undetectable in the sequenced genomes of a variety of apicomplexan parasites. While the parasitic existence of these organisms could explain the apparent incomplete nature of ATP synthase in Apicomplexa, genes for these essential components were absent even in Tetrahymena thermophila, a free-living ciliate belonging to a sister clade of Apicomplexa, which demonstrates robust oxidative phosphorylation. This observation raises the possibility that the entire clade of Alveolata may have invented novel means to operate ATP synthase complexes.

Emphasis mine.

As everyone can see, it is absolutely true that these protists have a different, alternative form of ATP synthase: it is based on a similar, but certainly divergent, architecture, and it uses some completely different chains. Which is certainly very interesting.

But this difference does not involve the sequence of the alpha and beta chains in the F1 subunit.

Beware, the a and b subunits mentioned above by the paper are not the alpha and beta chains.

From the paper:

The results revealed that Spot 1, and to a lesser extent, spot 3 contained conventional ATP synthase subunits including α, β, γ, OSCP, and c (ATP9)

IOWs, the “different” ATP synthase uses the same “conventional” forms of alpha and beta chain.

To be sure of that, I have, as usual, blasted them against the human forms. Here are the results:

ATP synthase subunit alpha, Tetrahymena thermophila, (546 AAs) Uniprot Q24HY8, vs  ATP synthase subunit alpha, Homo sapiens, 553 AAs (P25705)

Bitscore: 558 bits     Identities: 285    Positives: 371

ATP synthase subunit beta, Tetrahymena thermophila, (497 AAs) Uniprot I7LZV1, vs  ATP synthase subunit beta, Homo sapiens, 529 AAs (P06576)

Bitscore: 729 bits     Identities: 357     Positives: 408

These are the same, old, conventional sequences that we find in all organisms, the only sequences that I have ever used for my argument.

Therefore, for these two fundamental sequences, we have no evidence at all of any alternative peaks/holes. Which, if they existed, would however be irrelevant, as already discussed.

Not much to laugh about.

Finally, E3 ligases. DNA_Jock is ready to laugh about them because of this very good paper:

Systematic approaches to identify E3 ligase substrates

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5103871/

His idea, shared with other TSZ guys, is that the paper demonstrates that E3 ligases are not specific proteins, because a same substrate can bind to more than one E3 ligase.

The paper says:

Significant degrees of redundancy and multiplicity. Any particular substrate may be targeted by multiple E3 ligases at different sites, and a single E3 ligase may target multiple substrates under different conditions or in different cellular compartments. This drives a huge diversity in spatial and temporal control of ubiquitylation (reviewed by ref. [61]). Cellular context is an important consideration, as substrate–ligase pairs identified by biochemical methods may not be expressed or interact in the same sub-cellular compartment.

I have already commented elsewhere (in the Ubiquitin thread) that the fact that a substrate can be targeted by multiple E3 ligases at different sites, or in different sub-cellular compartments, is clear evidence of complex specificity. IOWs, it’s not that two or more E3 ligases bind the same target just to do the same thing: they bind the same target in different ways and in different contexts to do different things. The paper, even if very interesting, is only about detecting affinities, not function.

That should be enough to stop the laughs. However, I will add another simple concept. If E3 ligases were really redundant in the sense suggested by DNA_Jock and friends, their loss of function should not be a serious problem for us. OK, I will just quote a few papers (not many, because this OP is already long enough):

The multifaceted role of the E3 ubiquitin ligase HOIL-1: beyond linear ubiquitination.

https://www.ncbi.nlm.nih.gov/pubmed/26085217

HOIL-1 has been linked with antiviral signaling, iron and xenobiotic metabolism, cell death, and cancer. HOIL-1 deficiency in humans leads to myopathy, amylopectinosis, auto-inflammation, and immunodeficiency associated with an increased frequency of bacterial infections.

WWP1: a versatile ubiquitin E3 ligase in signaling and diseases.

https://www.ncbi.nlm.nih.gov/pubmed/22051607

WWP1 has been implicated in several diseases, such as cancers, infectious diseases, neurological diseases, and aging.

RING domain E3 ubiquitin ligases.

https://www.ncbi.nlm.nih.gov/pubmed/19489725

RING-based E3s are specified by over 600 human genes, surpassing the 518 protein kinase genes. Accordingly, RING E3s have been linked to the control of many cellular processes and to multiple human diseases. Despite their critical importance, our knowledge of the physiological partners, biological functions, substrates, and mechanism of action for most RING E3s remains at a rudimentary stage.

HECT-type E3 ubiquitin ligases in nerve cell development and synapse physiology.

https://www.ncbi.nlm.nih.gov/pubmed/25979171

The development of neurons is precisely controlled. Nerve cells are born from progenitor cells, migrate to their future target sites, extend dendrites and an axon to form synapses, and thus establish neural networks. All these processes are governed by multiple intracellular signaling cascades, among which ubiquitylation has emerged as a potent regulatory principle that determines protein function and turnover. Dysfunctions of E3 ubiquitin ligases or aberrant ubiquitin signaling contribute to a variety of brain disorders like X-linked mental retardation, schizophrenia, autism or Parkinson’s disease. In this review, we summarize recent findings about molecular pathways that involve E3 ligases of the Homologous to E6-AP C-terminus (HECT) family and that control neuritogenesis, neuronal polarity formation, and synaptic transmission.

Finally I would highly recommend the following recent paper to all who want to approach seriously the problem of specificity in the ubiquitin system:

Specificity and disease in the ubiquitin system

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5264512/

Abstract

Post-translational modification (PTM) of proteins by ubiquitination is an essential cellular regulatory process. Such regulation drives the cell cycle and cell division, signalling and secretory pathways, DNA replication and repair processes and protein quality control and degradation pathways. A huge range of ubiquitin signals can be generated depending on the specificity and catalytic activity of the enzymes required for attachment of ubiquitin to a given target. As a consequence of its importance to eukaryotic life, dysfunction in the ubiquitin system leads to many disease states, including cancers and neurodegeneration. This review takes a retrospective look at our progress in understanding the molecular mechanisms that govern the specificity of ubiquitin conjugation.

Concluding remarks

Our studies show that achieving specificity within a given pathway can be established by specific interactions between the enzymatic components of the conjugation machinery, as seen in the exclusive FANCL–Ube2T interaction. By contrast, where a broad spectrum of modifications is required, this can be achieved through association of the conjugation machinery with the common denominator, ubiquitin, as seen in the case of Parkin. There are many outstanding questions to understanding the mechanisms governing substrate selection and lysine targeting. Importantly, we do not yet understand what makes a particular lysine and/or a particular substrate a good target for ubiquitination. Subunits and co-activators of the APC/C multi-subunit E3 ligase complex recognize short, conserved motifs (D [221] and KEN [222] boxes) on substrates leading to their ubiquitination [223–225]. Interactions between the RING and E2 subunits reduce the available radius for substrate lysines in the case of a disordered substrate [226]. Rbx1, a RING protein integral to cullin-RING ligases, supports neddylation of Cullin-1 via a substrate-driven optimization of the catalytic machinery [227], whereas in the case of HECT E3 ligases, conformational changes within the E3 itself determine lysine selection [97]. However, when it comes to specific targets such as FANCI and FANCD2, how the essential lysine is targeted is unclear. Does this specificity rely on interactions between FA proteins? Are there inhibitory interactions that prevent modification of nearby lysines? One notable absence in our understanding of ubiquitin signalling is a ‘consensus’ ubiquitination motif. Large-scale proteomic analyses of ubiquitination sites have revealed the extent of this challenge, with seemingly no lysine discrimination at the primary sequence level in the case of the CRLs [228]. Furthermore, the apparent promiscuity of Parkin suggests the possibility that ubiquitinated proteins are the primary target of Parkin activity. It is likely that multiple structures of specific and promiscuous ligases in action will be required to understand substrate specificity in full.

To conclude, a few words about the issue of the sequence space not entirely traversed.

We have 2000 protein superfamilies that are completely unrelated at sequence level. That is evidence that functional protein sequences are not bound to any particular region of the sequence space.

Moreover, neutral variation in non-coding and non-functional sequences can go in any direction, without any specific functional constraints. I suppose that neo-darwinists would recognize that part of the genome is non-functional, wouldn’t they? And we have already seen elsewhere (in the ubiquitin thread discussion) that many new genes arise from non-coding sequences.

So, there is no reason to believe that the functional space has not been traversed. But, of course, neutral variation can traverse it only at very low resolution.

IOWs, there is no reason that any specific part of the sequence space is hidden from RV. But of course, the low probabilistic resources of RV can only traverse different parts of the sequence space occasionally.

It’s like having a few balls that can move freely on a plane, and occasionally fall into a hole. If the balls are really few and the plane is extremely big, the balls will be able to  potentially traverse all the regions of the plane, but they will pass only through a very limited number of possible trajectories. That’s why finding a very small hole will be almost impossible, wherever it is. And there is no reason to believe that small functional holes are not scattered in the sequence space, as protein superfamilies clearly show.

So, it’s not true that highly functional proteins are hidden in some unexplored treasure trove in the sequence space. They are there for anyone to find, in different and distant parts of the sequence space, but it is almost impossible to find them through a random walk, because they are so small.

And yet, 2000 highly functional superfamilies are there.

Moreover, the rate of appearance of new superfamilies is highest at the beginning of natural history (for example in LUCA), when a smaller part of the sequence space is likely to have been traversed, and it decreases constantly, becoming extremely low in the last hundreds of millions of years. That’s not what you would expect if the problem of finding new functional islands were due to how much sequence space has been traversed, and if the sequence space were really as overflowing with potential naturally selectable functions as neo-darwinists like to believe.

OK, that’s enough. As expected, this OP is very long. However, I think that it  was important to discuss all these partially related issues in the same context.

 

488 Replies to “Defending Intelligent Design theory: Why targets are real targets, probabilities real probabilities, and the Texas Sharp Shooter fallacy does not apply at all.”

  1. Nonlin.org says:

    No doubt you’re right, especially about this OP being too long 🙂

    Let me try a shortcut:
    Why are we even discussing probabilities? The smarter Darwinistas do understand that no way we’re looking at random phenomena even for something as simple as sand dunes, let alone anything in biology. But they counter that with “necessity” aka “laws of nature”.

    Of course, Dembski’s filter tries to distinguish between Regularity and Design, but this is impossible for what is Design if not Regularity? Look at any simple designed object – its shape is Regular and its code is just a set of Rules of behavior that the Designer creates.

    Then how do we demonstrate design? One solution is “First Cause”. The Designer is He that created the laws of nature.

    Another is to just observe that “natural selection” fails: http://nonlin.org/natural-selection/ and “evolution” fails: http://nonlin.org/evolution/

    Finally, if it looks designed, the default assumption should be that it is designed. Complex machines such as the circulatory system in many organisms cannot be found in the nonliving with one exception: those designed by humans. So-called “convergent evolution”, the design similarity between supposedly unrelated organisms also confirms the ‘common design’ hypothesis. Therefore, the default assumption should be that life is designed: https://en.wikipedia.org/wiki/Biosignature and https://en.wikipedia.org/wiki/Astrobiology . Until someone proves otherwise, just quit the stupidity of: “it looks designed but it is not designed”.

  2. 2
    gpuccio says:

    Nonlin.org:

    Well, thank you for the comment. 🙂

    I think we agree on the conclusions, even if not necessarily on the details of the procedure.

    I believe that discussing probabilities is important: in all empirical science a probabilistic analysis is fundamental to distinguish between signal and random noise, for example. The same problem is true for the design inference.

    Design can in a sense be considered a “law”: in the sense that it connects subjective representations and subjective experiences to an outer result.

    However, design is not a law in the sense of being a predictable regularity. Its cognitive aspect is based on understanding of meanings, including laws, but its intentional part is certainly more unpredictable.

    Random configurations do exist in reality, and the only way we can describe them is through probabilistic models.

    Finally, let’s say that if it looks designed, we must certainly seriously consider that it could really be designed.

    But there are cases of things that look designed and are not designed. Therefore, we need rules to decide in individual cases, and ID theory is about those rules.

  3. 3
    LocalMinimum says:

    Beautifully wrought. I’d even take your name off and replace it with my own, if Google weren’t so terribly clever.

    Now to see in which larger holes the politely dissenting interlocutors will land; and to see if they will, from there, travel to the wildtype; or if they’ll just settle into a comfortable orbit about their respective local minima.

  4. 4
    gpuccio says:

    LocalMinimum:

    Thank you. 🙂

    “or if they’ll just settle into a comfortable orbit about their respective local minima.”

    Maybe that’s what they are already doing…

  5. 5
    bill cole says:

    gpuccio
    Great arguments. I look forward to a lively discussion.

  6. 6
    DATCG says:

    Gpuccio,

    This should be fun, look forward to seeing the discussion
    🙂

  7. 7
    gpuccio says:

    bill cole:

    Thank you. I am sure you will give a precious contribution! 🙂

  8. 8
    gpuccio says:

    DATCG:

    Fun it should be. It has been a little tiring to write about all those points, but I hope that will help to avoid distractions and partial arguments.

  9. 9
    gpuccio says:

    This is pertinent, so I repost it here from the ubiquitin thread:

    Corneel at TSZ (about my new OP):

    “Will that be reposted here at TSZ?”

    I have posted it here. Anyone can post it, or parts of it, at TSZ. There is no copyright, it is public domain.

  10. 10
    vividbleau says:

    Thanks gp

    Incredible effort. Someone said jokingly that it is too long, but just because something is long does not make it too long. The word that comes to mind is “thorough”: lots of things to cover, and hopefully you will get some good feedback from your interlocutors.

    Regardless thanks for putting all the time it took to put this together.

    Vivid

  11. 11
    gpuccio says:

    vividbleau:

    Thank you, Vivid. It was a tiresome task, but it’s beautiful to know that it is appreciated! 🙂

  12. 12
    tribune7 says:

    Great post GP.

    One thing that strikes me regarding the Texas Sharpshooter is why some would think it more reasonable to assume the target was painted after the fact than that it was something at which the shooter aimed.

    If someone is attempting to use it as an argument against certainty, great, but would they also respect arguments against certainty relating to Darwinism?

    The TSS as used strikes me as very similar to arguments used by Young Earthers regarding the calibration of radioactive decay used to measure long timespans.

    It certainly is not without merit, and it does make an important point, namely that dogma and science don’t go together, but it is not any kind of rebuttal.

    At least the Young Earther can take refuge in the safety zone of faith. A Darwinist can’t unless he admits his belief is a faith.

  13. 13
    Origenes says:

    Thank you for yet another excellent post GPuccio. A very pleasant read and so many interesting points.

    I would like to comment on one minor issue, which, perhaps, deserves more attention.

    GPuccio: Of course, we don’t know exactly how many functional islands exist in the protein space, even restricting the concept of function to what was said above. Neo-darwinists hope that there are a lot of them. I think there are many, but not so many.

    What is “functional” is defined only by the organism itself. Something is functional only if it fits the need of an organism. A simple example: one cannot say “a layer of dense underfur and an outer layer of transparent guard hairs is functional.” Sure, it is indeed functional to the polar bear, but it is certainly not functional to most other creatures. The same goes for e.g. “programmed cell death protein 1”, it fits amazingly many organisms, but it is a horror thought for the vast majority of organisms on earth; certainly every prokaryote.

    GPuccio: But the problem, again, is drastically redimensioned if we consider that not all functional islands will do. Going back to point 1, we need naturally selectable islands. And what can be naturally selected is much less than what can potentially be functional. A naturally selectable island of function must be able to give a reproductive advantage.

    And what this means is that the function must fit. What is the chance that it does? Given that a random walk by a string of junk-DNA finds a “potential function” (and the means to implement it!), the question arises: IS IT FUNCTIONAL FOR THE ORGANISM?
    I think that chances are slim.
    I would like a probability number here.

  14. 14
    gpuccio says:

    Dean_from_Ohio:

    Thank you! 🙂

    I am no expert on shotguns, so I am afraid that my search for an appropriate public domain image was not so sharply aimed! 🙂

  15. 15
    gpuccio says:

    tribune7:

    Thank you! 🙂

    I think that there are no “certainties” in science, but there can certainly be very substantial theories.

    When I observe an effect where the probability of the null hypothesis is < 10^-16 (the lowest p value that R explicitly reports) I am not really worried that it is not a certainty just to be epistemologically consistent. We humans are finite creatures, and we just need "relative" certainties. Which, even if "relative", mean a lot to us.
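    As a minimal numerical sketch of that scale of “relative” certainty (in Python with scipy, purely for illustration), one can translate effect sizes in standard deviations into two-sided p-values:

```python
from scipy.stats import norm

# Two-sided p-values for effects measured in standard deviations
# of the null distribution.
for sigma in (2, 3, 5, 8.3):
    p = 2 * norm.sf(sigma)      # upper-tail probability, doubled
    print(f"{sigma:>4} sigma -> p ~ {p:.1e}")
# Around 8.3 sigma the p-value falls to the order of 1e-16, the level that
# statistical software typically just reports as "< 2.2e-16".
```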

    The inference of design for biological objects is such a "certain" thing for me. I have absolutely no doubts about it. But of course, it is still a theory, and not a fact! 🙂

    I am not really interested in the Young Earth debate, maybe because here in Italy nobody believes in a young earth (at least, nobody that I am aware of). For me it's rather natural to believe that the age of the earth is what it is considered to be by science, but of course I respect the commitments linked to a personal faith.

    IMO, however, we should not try to explicitly build ad hoc arguments to defend our religious beliefs. If they are true, they will be defended by truth itself.

  16. 16
    gpuccio says:

    Origenes:

    Thank you! 🙂

    My idea is simple: any functional island in the sequence space (of proteins, if we are discussing in particular protein function) which, if reached by some living organism can give, in that moment and in that context, a reproductive advantage to that specific organism is:

    a naturally selectable island of function.

    That simply means that, once RV in the genome of that organism in some way reaches that specific island (with all that is implied, the sequence being transcribed and translated and possibly regulated appropriately, and so on), then NS can act on that new trait: it can expand and fix it in the population (or at least, give a better probability of that), and if it is more or less fixed, negative selection can act on it to defend it from further variation.

    You say:

    I think that chances are slim.

    And I absolutely agree. There are really few functional islands that correspond to that in each context. Most of them are extremely simple (very big holes), as it is obvious in the few cases of microevolution that we know of (antibiotic resistance, and so on), where the starting function has a complexity of one or two aminoacids, and the optimization by NS can only add a few AAs.

    And even those cases need an extreme environmental pressure to really work.

    On the contrary, almost all the solutions that we observe in the existing proteomes are extremely complex.

    Another important point is the complexity of the already existing organism. As I have said many times, the more a structure is functionally complex, the less it will tolerate random modifications. Moreover, in a complex system like a living cell, only some very specific solutions, highly engineered just from the beginning, will really be able to confer a reproductive advantage by adding some new and original function. That’s why naturally selectable functional islands do exist in a complex context, but they are usually extremely complex themselves.

    In a sense, it’s Berlinski’s old argument: if you want to change a cow into a whale, you have to work a lot, and very intelligently.

    The known cases of microevolution, like antibiotic resistance, are exceptions, but there the new function is essentially a simple degradation of an existing structure, functional itself, that gives advantage because of an extreme constraint (the antibiotic in the environment). It’s Behe’s idea of burning bridges.

    In some cases, as many times discussed, some small variation in the active site of an already existing and highly complex enzyme can shift the affinity towards a different substrate, and again, if there is a very strong environmental pressure, that shift can be naturally selected. That could be the case for nylonase, as already discussed elsewhere.

    But the new function is always extremely simple, one or two AAs. Maybe we can occasionally find some case with three. I don’t know. But we are more or less at the edge of what RV can realistically do, there.

    Most specific solutions in existing proteomes are highly complex. My favourite examples, the alpha and beta chains of ATP synthase, with hundreds of specific AA positions that make an extremely sophisticated and unique 3D functional structure possible, are amazing, but they are no rare exception. Indeed, many other proteins have a conservation bitscore over hundreds of millions of years that is much greater. Of course, the amazing aspect in the conservation of those two chains is their extremely old origin.

  17. 17
    Origenes says:

    GPuccio @17

    GPuccio: There are really few functional islands that correspond to that in each context.

    To state the obvious: each organism has its own tailor-made search space with its own tailor-made functional islands. In other words, there is no such thing as a universal search space with functional islands, which applies to all organisms.
    Put simply: for each organism it holds that only a tiny subset of all biological functions can be integrated — e.g. a turtle has no use for wings and a hummingbird has no use for a (turtle) shell.

    But this obvious fact is apparently being ignored by Darwinians. Correct me if I am wrong, but it seems to me that what they envision is a (non-existent) ‘neutral organism’ that is open to any new function, no matter what it is. Such a ‘neutral organism’ can evolve in any direction while (somehow) retaining its ‘neutrality’. But logic informs us that this can only be an incoherent fantasy.

    GPuccio: In a system that already has some high complexity, like any living cell, the number of functions that can be immediately integrated in what already exists, is certainly strongly constrained.

    Another important point: the higher the complexity, the larger the search space and the fewer the functional islands. One could say that, as complexity increases, the organism commits itself to an ever smaller domain of what is functional and what is not.

  18. 18
    gpuccio says:

    Origenes:

    Yes, very good thoughts. I agree with all that you say.

    To be even more precise, I would say that the search space is always the same, because it is the search space of all possible sequences, while functional islands (the target space) vary according to the contexts.

    Let’s also remember that we are discussing protein sequences here, not morphological traits.

    I would say that some functional islands are more “objective”: for example, enzymes are certainly amazing in themselves, because what they do at the biochemical level is amazing.

    But, again, even the most wonderful enzyme could be completely useless, indeed often deleterious, in many specific cell contexts.

    So, a functional island can be extremely specific and functional in a general sense, and yet not be naturally selectable in most contexts.

    You say:

    Correct me if I am wrong, but it seems to me that what they envision is a (non-existent) ‘neutral organism’ that is open to any new function, no matter what it is. Such a ‘neutral organism’ can evolve in any direction while (somehow) retaining its ‘neutrality’. But logic informs us that this can only be an incoherent fantasy.

    You are not wrong at all!

    Not only do they imagine some mythic biological context where anything is possible, they also imagine mythic functions that can evolve toward incredible achievements, while all the evidence is against that.

    When their purpose is to support their silly propaganda, then any possible function becomes a treasure trove of miracles. Natural selection and its constraints are quickly forgotten, and the emphasis is only on some generic definition of function that can be cheaply bought, even if with some additional trick in directed evolution.

    That is the case of the Keefe and Szostak paper. They lower the concept of function to the lowest level possible, a simple and weak biochemical activity. Of course, they choose ATP, because that will give some grandeur to their achievement.

    So they just realize a very simple engineering of ATP binding from random libraries, and everybody is ready to claim that they have demonstrated how functions evolve by naturally selectable ladders!

    Propaganda, and nothing else.

    Of course they find weak ATP binding, as they would have found weak binding to almost all biochemical molecules, if they had searched for it. The simple truth is that weak binding is a completely useless function.

    But, of course, ATP binding in itself is a completely useless function, even if strong, as clearly shown by the lack of any biologically useful function in the engineered protein, the one with strong ATP binding.

    Because, of course, a protein that just binds ATP is useless. It can only blindly subtract ATP from the cellular environment. Not a good idea.

    Functional proteins bind ATP because ATP is a repository of biochemical energy. They bind ATP and work as ATPases:

    enzymes that catalyze the decomposition of ATP into ADP and a free phosphate ion. This dephosphorylation reaction releases energy, which the enzyme (in most cases) harnesses to drive other chemical reactions that would not otherwise occur.

    (From Wikipedia)

    I remember that some further paper tried to look for some ATPase function in some further directed form of Szostak’s protein, and found some minimal form of it. Unfortunately, I don’t remember the reference.

    Again, nothing really useful at the biological level.

    Because even an ATPase activity is essentially useless, if it does not “harness the released energy to drive other chemical reactions that would not otherwise occur”.

    So, we can, by directed evolution, build some protein that can bind ATP and, at a minimal level, convert it again to ADP, without any other associated result, aimlessly destroying the hard work done by ATP synthase. How funny! 🙂

    I don’t think that NS is interested in any of that.

    So, let’s leave neo-darwinists in their self-made paradise where all can happen. We are interested only in what can happen in reality, and possibly does happen.

  19. 19
    Origenes says:

    GPuccio @19

    GPuccio: To be even more precise, I would say that the search space is always the same, because it is the search space of all possible sequences, while functional islands (the target space) vary according to the contexts.

    Point taken! True story: I attempted to correct my mistake 6 minutes after I posted #18, but, woe is me, I was already too late.

    GPuccio: But, again, even the most wonderful enzyme could be completely useless, indeed often deleterious, in many specific cell contexts. So, functional islands can be extremely specific and functional in a general sense, and yet not be naturally selectable in most contexts.

    A fact that must be truly disheartening for the dedicated Darwinian … an infinite amount of monkeys banging away on typewriters produced a Shakespearian sonnet, but, alas, all is in vain … because it did not fit the Boeing 747 maintenance manual that came along — if you get my drift.

    GPuccio:… the Keefe and Szostak paper. They lower the concept of function to the lowest level possible, a simple and weak biochemical activity.

    We have to be very aware of the ‘darwinian’ use of the term ‘function’ and you are definitely correct in your insistence that a function is truly a function only when it is naturally selectable.
    Thank you for your response. For me, this issue has received enough attention — there is bigger fish to fry.

  20. 20
    gpuccio says:

    Origenes:

    A fact that must be truly disheartening for the dedicated Darwinian … an infinite amount of monkeys banging away on typewriters produced a Shakespearian sonnet, but, alas, all is in vain … because it did not fit the Boeing 747 maintenance manual that came along — if you get my drift.

    🙂 🙂 🙂

  21. 21
    OLV says:

    gpuccio,
    You have written a whole series of very thorough technical articles that could be compiled together into a serious scientific textbook that should replace the pseudoscientific nonsense used to teach biology in many schools.
    BTW, you write like a professor. Do you teach at a university?
    Just curious.
    Thanks.

  22. 22
    gpuccio says:

    OLV:

    Thank you! 🙂

    I am not exactly a professor, but as a medical doctor I have worked in a university environment, and done some teaching.

    I still do that in some measure.

  23. 23
    harry says:

    I think we take the arguments of the opponents of ID more seriously than they deserve to be taken. By doing so we give them a credibility they do not deserve at all.

    If you see a gymnasium floor covered with Scrabble pieces, and they are arranged such that they spell out an interesting mystery novel, you assume that they were arranged by an intelligent agent.

    But what if nobody knows how the Scrabble pieces got there? What if it simply can’t be proven empirically how they came to be arranged that way? You would have to go with what is the most likely explanation.

    If somebody insisted that boxes and boxes of Scrabble pieces were dumped out on the gym floor and the pieces just happened to land that way, even if you couldn’t prove that is not what happened, that scenario is so unlikely that it is simply irrational to assume that that is what happened. One would have to suspect that the advocates of such an explanation had some agenda or another motivating them.

    Volumes of digital information in the coding regions of DNA are the assembly instructions for intricate cellular machinery the integration of which gives rise to functional complexity beyond anything the best minds of modern science know how to build from scratch.

    There is no hard evidence whatsoever that indicates that chance combined with the laws of physics can mindlessly and accidentally compose volumes of such information. There is no good reason to assume that is the case. That immense quantity of functional information is the Scrabble piece mystery novel on the gymnasium floor. We don’t know how it was composed, but it is simply irrational to assume it happened mindlessly and accidentally.

    Those who claim it happened that way have an agenda: They are defending atheism instead of engaging in relentlessly objective, true science, which requires that they just admit that the only known source of massive quantities of extremely precise, functionally complex, digitally stored information is an intellect.

  24. 24
    gpuccio says:

    harry:

    I agree with all that you say, of course. Except maybe for the first statement:

    “I think we take the arguments of the opponents of ID more seriously than they deserve to be taken. By doing so we give them a credibility they do not deserve at all.”

    You know, of course, that the vast majority of scientists in the world do accept neo-darwinism as an explanation for biological realities that is, according to them, practically beyond any doubt.

    I agree that there is a strong ideology behind that, more or less conscious, and maybe part of it is “defending atheism”, or at least the idea that science is the only true repository of truth (scientism) and that it excludes a priori some forms of explanation (reductionism).

    We also know that not only those that “defend atheism”, but also a lot of religious people do accept neo-darwinism as proven truth.

    The only thing about which I don’t agree with you is that by “taking the arguments of the opponents of ID more seriously than they deserve to be taken” we “give them a credibility they do not deserve at all”.

    We take their arguments very seriously because they are wrong, and what is wrong must be shown to be wrong, especially if everybody believes it to be true.

    And they don’t need our attention to be believed. They are already believed. At most, they need us to remain silent, so that they can go on being believed without any disturbance.

    Look, I am not interested in their personal beliefs. I don’t care if my interlocutors are atheists or religious people. I just think that what is wrong should be recognized as wrong. Especially in science, where some objectivity at least in sharing and evaluating facts should be expected.

    So, I take their arguments very seriously, as far as they are really arguments, because some of them do not even deserve to be called that way. I take their arguments very seriously to demonstrate that they are wrong arguments.

  25. 25
    harry says:

    gpuccio @25,

    Thanks for your thoughtful response.

    I still think we take them way too seriously. ;o)

    They have resorted to believing in the existence of a virtually infinite number of flying spaghetti monster universes — without any evidentiary basis whatsoever for doing so — in order to explain away the fine-tuning of this Universe for life. Some universe had to win the fine-tuned-for-life lottery, right? Yeah. Right. And aren’t we lucky!

    Of course, everybody must take the actual existence of all those other universes on faith — a huge, blind, irrational faith.

    They deserve to be mocked.

  26. 26
    gpuccio says:

    harry:

    Yes, they deserve it. Not individually, I think we can always respect persons, and some of them are certainly in good faith (OK, not all of them! 🙂 )

    But the ideology itself, that does not deserve great respect at all. And, respect or not, it must be falsified.

  27. 27
    Origenes says:

    — Texas Sharp Shooter Fallacy —

    GPuccio: … we cannot look at the wall before the shooting. No pre-specification.
    After the shooting, we go to the wall. This time, however, we don’t paint anything.
    But we observe that the wall is made of bricks, small bricks. Almost all the bricks are brown. But there are a few that are green. Just a few. And they are randomly distributed in the wall.

    We observe (and understand) that in biological sequence space we do not have a uniform ‘wall’ where every spot has the same properties. Indeed, surely, not every DNA sequence is functional. In this wall-analogy, if each spot represents a different DNA sequence, then ‘green bricks’ are indeed an exception to the rule.

    “… however many ways there may be of being alive, it is certain that there are vastly more ways of being dead, or rather not alive.”
    — Richard Dawkins, The Blind Watchmaker

    Post-hoc or not, it is perfectly clear that not every spot on the wall is creditable with a bullseye.

    GPuccio: We also observe that all the 100 shots have hit green bricks. No brown brick has been hit.
    Then we infer aiming.

    Of course, the inference is correct. No TSS fallacy here.

    The ‘green bricks’, I imagine, stand for functional islands in a sea of non-functionality (‘brown bricks’). And this is exactly what we find in nature — organisms who possess functional sequences (green bricks) in a sea of non-functionality (brown bricks).

    And if organisms are just clumps of matter, neither interested in functionality nor in being alive, driven by random mutations, then the question “what is the chance that only ‘green bricks’ are hit by random shooting?” is fully justified.

  28. 28
    gpuccio says:

    Origenes:

    “The ‘green bricks’, I imagine, stand for functional islands in a sea of non-functionality (‘brown bricks’)”

    Yes. In particular, in my argument based on protein sequences, they stand for the islands of the functional sequences we observe in biological beings.

    Each of those targets has been “shot” in evolutionary history.

    Of course, it is possible that non functional spaces have been shot too. Indeed, RV often shoots non functional sequences. So, our situation is more like a lot of green bricks having been shot, and of course also many brown bricks. Negative selection eliminates the non functional results.

    But the complex functional islands in the search space are so tiny that not even one of them could have been found, in the available evolutionary time.

    If you look at the Table in my OP about the probabilistic resources of our biological systems on our planet, you can see that the whole bacterial system in the whole life span of earth could never find, with a very generous computation and including 5 sigma improbability, a functional island requiring 37 specific AAs.

    This is in perfect accord with the results of the rugged landscape experiment, which considers out of range for RV + NS a rather simple result: the retrieval of a partially damaged protein, requiring 35 specific AA substitutions.

    For comparison, the usual alpha and beta chains of ATP synthase show a conservation between E. coli and humans of 630 AAs, and as we have seen they form an extremely conserved, and practically unique, irreducibly complex structure which is necessary to synthesize ATP.

    If 35 specific AAs is the theoretical edge of our biological planet, how much smaller is a functional island of 630 specific AAs?

    Our biological shooter must certainly use the Hubble space telescope to find his targets! 🙂

    And there is more: the realistic, empirical edge is much lower than the theoretical edge. We know that in all observed cases, RV can find only extremely simple starting functions: one or two AAs.

    Something more complex is probably still acceptable. Personally, I would bet that the real edge is, at best, around 5 specific AAs. Certainly, it is much lower than the theoretical, and extremely unrealistic, edge of 35-37 AAs.

    Almost all existing functional proteins have a functional complexity that is well beyond the top edge of 37 AAs (160 bits). As we have seen, most proteins and domains easily reach at least 200 bits of functional complexity, and a lot of them exhibit hundreds or even thousands of bits of specific functional complexity. For one protein.
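    The bit conversion used here is easy to reproduce; a minimal sketch (assuming, as above, that each strictly specific AA position contributes log2(20) bits):

```python
import math

# Each strictly specific AA position contributes log2(20) ~ 4.32 bits.
def functional_bits(n_specific_aa: int) -> float:
    return n_specific_aa * math.log2(20)

for n in (5, 35, 37, 630):
    print(f"{n:>4} specific AAs ~ {functional_bits(n):7.1f} bits")
# 37 AAs ~ 160 bits (the theoretical edge discussed above), while the 630
# conserved AAs of the ATP synthase chains would correspond to ~2700 bits.
```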

    And of course, in irreducibly complex multi-protein systems, and in all regulatory networks, the complexity of the individual components multiplies. Thousands of bits are the rule, for those functions to work properly.

    The biological shooter, or shooters, is well beyond sharp: and he is really sharp, not like his counterpart in the true fallacy! 🙂

  29. 29
    Origenes says:

    GPuccio @29
    Thank you for the explanation. One short comment:

    GPuccio: Of course, it is possible that non functional spaces have been shot too. Indeed, RV often shoots non functional sequences. So, our situation is more like a lot of green bricks having been shot, and of course also many brown bricks.

    Every analogy has its limitations, but the wall analogy is pretty good. Perhaps, in this analogy, it is also true that most of the brown bricks that get hit are in the proximity of green bricks. First, because attempts at promoting total junk-DNA to actual proteins are relatively rare compared to “normal” random mutations. Secondly, because attempts to traverse the vast sea of non-functionality are beaten down by natural selection. If so, then this would bolster your suggestion that the green bricks are the target.

    Envisioning this scenario I can think of one alternative explanation instead of aiming: the green bricks contain magnets — strongly influencing the course of the bullets.
    🙂

  30. 30
    gpuccio says:

    Origenes:

    Of course every analogy has its limitations. The important thing is that the analogy must be appropriate for the aim we have using it.

    The aim of my wall-with-green-bricks analogy, as used in the OP, is simply to show, as clearly as possible, that there is a well defined class of post-hoc specifications to which the TSS fallacy does not apply at all: the class of all post-hoc specifications where the specification is not painted arbitrarily, but only observed, recognized and defined, because it is based on some objective property of the observed system.

    In that sense, both the wall and the protein space are good examples of that. The green bricks are part of the wall even before the shooting; we just acknowledge that they have been shot.

    In the same way, protein functions are a reality before those proteins come into existence, because biochemical laws determine that some specific AA sequences will be able to do specific things. That connection between sequence and possible function is not painted by us, it is a consequence of definite biochemical laws.

    So, the wall analogy is perfectly apt to refute the TSS fallacy for the protein space system.

    Of course, we can refine the analogy to model better what happens in protein space.

    The main difference is that we know that in biological systems RV exists. IOWs, there is at least one shooter who does not aim, and shoots at random.

    So, the problem could be better described as follows:

    Knowing that there is some random shooting, can we still compute the probability of having all (or part) of the green bricks shot?

    This is a trivial problem of distinguishing between a signal and the associated random noise. The random shooting is the noise. Shooting green bricks is the signal.

    As in all those problems, which are the rule in science, the answer lies in a probabilistic analysis. And we know very well what probabilities say about finding complex functional islands in the protein space, even considering the existence of random variation in the measure that we know (see my table about the probabilistic resources, many times quoted here).
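    A minimal sketch of that kind of computation, with toy numbers assumed only for illustration (the fraction of green bricks is invented):

```python
import math

# Null hypothesis (the "noise"): the 100 shots land at random on the wall.
green_fraction = 0.01      # assumed for the example: 1% of the bricks are green
n_shots = 100

p_all_green = green_fraction ** n_shots      # P(every random shot hits a green brick)
print(f"P(all {n_shots} random shots hit green bricks) = {p_all_green:.1e}")
print(f"equivalent improbability: {-math.log2(p_all_green):.0f} bits")
```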

  31. 31
    uncommon_avles says:

    You forgot about the time factor. The green colour of the bricks may be due to moss growth. When the bullets hit the random bricks, cracks were formed which facilitated faster growth of moss on those bricks. Thus even though the bullets hit random bricks, ID believes the bullets actually hit the green bricks.

  32. 32
    Origenes says:

    Nailing the Texas sharpshooter.

    The term “independent specification” should be preferred to “pre-specification”. To be clear, here by “independent” is meant independent from the outcome. Such an independent specification can be produced before, during or after the outcome; the only demand which must be met is that it is not informed by the outcome.
    Obviously, the probability of the outcome of a chance event matching a specification produced by someone before the event equals the probability of it matching a specification produced by someone after the event but without knowledge of the outcome.

    If the specification is informed by the outcome, then, and only then, we have the Texas sharpshooter fallacy.

    A well-known case:

    Miller: One of the mathematical tricks employed by intelligent design involves taking the present day situation and calculating probabilities that the present would have appeared randomly from events in the past. And the best example I can give is to sit down with four friends, shuffle a deck of 52 cards, and deal them out and keep an exact record of the order in which the cards were dealt. We can then look back and say ‘my goodness, how improbable this is. We can play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ You know what; that’s absolutely correct. Nonetheless, you dealt them out and nonetheless you got the hand that you did.

    Miller is ‘surprised’ that the outcome (“the order in which the cards were dealt”) matches the specification informed by the outcome (“the exact record of the order in which the cards were dealt”). But, obviously, there is no warrant for this perplexity at all. Given some accuracy in recording the outcome, everyone can perform the following cycle all day long: 1. deal cards. 2. make a “specification” based on the outcome. 3. see that outcome and specification match and express puzzlement.
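    A minimal numerical sketch of this point (just an illustration, not part of Miller’s argument):

```python
import math

# Any one pre-specified ordering of a shuffled 52-card deck is astronomically
# improbable; getting *some* ordering is certain. Only a specification fixed
# independently of the deal makes the small number meaningful.
p_one_independent_spec = 1 / math.factorial(52)
print(f"P(one independently specified order) ~ {p_one_independent_spec:.1e}")  # ~1.2e-68
print("P(the order you write down after dealing) = 1.0")
```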

    In GPuccio’s wall analogy, we have shots (outcomes) playing a vital role in the discovery of green bricks, and this may be confusing. One may wonder if this involvement means that the shots (the outcomes) inform the specification “green bricks are the target.”
    It does not mean that. The fact that the shots lead to discovery of green bricks is distinct from hypothesizing that the target is the green bricks.

    The hypothesis “green bricks are the target” is squarely based on the fact that there are green and brown bricks — note that any alternative hypothesis e.g. “the brown bricks are the target” can also be freely proposed — and not on the outcome.

  33. 33
    gpuccio says:

    uncommon_avles:

    I think you are probably kidding. My compliments for your imagination, however.

    In case you are serious, you could wonder if it is scientifically possible to distinguish between a green brick and a brown brick with moss. I suppose one close look could be enough.

    Or are you suggesting that the biochemical laws that explain protein folding and protein biochemical activities evolved in the course of time, after the sequences were found?

  34. 34
    gpuccio says:

    Origenes at #33:

    So, Miller too (is that Allan Miller at TSZ?) used the “infamous deck of cards fallacy”?

    You can find what I think about that at my comment #859 in the Ubiquitin thread, in answer to Allan Keith, who used the same argument, in different form, at TSZ.

    I just quote here my conclusion:

    “Therefore, the deck of cards fallacy is not only a fallacy: it is infamous, completely wrong and very, very silly and arrogant.

    It really makes me angry.”

    The rest can be read by all interested in the quoted comment. The “argument” does not deserve any more attention.

  35. 35
    gpuccio says:

    Origenes at #33:

    A few comments on what you say, and about the word “independent”.

    Pre-specifications are in a sense “independent” by definition. There is never any problem with them.

    Typically, pre-specification can use specific contingent information about the expected result, and not only really independent specifications, like a function.

    For example, if I say: now my goal is to get from my coin tossing the following sequence of results…

    and I mention a sequence of 100 binary values, and after that I toss the coin 100 times and get the pre-specified result, then I am getting 100 bits of specific information. Maybe I have some hidden magnet by which I can design the result, or there is some other explanation.

    The important point is: I am using the specific contingent “coordinates” of each result in the sequence, but it is legitimate, because I am doing that before the coin tossing. IOWs, I am painting targets before shooting. If I shoot them, I am really a Sharp Shooter.

    The problem arises with post-specifications. You say that they must be “independent”, and I agree. But perhaps the word “independent” can lead to some confusion. So, it’s better to clarify what it means.

    I have tried to do that in the OP with the following considerations:

    So, in the end of this section, let’s remind once more the truth about post-hoc definitions:

    No post-hoc definition of the function that “paints” the function using the information from the specific details of what is observed is correct. Those definitions are clear examples of TSS fallacy.
    On the contrary, any post-hoc definition that simply recognizes a function which is related to an objectively existing property of the system, and makes no special use of the specific details of what is observed to “paint” the function, is perfectly correct. It is not a case of TSS fallacy.

    I have added emphasis here to clarify that two separate conditions must be met to completely avoid any form of TSS fallacy:

    1) The function is recognized after the random shooting (whatever it is), and certainly its explicit definition, including the definition of the levels observed, depends on what we observe. In this sense, our definition is not “independent” from the results. But the first important requisite is that the function we observe and define must be “related to an objectively existing property of the system”. IOWs, the bricks were green before the shooting (we are not considering here the weird proposal about moss made by uncommon_avles at #32).

    In the case of protein functions, the connection with objectively existing properties of the system is even more clear. Indeed, even if bricks could theoretically be painted after the shooting, biochemical laws are not supposed to come into existence after the proteins themselves. At least, I hope that nobody, even at TSZ, is suggesting that.

    So, our first requisite is completely satisfied.

    2) The second important requisite is that we must “make no use of the specific details of what is observed to “paint” the function”. This is a little less intuitive, so I will try to explain it well.

    For “specific details” I mean here the contingent information in the result: IOWs, the coordinates of the shots in the case of the wall, or the specific AAs in the sequence in the case of proteins.

    The rule is simple, and universally applicable: if I need to know and use those specific contingent values to explicitly define the function, then, and only then, I am committing the TSS fallacy.

    Let’s see.

    To paint targets around the shots after the shooting, I certainly need to know where the shots are. Only in that way can I paint my target around each shot.

    So, if I define my function, after the shooting, by saying: my function is to shoot at the following coordinates: x1 …. x2 …. and so on, then I am painting my targets using my post-hoc knowledge of their specific coordinates.

    Please note that if I had done the same thing before the shooting, then my specification would have been valid, because it would have been a pre-specification, and a pre-specification can use contingent information, because it has not yet been produced.

    Let’s go to proteins. If I look at the protein and I say: well, my specification is: a 100 AAs protein with the following sequence: …

    then I am painting a target, because I am using a sequence that has already come out. That is not correct, and I am committing the TSS fallacy.

    But if I see that the protein is a very efficient enzyme for a specific biochemical reaction, and using that observation only, and not the specific sequence of the protein (and I can be completely ignorant of it), I define my function as:

    “a protein that can implement the following enzymatic reaction” (observed function) at at least the following level (upper tail based on the observed function efficiency)”

    then my post-specification is completely valid. I am not committing any TSS fallacy. My target is a real target, and my probabilities are real probabilities.

    The same idea is expressed in this other statement from my OP:

    The difference is that the existence of the green bricks is not something we “paint”: it is an objective property of the wall. And, even if we do use something that we observe post-hoc (the fact that only the green bricks have been shot) to recognize the function post-hoc, we are not using in any way the information about the specific location of each shot to define the function. The function is defined objectively and independently from the contingent information about the shots.

    Again, you can find both the points clearly mentioned here.

  36. 36
    gpuccio says:

    Origenes:

    There is another important aspect about shooting the functional islands in the protein space, an aspect that probably has not been sufficiently emphasized in the OP.

    It’s the role of Natural Selection.

    The important point is:

    NS has absolutely no role in the process of shooting the functional islands.

    IOWs, shooting the functional islands is either explained by random shooting (RV) or by aiming (Intelligent Design). NS has no role in that part of the process.

    Why? Because the equivalent of one random shot, in the case of the protein space, is one single event that generates a different genomic sequence in one individual. IOWs, one RV event.

    Indeed, my Table about the probabilistic resources of biological systems is based exactly on that idea: how many different sequence configurations can be reached in realistic systems? Each new configuration is a new shot.

    Now, if we stick to the null hypothesis, and exclude design, each new shot is a random shot, a random event of RV. Unless a functional island is hit by one shot, NS cannot work.

    So, what is the role of NS in all that?

    It’s easy. Let’s go again to our model with balls and holes scattered in a flat plane. The ball moves by RV in the flat plane. Let’s assume for the moment that this movement is free and that it finds no obstacles and can go in any direction (IOWs, that the variation is neutral).

    Let’s say that, in the random movement, the ball finds at last one hole (a functional island). What happens?

    The ball falls into the hole (however big or deep it is), and rather quickly reaches the bottom. And stays there.

    This is the role of NS. Once a functional island has been shot (found), NS can begin to act. And it can, at least in some cases, optimize the existing function, usually by a short ladder of one AA steps. Until the bottom is reached (the function is optimized for that specific functional island).

    So, NS acts in its two characteristic ways, but only after the functional island has been found:

    a) Positive selection expands and fixes each new optimizing variation, quickly reaching the bottom of the hole. This process, as far as we know from the existing examples, is quick and short and rather simple.

    b) Negative selection, at that point, conserves the optimized result (the ball cannot go out of the hole any more).

    So, as I have said, NS has no role in the shooting (in finding the hole): it just helps in the falling of the ball to the bottom of the hole.

    The task of finding the functional islands completely relies on RV (or on design). That’s why probabilities are fundamental to distinguish between the two scenarios.
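    A minimal sketch of this two-phase picture, on an assumed toy fitness landscape (selection acts only once the walk is already inside the hole):

```python
# Toy landscape: fitness is zero everywhere (the neutral plane) and positive
# only inside a tiny "hole"; selection can only act on positive fitness.
def fitness(x: int, bottom: int = 500, radius: int = 5) -> int:
    d = abs(x - bottom)
    return radius - d if d <= radius else 0

# Phase 2 only: suppose random variation has just hit the rim of the hole.
x = 496
path = [x]
while True:
    better = [c for c in (x - 1, x + 1) if fitness(c) > fitness(x)]
    if not better:       # bottom reached: negative selection now conserves it
        break
    x = better[0]        # positive selection expands and fixes each step
    path.append(x)

print("optimization path:", path)   # 496 -> 497 -> 498 -> 499 -> 500
print("final fitness:", fitness(x))
# Finding the rim of the hole in the first place is left entirely to random
# variation (or to design); that is the improbable part.
```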

  37. 37
    bill cole says:

    gpuccio

    So, as I have said, NS has no role in the shooting (in finding the hole): it just helps in the falling of the ball to the bottom of the hole.

    Can you relate this to the Hayashi paper, which says that, for the specific application they tested, marginal function was easy to find but the wild type took an enormous library to find?

    Is the marginal function a “hole” that can work toward the wild type, or is it a hole that will lead only to slightly better marginal function?

  38. 38
    jdk says:

    FYI: One time we had a long discussion about dealing cards and specifications here: https://uncommondescent.com/intelligent-design/darwinism-why-its-failed-predictions-dont-matter/

  39. 39
    mike1962 says:

    It’s going to take a few reads to fully digest this, but nicely done. Much appreciation.

  40. 40
    Origenes says:

    GPuccio @35, @36

    GP: Is that Allan Miller at TSZ?

    No, I quoted biochemist Ken Miller from Brown University. He presented this argument successfully at the Dover trial.

    GP: We have one event: the random generation of a 150 figures number.
    What is the probability of that event?
    It depends on how you define the probability. In all probability problems, you need a clear definition of what probability you are computing.

    You make a very important point. What is falsely suggested, by Ken Miller and others, is that an independent specification is matched.

    GP: So, if you define the problem as follows:
    “What is the probability of having exactly this result? … (and here you must give the exact sequence for which you are computing the probability)”

    Exactly right. Ken Miller, tell us the exact sequence you refer to when you talk about probability, and do NOT use the outcome to produce this specification.

    GP: … then the probability is 10^-150.
    But you have to define the result by the exact contingent information of the result you have already got.
    IOWs the outcome informed your specification.
    IOWs, what you are asking is the probability of a result that is what it is.

    The ‘specification’ informed by the outcome matches the outcome. Accurately done Ken Miller, but no cigar.

    GP: That probability in one try is 1 (100%). Because all results are what they are. All results have a probability of 10^-150. That property is common to all the 10^150 results. Therefore, the probability of having one generic result whose probability is 10^-150 is 1, because we have 10^150 potential results with that property, and no one that does not have it.
    So, should we be surprised that we got one specific result, that is what it is?

    Kenny Miller acted very surprised, like this:

    Miller: We can then look back and say ‘my goodness, how improbable this is. We can play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ You know what; that’s absolutely correct.

    My goodness!

    GP: Not at all. That is the only possible result. The probability is 1. No miracle, of course. Not even any special luck. Just necessity (a probability of one is necessity).

    I agree completely. I have attempted to make the exact same point in #33.

    GP: A few comments on what you say, and about the word “independent”.
    Pre-specifications are in a sense “independent” by definition. There is never any problem with them.

    I agree. However, unfortunately, obviously, no human can produce pre-specifications of e.g. functional proteins.

    GP: The problem arises with post-specifications. You say that they must be “independent”, and I agree. But perhaps the word “independent” can lead to some confusion. So, it’s better to clarify what it means.

    In #33 I offered the following clarification:

    O: To be clear, here by “independent” is meant independent from the outcome. Such an independent specification can be produced before, during or after the outcome, the only demand which must be met is that it is not informed by the outcome.

    GP: But if I see that the protein is a very efficient enzyme for a specific biochemical reaction, and using that observation only, and not the specific sequence of the protein (and I can be completely ignorant of it), I define my function as:
    “a protein that can implement the following enzymatic reaction” (observed function) at at least the following level (upper tail based on the observed function efficiency)”
    then my post-specification is completely valid. I am not committing any TSS fallacy. My target is a real target, and my probabilities are real probabilities.

    I agree. My only comment is that I still prefer the term “independent specification”. Calling it “post-specification” is confusing and also less accurate. The specification is not based on the outcome and it is therefore irrelevant if it happens before, during or after the outcome.

  41. 41
    bill cole says:

    gpuccio

    Indeed, falling into a bigger hole (a much bigger hole, indeed) is rather a severe obstacle to finding the tiny hole of the wildtype. Finding it is already almost impossible because it is so tiny, but it becomes even more impossible if the ball falls into a big hole, because it will be trapped there by NS.

    Therefore, to sum up, both the existence of 2000 isolated protein superfamilies and the evidence from the rugged landscape paper demonstrate that functional islands exist, and that they are isolated in the sequence space.

    After my re-read, I see you have answered my question.

  42. 42
    gpuccio says:

    bill cole at #38 and 42:

    The Hayashi paper is about function retrieving. So, it is not about a completely new function.

    They replaced one domain of the g3p protein of the phage, a 424 AA long protein necessary for infectivity, with a random sequence of 139 AAs.

    The protein remained barely functional, and that’s what allows them to test RV and NS: the function is still there, even if greatly reduced. The phage can still survive and infect.

    An important point is that fitness is measured here as the natural logarithm of infectivity, therefore those are exponential values.

    If you look at Fig. 2, you can see that the initial infectivity is about:

    e^5 = 148

    Their best result is about:

    e^14.8 = 2,676,445

    That’s why they say that they had an increase in infectivity of about 17,000-fold.

    (the numbers are not precise, I am deriving them from the Figure).

    However, the wildtype has an infectivity of about:

    e^22.4 = 5,348,061,523

    which is about 2000 times greater (from 2.7 million to 5.3 billion).

    So, they are still far away from the function of the wildtype, and they have already reached stagnation.
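    For convenience, the arithmetic can be reproduced in a few lines (the ln values are my approximate readings of Fig. 2):

```python
import math

# Fitness in the Hayashi et al. data is ln(infectivity); exponentiate to
# compare raw infectivities.
ln_initial, ln_best, ln_wildtype = 5.0, 14.8, 22.4   # approximate readings

initial, best, wildtype = (math.exp(v) for v in (ln_initial, ln_best, ln_wildtype))
print(f"initial infectivity ~ {initial:,.0f}")       # ~148
print(f"best evolved        ~ {best:,.0f}")          # ~2.7 million
print(f"wild type           ~ {wildtype:,.0f}")      # ~5.3 billion
print(f"improvement reached ~ {best / initial:,.0f}-fold")
print(f"gap to wild type    ~ {wildtype / best:,.0f}-fold")
```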

    Moreover, if you look at the sequences at the bottom of the same Figure, you can see that the best result obtained has no homology to the sequence of the wildtype. As the authors say:

    “More than one such mountain exists in the fitness landscape of the function for the D2 domain in phage infectivity. The sequence selected finally at the 20th generation has ?=?0.52 but showed no homology to the wild-type D2 domain, which was located around the fitness of the global peak. The two sequences would show significant homology around 52% if they were located on the same mountain. Therefore, they seem to have climbed up different mountains.”

  43. 43
    gpuccio says:

    Origenes at #41:

    OK, I would say that we agree perfectly. 🙂

  44. 44
    gpuccio says:

    mike1962:

    Thank you very much. Your appreciation is much appreciated! 🙂

  45. 45
    gpuccio says:

    jdk:

    Thank you for the link. It seems that I did not take part in that discussion.

    At present I cannot read that long thread, because as you can see I am rather busy.

    Is there any specific argument that you would like to propose?

  46. 46
    Origenes says:

    GPuccio @37

    GP: NS has absolutely no role in the process of shooting the functional islands.
    The ball falls into the hole (however big or deep it is), and rather quickly reaches the bottom. And stays there.
    This is the role of NS. Once a functional island has been shot (found), NS can begin to act. And it can, at least in some cases, optimize the existing function, usually by a short ladder of one AA steps. Until the bottom is reached (the function is optimized for that specific functional island).
    So, NS acts in its two characteristic ways, but only after the functional island has been found …

    This is not immediately clear to me.
    At the moment that NS optimizes a function, can it be argued that NS has some influence on these “optimizing shots”?
    Assuming that each new configuration is a new shot, perhaps, one can argue that NS indirectly, by fixating the ball in the hole and steering it towards the lowest point, induces more shots to be fired in the area of the hole, rather than somewhere else?

    IOWs is there a secondary role for NS in relation to the shots fired during the optimization process? As in, NS never fires the first shot, but, instead, induces some ‘follow-up-shots’.

  47. 47
    bill cole says:

    gpuccio

    However, the wildtype has an infectivity of about:

    e^22.4 = 5,348,061,523

    which is about 2000 times greater (from 2.6 millions to 5.3 billions).

    So, they are still far away from the function of the wildtype, and they have already reached stagnation.

    Moreover, if you look at the sequences at the bottom of the same Figure, you can see that the best result obtained has no homology to the sequence of the wildtype. As the authors say:

    “More than one such mountain exists in the fitness landscape of the function for the D2 domain in phage infectivity. The sequence selected finally at the 20th generation has ?=?0.52 but showed no homology to the wild-type D2 domain, which was located around the fitness of the global peak. The two sequences would show significant homology around 52% if they were located on the same mountain. Therefore, they seem to have climbed up different mountains.”

    Amazing and helpful. Solid evidence of a separate hole that “traps” the protein away from the wild type.

  48. 48
    bill cole says:

    gpuccio
    There are a couple of responses in the TSZ ubiquitin thread. I responded to DNA jock briefly but your comments would be greatly appreciated.

  49. 49
    gpuccio says:

    Origenes:

    No, that was not what I meant. NS acts only on the shot that has already found a functional island, because it needs an existing, naturally selectable function to act.

    Going back to the wall metaphor, it’s as if the shots that hit a green brick, and only those that hit a green brick, become in some way “centered” after the hit: so, even if they hit the green brick, say, in a corner, there is a mechanism that moves the bullet to the center.

    That does not happen to the shots that hit the brown bricks.

    So, NS is a mechanism that works in the protein space, but not as a rule on the wall (I am aware of no mechanism that centers the bullet after the shot).

    As you said yourself, every analogy has its limitations!

    One important difference is that the wall model is a random search, while the search in protein space is a random walk. That does not change much in terms of probabilities, but the models are different.

    So, the model of the ball and holes corresponds better to what happens in protein space.

    The ball is some sequence, possibly non coding, that changes through neutral variation (it can go in any direction, on the flat plane). As we have said many times, this is the best scenario for finding a new functional island, because already existing functional sequences are already in a hole, and it is extremely difficult for them to move away from it.

    So, the ball can potentially explore all the search space by neutral variation, but of course it does not have the resources to explore all possible trajectories.

    The movement of the ball is the random walk. We can think of each new state tested as a discrete movement. Most movements (aminoacid substitutions) make the ball move gradually through the protein space, by small shifts, but some types of variation (indels, frameshifts, and so on) can make it move suddenly to different parts of the space. However, each new state is a new try, which can potentially find a hole, but only according to the probabilities of finding it.

    If a hole is found, and a naturally selectable function appears, then the ball falls in the hole, and most likely its movement will be confined in the functional island itself, until optimization is reached. The higher the optimization, the more difficult it will be for the ball to go out of the hole and start again a neutral walk.

    A random search and a random walk are two different kinds of random systems, which have many things in common but differ in some aspects. However, essentially the probabilistic computation is not really different: if a target is extremely improbable in a random search (the shooting), it is also extremely improbable in a random walk (the ball), provided of course that the walk does not start from a position near the target: all that is necessary is that the starting position be unrelated at the sequence level, as is the case, for example, for all the 2000 protein superfamilies.

    Even in the case where an already functional protein undergoes a sudden functional transition which is in itself complex, like for example in the transition to vertebrates, there is no difference. The fact that the whole protein already had part of the functional information that will be conserved up to humans before the transition does not help to explain the appearance of huge new amounts of specific sequence homology to the human form. Again, the random walk is from an unrelated sequence (the part of the molecule that had no homology with the human form) to a new functional hole (the new functional part of the sequence that appears in vertebrates and has high homology to the human form, and that will be conserved from then on).

    The important point is that the functional transition must be complex: as I have said many times, there is no difference, probabilistically, if we build a completely new protein which has 500 bits of human conserved functional information, or if we add 500 bits of human conserved functional information to a protein that already exhibited 300 bits of it, and then goes to 800 bits in the transition. In both cases, we are generating 500 new and functional bits of human conserved information that did not exist before, starting from an unrelated sequence, or part of sequence.
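
    To make the point about random searches and random walks concrete, here is a minimal toy simulation (binary strings instead of amino acid sequences, and an arbitrarily defined “island”, so it is only a sketch of the idea, not a model of real proteins): a random walk that starts from a state unrelated to the island hits it about as rarely as plain random sampling does.

      import random

      L = 20                      # 2^20 (about a million) possible states
      K = 14                      # "island" = states whose lowest K bits are all 1
      MASK = (1 << K) - 1
      TRIES = 1000                # states examined per attempt
      ATTEMPTS = 2000

      def in_island(s):
          return (s & MASK) == MASK

      def random_search():
          # independent random shots anywhere in the space
          return any(in_island(random.getrandbits(L)) for _ in range(TRIES))

      def random_walk():
          # start from a state unrelated to the island, then flip one random bit per step
          s = random.getrandbits(L)
          while bin(s & MASK).count("1") > K // 2:
              s = random.getrandbits(L)
          for _ in range(TRIES):
              s ^= 1 << random.randrange(L)
              if in_island(s):
                  return True
          return False

      hits_search = sum(random_search() for _ in range(ATTEMPTS))
      hits_walk = sum(random_walk() for _ in range(ATTEMPTS))
      print(f"random search hit rate: {hits_search / ATTEMPTS:.3f}")
      print(f"random walk hit rate:   {hits_walk / ATTEMPTS:.3f}")

    Both rates come out small and of the same order, which is all the comparison is meant to show.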

  50. 50
    gpuccio says:

    bill cole:

    I have read the comment by DNA_Jock.

    April 15, 2018 at 9:12 pm

    What a disappointment. Seriously.

    He does not want to discuss “over the fence”. OK, his choice. Therefore, I will not address him directly either. I can discuss over the fence, and I have done exactly that, but I don’t like to shout over the fence to someone who has already declared that he will not respond.

    Moreover, he points to our old exchanges instead of dealing with my arguments here. Again, his choice. But I will not go back to re-read the past. I have worked a lot to present my arguments together, and in a new form, and I will answer only to those who deal with the things I have said here.

    He seems offended that I have added the:

    “ATP synthase (rather than ATPase)”

    clarification.

    Of course he will not believe it, but I have done that only to avoid equivocations. All the discussions here have been about ATP synthase, which I have always called by that name. The official name of the beta chain that I discuss (P06576) is, at Uniprot:

    “ATP synthase subunit beta, mitochondrial”

    Of course ATP synthase is also an ATPase, because it can work in both directions. But the term ATPase is less specific, because there are a lot of ATPases that are in no way ATP synthases. See Wikipedia for a very simple reference:

    ATPase

    https://en.wikipedia.org/wiki/ATPase

    So, it was important to clarify that I was of course speaking of ATP synthase. The clarification was meant to avoid confusion, not to generate it intentionally, as he has tried to do.

    He does not answer my criticism of his level of definition argument (the things he says are no answer at all, as anyone can check). Again, his choice.

    But it is really shameful that he has not even mentioned my point that his objection to my argument about the alpha and beta chains of ATP synthase is completely wrong.

    As I have said, the alpha and beta chains of ATP synthase are the same in Alveolata as in all other organisms. So he is wrong, I have clearly said why, quoting the same paper that he linked, and he does not even mention the fact.

    He is simply ridiculous about my argument regarding time measuring systems.

    “omits the water clock and the candle clock”. I cannot believe that he says that!

    Just for the record, this is from the OP:

    So, we wonder: are there other solutions to measure time? Are there other functional islands in the search space of material objects?

    Of course there are.

    I will just mention four clear examples: a sundial, an hourglass, a digital clock, an atomic clock.

    Emphasis added.

    Is this “whining”? Is this “ignorance or lack of attention” that is “leading me to underestimate the number of other possible ways of achieving any function”?

    You judge. Again, I quote from my OP:

    Does the existence of the four mentioned alternative solutions, or maybe of other possible similar solutions, make the design inference for the traditional watch less correct?

    The answer, of course, is no.

    But why?

    It’s simple. Let’s say, just for the sake of discussion, that the traditional watch has a functional complexity of 600 bits. There are at least 4 additional solutions. Let’s say that each of them has, again, a functional complexity of 500 bits.

    How much does that change the probability of getting the watch?

    The answer is: 2 bits (because we have 4 solutions instead of one). So, now the probability is 598 bits.

    But, of course, there can be many more solutions. Let’s say 1000. Now the probability would be about 590 bits. Let’s say one million different complex solutions (this is becoming generous, I would say). 580 bits. One billion? 570 bits.

    Shall I go on?

    When the search space is really huge, the number of really complex solutions is empirically irrelevant to the design inference. One observed complex solution is more than enough to infer design. Correctly.

    We could call this argument: “How many needles do you need to transform a haystack into a needlestack?” And the answer is: really a lot of them.

    Our poor 4 alternative solutions will not do the trick.
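
    For those who want to check the arithmetic, here is a minimal sketch in Python, using the same illustrative numbers (a 600-bit watch and equally complex alternative solutions) and the same rounding as above:

      import math

      observed_bits = 600   # illustrative functional complexity of the watch

      # n equally complex, independent solutions multiply the target space by n,
      # which subtracts log2(n) bits from the improbability of "some solution".
      for n in [1, 4, 1_000, 1_000_000, 1_000_000_000]:
          print(f"{n:>13,} solutions -> about {observed_bits - math.log2(n):.0f} bits")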

    That said, I am really happy that he does not want to “shout over the fence”. This is very bad shouting, arrogant evasion, and certainly not acceptable behaviour from someone who is certainly not stupid.

    Just to be polite, goodbye to him.

  51. 51
    gpuccio says:

    bill cole:

    As you have probably noticed, Rumraket:

    April 15, 2018 at 9:31 pm

    is just reciting again the infamous deck of cards fallacy.

    I will not waste my time with him, repeating what I have already said (see #35 here and #859 in the Ubiquitin thread).

  52. 52
    gpuccio says:

    Origenes:

    Please notice how Rumraket at TSZ has given us a full example of the fallacy I have described:

    But that’s silly, because all sufficiently long historical developments will look unbelievably unlikely after the fact. To pick an example, take one of the lineages in the Long Term Evolution experiment with E coli. In this lineage, over 600 particular mutations have accumulated in the E coli genome over the last 25 years. What is the likelihood of that particular collection of mutations?

    Emphasis mine.

    He is clearly violating my second fundamental requisite to avoid the TSS fallacy, as explained both in the OP and in my discussion with you at #36:

    “2) The second important requisite is that we must “make no use of the specific details of what is observed to “paint” the function”. This is a little less intuitive, so I will try to explain it well.

    For “specific details” I mean here the contingent information in the result: IOWs, the coordinates of the shots in the case of the wall, or the specific AAs in the sequence in the case of proteins.

    The rule is simple, and universally applicable: if I need to know and use those specific contingent values to explicitly define the function, then, and only then, I am committing the TSS fallacy.”

    Rumraket is doing exactly that: he is using the specific contingent values in a post-hoc specification. So, he is committing a fallacy that ID never commits.

    A good demonstration of my point. Should I thank him? 🙂

  53. 53
    Origenes says:

    GPuccio @53

    GP: Rumraket is doing exactly that: he is using the specific contingent values in a post-hoc specification. So, he is committing a fallacy that ID never commits.

    Yes that is exactly what he does. It is Ken Miller’s mistake all over again.

    “What is the likelihood of that particular collection of mutations?”, Rumraket asks. In return I would like to ask him: “What probability are you attempting to compute?” And as a follow-up question: “Are we talking about the probability that the outcome matches a specification informed by the outcome? If so, then the chance is 100%.”

  54. 54
    bill cole says:

    gpuccio

    Moreover, he points to our old exchanges instead of dealing with my arguments here. Again, his choice. But I will not go back to re-read the past. I have worked a lot to present my arguments together, and in a new form, and I will answer only to those who deal with the things I have said here.

    I looked over the old exchanges and his use of the TSS was fallacious. You are comparing protein sequence data over different species which seems to have nothing to do with the TSS fallacy.

    I am grateful that his challenge got you to write this excellent OP, which was very educational for me, especially the highlights you made on the Hayashi paper.

    Rumraket usually backs up his claims. I agree that his argument was based on a straw-man fallacy, but honestly I think that’s the best he can do. The data here is very problematic for the Neo-Darwinian position.

    The TSS claim was also a fallacy and a clever argument by Jock, but again it misrepresented your claims.

    Joe Felsenstein said he would not comment on the TSS op but would write an op addressing your definition of information. I look forward to his op and hope that it generates a more productive discussion between UD and TSZ.

    From his lecture I do believe that he understands the challenge that genetic information brings to understanding the cause of the diversity of living organisms.

    Again, thank you so much for this clearly written op.:-)

  55. 55
    uncommon_avles says:

    gpuccio @ 34,
    I don’t think the analogy of bricks and bullets works. The point is, in biological processes, you need to take the time factor and incremental probability into account. The green colour of the brick might not be paint. It might be moss formed over a few months’ time. If you see a property of a biological system which seems improbable at first glance, you should consider the fact that the property might have evolved over time from other dissimilar properties. Thus in the flagellum of the E. coli bacterium, there are around 40 different kinds of proteins but only 23 of these proteins are common to all the other bacterial flagella. Of these 23 proteins just two are unique to flagella. The others all closely resemble proteins that carry out other functions in the cell. This means that the vast majority of the components needed to make a flagellum might already have been present in bacteria before this structure appeared.

  56. 56
    tribune7 says:

    Origenes, great point:

    Given some accuracy recording the outcome, everyone can perform the following cycle all day long: 1. deal cards. 2. make a “specification” based on the outcome. 3. see that outcome and specification match and express puzzlement.

    Now, imagine the exact sequence had been predicted beforehand. Would they still say it was by chance?

    What if someone took that deck and, rather than dealing them, just built a house of cards? Would they claim that as within the realm of chance?

  57. 57
    gpuccio says:

    uncommon_avles:

    Thank you for your comment.

    What you say is not really connected to the discussion here, but I will answer your points.

    I don’t think the analogy of bricks and bullets works.

    As already discussed with Origenes at #31, #36 and #50, the bricks analogy in the OP has only one purpose: to show that there is a class of systems and events to which the TSS fallacy does not apply.

    The wall with the green bricks and the protein functions both belong to that class, for exactly the same reasons, which I have explicitly discussed (my two requirements), as detailed in the OP and at #36:

    1) The function is recognized after the random shooting (whatever it is), and certainly its explicit definition, including the definition of the levels observed, depends on what we observe. In this sense, our definition is not “independent” from the results. But the first important requisite is that the function we observe and define must be “related to an objectively existing property of the system”. IOWs, the bricks were green before the shooting (we are not considering here the weird proposal about moss made by uncommon_avles at #32).

    In the case of protein functions, the connection with objectively existing properties of the system is even more clear. Indeed, while bricks could theoretically be painted after the shooting, biochemical laws are not supposed to come into existence after the proteins themselves. At least, I hope that nobody, even at TSZ, is suggesting that.

    So, our first requisite is completely satisfied.

    2) The second important requisite is that we must “make no use of the specific details of what is observed to “paint” the function”. This is a little less intuitive, so I will try to explain it well.

    For “specific details” I mean here the contingent information in the result: IOWs, the coordinates of the shots in the case of the wall, or the specific AAs in the sequence in the case of proteins.

    The rule is simple, and universally applicable: if I need to know and use those specific contingent values to explicitly define the function, then, and only then, I am committing the TSS fallacy.

    This is the purpose of the analogy. For that purpose, it is perfectly appropriate.

    For the rest, of course, it is not a model of a biological system.

    Then you say:

    The point is, in biological processes, you need to take the time factor and incremental probability into account.

    I don’t know what you mean by “incremental probability”; I suppose you mean Natural Selection.

    Of course I take into account the time factor and NS in all my discussions about biological systems. You can find my arguments here:

    What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson

    https://uncommondescent.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/

    and here:

    What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world

    https://uncommondescent.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/

    In the second OP, just at the beginning, you can find a Table with the computation of the probabilistic resources of biological systems on our planet, for its full lifetime.

    All those points of course are extremely important, and I have discussed them in great detail.

    But they have nothing to do with the TSS fallacy argument, which is the issue debated here.

    So, we have discovered two great truths:

    a) Any analogy has its limitations

    b) I cannot discuss everything at the same time

    The green colour of the brick might not be paint. It might be moss formed over a few months’ time.

    As already said, your point is wrong and not pertinent. There are of course ways to distinguish between a painted brick and moss.

    The important question is: is the property I am using to recognize the function post-hoc an objective property of the system, or am I inventing it now?

    For protein function, there is absolutely no doubt: the function of a protein is the strict consequence of biochemical laws. I am inventing neither the laws nor the observed function. They are objective properties of the system of our observable universe.

    So, while you can still have some rather unreasonable doubt for the green brick (the “moss” alternative), there can be no doubt for the protein function.

    Your idea is probably that the protein could have acquired the function gradually, by a process of RV and NS. But that is not an argument about the TSS fallacy, as I have already explained.

    The problem here is: can we reject the null hypothesis of a random origin using a specification post-hoc?

    And the clear answer is: yes, of course, but we have to respect these two requirements (see above).

    The probabilistic analysis has only one purpose: to reject a random origin.

    Mechanisms like NS must be evaluated in other ways, considering what they can do and what they cannot do in the observed system. As I have done both in my previous OPs and here.

    If you see a property of a biological system which seems improbable at first glance, you should consider the fact that the property might have evolved over time from other dissimilar properties.

    Of course, and I have done that a lot of times. But, as said, that has nothing to do with the TSS fallacy.

    The problem in the TSS fallacy is: is the property I am using in my reasoning an objective property, for which I can build a probabilistic analysis of the hypothesis of a random origin (of course also considering, if appropriate, the role of necessity factors, like NS), or is it a “painted” property, one that did not exist before observing what I am observing?

    You are conflating different arguments here. I have discussed all of them, but, as said before, not all at the same time.

    Finally you say:

    Thus in the flagellum of the E. coli bacterium, there are around 40 different kinds of proteins but only 23 of these proteins are common to all the other bacterial flagella. Of these 23 proteins just two are unique to flagella. The others all closely resemble proteins that carry out other functions in the cell. This means that the vast majority of the components needed to make a flagellum might already have been present in bacteria before this structure appeared.

    This is the old (and wrong) argument against Irreducible Complexity. Again, it’s another argument, and has nothing to do with the TSS fallacy.

    Moreover, I have not used IC in this OP and in this discussion as a relevant argument. My examples are essentially about the functional complexity of single proteins, for example the alpha and beta chains of ATP synthase.

    But of course the system made by those two proteins together is certainly irreducibly complex. Each of the two proteins is powerless without the other. But each of the two proteins is also functionally complex in its own right.

    However, the discussion here is not about IC. Again, you conflate different arguments without any reason to do that.

  58. 58
    gpuccio says:

    Origenes at #54:

    “Yes that is exactly what he does. It is Ken Miller’s mistake all over again.”

    Yes, sometimes our kind interlocutors really help us.

    Seriously, I am really amazed that they are still using the infamous deck of cards fallacy! What’s wrong in their minds?

    At least DNA_Jock has avoided that intellectual degradation.

    At least up to now… 🙂

  59. 59
    gpuccio says:

    bill cole:

    Thank you for the kind words! 🙂

    I am looking forward to Joe Felsenstein’s clarifications. He seems to be one of the last people there willing to discuss reasonably.

  60. 60
    gpuccio says:

    To all:

    Not much at TSZ.

    Entropy continues to confound the problem of the TSS fallacy with the problem of alternative solutions. I have discussed them both in the OP, but he seems not to be aware of that.

    Just to help him understand:

    a) The problem of the TSS fallacy is: is the post-hoc specification valid, and when? I have answered that problem very clearly: any post-hoc specification is valid if the two requisites I have described are satisfied. In that case, there is no TSS fallacy.

    My two requisites are always satisfied in the ID inferences, therefore the TSS fallacy does not apply to the ID inference.

    b) Then there is the problem of how to compute the probability of the observed function. Entropy thinks that this too is part of the TSS fallacy, because he follows the wrong reasoning of DNA_Jock. But that has nothing to do with the fallacy itself. At most, it is a minor problem of how to compute probabilities.

    I have clearly argued that with huge search spaces, and with highly complex solutions, that problem is irrelevant. We can very well compute the specificity of the observed solution, and ignore other possible complex solutions, which would not change the result in any significant way for our purposes. DNA_Jock and Entropy can disagree, but I have discussed the issue, and given my reasons. Everyone can judge for himself.

    Then there is the issue of the level at which the function must be defined. I have clearly stated that the only correct scientific approach is to define as the rejection region the upper tail of the observed effect, as everybody does in hypothesis testing. DNA_Jock does not like my answer, but he has not explained why. He also gives cryptic allusions to some different argument that I could have used, but of course he does not say what it is. And, of course, I suppose that he laughs. Good for him.

    Finally, Entropy, like DNA_Jock, seems not to have understood the simple fact that the alpha and beta chains of ATP synthase have the same conserved sequence in Alveolata as in all other organisms. Could someone please explain to these people that I have discussed that issue in the OP, with precise references from the literature that they had linked? If they think that I am wrong, I am ready to listen to their reasons.

  61. 61
    Origenes says:

    Tribune7 @57

    T7: Now, imagine the exact sequence had been predicted before hand. Would they still say it was by chance?

    I do like your ‘simple’ question. Indeed, suppose a card dealer successfully specifies beforehand which cards Miller will get, would Miller accept this as a chance event? What if the card dealer gets it right every time all day long? What would Miller say? That, without design, this is impossible, perhaps?

    If so, Miller needs to explain his position, since Miller assigns the same probability to pre-specified and post-specified events. He lumps those two categories together.

    And here lies Miller’s obvious mistake. In his example (see quote in #33) he smuggles in the specification.

  62. 62
    gpuccio says:

    Tribune7 and Origenes:

    As I have argued at #36, pre-specifications are always valid, and they can use any contingent information, because of course that contingent information does not derive from any random event that has already happened.

    Post-specifications, instead, are valid only if they are about objective properties and if they don’t use any already existing contingent information.

    If a specification is valid, only the complexity of the specification matters for a design inference. If the complexity is the same, there is absolutely no difference between a pre-specification and a valid post-specification.

  63. 63
    tribune7 says:

    O & GP

    If so, Miller needs to explain his position, since Miller assigns the same probability to pre-specified and post-specified events. He lumps those two categories together.

    If a specification is valid, only the complexity of the specification matters for a design inference. If the complexity is the same, there is absolutely no difference between a pre-specification and a valid post-specification.

    Suppose after dealing a deck all day, one particular sequence occurs which causes the lights to come on, music to start playing and a clown to come in with a cake. Could that reasonably be considered a chance event?

    Any arrangement of the chemistry of the genetic code has an equally minuscule probability, but only if it is arranged in a specific way does something happen.

  64. 64
    gpuccio says:

    tribune7:

    Of course. The key concept is always the complexity that is necessary to implement the function.

    A very interesting example to understand better the importance of the functional complexity of a sequence, and why complexity is not additive, can be found in the Ubiquitin thread, in my discussion with Joe Felsenstein, from whom we are waiting for a more detailed answer. It’s the thief scenario.

    See here:

    The Ubiquitin System: Functional Complexity and Semiosis joined together.

    https://uncommondescent.com/intelligent-design/the-ubiquitin-system-functional-complexity-and-semiosis-joined-together/#comment-656365

    #823, #831, #859, #882, #919

    I paste here, for convenience, the final summary of the mental experiment, from comment #919 (to Joe Felsenstein):

    The thief mental experiment can be found as a first draft at my comment #823, quoted again at #831, and then repeated at #847 (to Allan Keith) in a more articulated form.

    In essence, we compare two systems. One is made of one single object (a big safe), the other of 150 smaller safes.

    The sum in the big safe is the same as the sums in the 150 smaller safes put together. That ensures that both systems, if solved, increase the fitness of the thief in the same measure.

    Let’s say that our functional objects, in each system, are:

    a) a single piece of card with the 150 figures of the key to the big safe

    b) 150 pieces of card, each containing the one-figure key to one of the small safes (correctly labeled, so that the thief can use them directly).

    Now, if the thief owns the functional objects, he can easily get the sum, both in the big safe and in the small safes.

    But our model is that the keys are not known to the thief, so we want to compute the probability of getting to them in the two different scenarios by a random search.

    So, in the first scenario, the thief tries the 10^150 possible solutions, until he finds the right one.

    In the second scenario, he tries the ten possible solutions for the first safe, opens it, then passes to the second, and so on.

    A more detailed analysis of the time needed in each scenario can be found in my comment #847.

    So, I would really appreciate if you could answer this simple question:

    Do you think that the two scenarios are equivalent?

    What should the thief do, according to your views?

    This is meant as an explicit answer to your statement mentioned before:

    “That counts up changes anywhere in the genome, as long as they contribute to the fitness, and it counts up whatever successive changes occur.”

    The system with the 150 safes corresponds to the idea of a function that include changes “anywhere in the genome, as long as they contribute to the fitness”.

    The system with one big safe corresponds to my idea of one single object (or IC system of objects) where the function (opening the safe) is not present unless 500 specific bits are present.
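
    For completeness, here is the arithmetic of the two scenarios in a minimal sketch (assuming the thief tests combinations one at a time and never repeats one):

      digits = 150   # figures in the key(s)
      alphabet = 10  # possible figures per position

      # Scenario a) one big safe: a single 150-figure key. A blind search must
      # try, on average, about half of the whole space before it succeeds.
      big_safe_tries = alphabet ** digits / 2

      # Scenario b) 150 small safes, each with its own 1-figure key that can be
      # tested (and rewarded) independently: about 5 tries per safe on average.
      small_safes_tries = digits * (alphabet / 2)

      print(f"one big safe:    ~{big_safe_tries:.1e} expected tries")
      print(f"150 small safes: ~{small_safes_tries:.0f} expected tries")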

  65. 65
    gpuccio says:

    To all:

    I have just posted this comment in the Ubiquitin thread. I think it is relevant to the discussion here, too, because E3 ligases are one of the examples proposed by DNA_Jock. So, I copy it here too:

    This recent paper is really thorough, long and detailed. It is an extremely good summary about what is known of the role of ubiquitin in the regulation of the critical pathway of NF-κB Signaling, of which we have said a lot during this discussion:

    The Many Roles of Ubiquitin in NF-kB Signaling

    http://www.mdpi.com/2227-9059/6/2/43/htm

    I quote just a few parts:

    Abstract:

    The nuclear factor kB (NF-kB) signaling pathway ubiquitously controls cell growth and survival in basic conditions as well as rapid resetting of cellular functions following environment changes or pathogenic insults. Moreover, its deregulation is frequently observed during cell transformation, chronic inflammation or autoimmunity. Understanding how it is properly regulated therefore is a prerequisite to managing these adverse situations. Over the last years evidence has accumulated showing that ubiquitination is a key process in NF-kB activation and its resolution. Here, we examine the various functions of ubiquitin in NF-kB signaling and more specifically, how it controls signal transduction at the molecular level and impacts in vivo on NF-kB regulated cellular processes.

    —

    Importantly, the number of E3 Ligases or DUBs mutations found to be associated with human pathologies such as inflammatory diseases, rare diseases, cancers and neurodegenerative disorders is rapidly increasing [22,23,24]. There is now clear evidence that many E3s and DUBs play critical roles in NF-kB signaling, as will be discussed in the next sections, and therefore represent attractive pharmacological targets in the field of cancers and inflammation or rare diseases.

    —

    3.3. Ubiquitin Binding Domains in NF-kB Signaling
    Interpretation of the “ubiquitin code” is achieved through the recognition of different kinds of ubiquitin moieties by specific UBD-containing proteins [34]. UBDs are quite diverse, belonging to more than twenty families, and their main characteristics can be summarized as follows: (1) They vary widely in size, amino acid sequences and three-dimensional structure; (2) The majority of them recognize the same hydrophobic patch on the beta-sheet surface of ubiquitin, that includes Ile44, Leu8 and Val70; (3) Their affinity for ubiquitin is low (in the higher µM to lower mM range) but can be increased following polyubiquitination or through their repeated occurrence within a protein; (4) Using the topology of the ubiquitin chains, they discriminate between modified substrates to allow specific interactions or enzymatic processes. For instance, K11- and K48-linked chains adopt a rather closed conformation, whereas K63- or M1-linked chains are more elongated.
    In the NF-kB signaling pathway, several key players such as TAB2/3, NEMO and LUBAC are UBD-containing proteins whose ability to recognize ubiquitin chains is at the heart of their functions.

    —

    9. In Vivo Relevance of Ubiquitin-Dependent NF-kB Processes
    NF-kB-related ubiquitination/ubiquitin recognition processes described above at the protein level, regulate many important cellular/organismal functions impacting on human health. Indeed, several inherited pathologies recently identified are due to mutations on proteins involved in NF-kB signaling that impair ubiquitin-related processes [305]. Not surprisingly, given the close relationship existing between NF-kB and receptors participating in innate and acquired immunity, these diseases are associated with immunodeficiency and/or deregulated inflammation.

    10. Conclusions
    Over the last fifteen years a wealth of studies has confirmed the critical function of ubiquitin in regulating essential processes such as signal transduction, DNA transcription, endocytosis or cell cycle. Focusing on the ubiquitin-dependent mechanisms of signal regulation and regulation of NF-kB pathways, as done here, illustrates the amazing versatility of ubiquitination in controlling the fate of protein, building of macromolecular protein complexes and fine-tuning regulation of signal transmission. All these molecular events are dependent on the existence of an intricate ubiquitin code that allows the scanning and proper translation of the various status of a given protein. Actually, this covalent addition of a polypeptide to a protein, a reaction that may seem to be a particularly energy consuming process, allows a crucial degree of flexibility and the occurrence of almost unlimited new layers of regulation. This latter point is particularly evident with ubiquitination/deubiquitination events regulating the fate and activity of primary targets often modulated themselves by ubiquitination/deubiquitination events regulating the fate and activity of ubiquitination effectors and so on.

    —

    To the best of our knowledge the amazingly broad and intricate dependency of NF-kB signaling on ubiquitin has not been observed in any other major signaling pathways. It remains to be seen whether this is a unique property of the NF-kB signaling pathway or only due to a lack of exhaustive characterization of players involved in those other pathways.
    Finally, supporting the crucial function of ubiquitin-related processes in NF-kB signaling is their strong evolutionary conservation.

    Emphasis mine.

    The whole paper is amazingly full of fascinating information. I highly recommend it to all, and especially to those who have expressed doubts and simplistic judgments about the intricacy and specificity of the ubiquitin system, in particular the E3 ligases.

    But what’s the point? They will never change their mind.

  66. 66
    tribune7 says:

    GP, I love your posts and you make great points.

    I have long concluded that the opposition to ID is not based on science and reason but extreme emotion.

  67. 67
    ET says:

    uncommon avles- It isn’t just the right proteins. You need them in the correct concentrations, at the right time and gathered at the right place.

    The assembly of any flagellum is also IC. Then there is command and control without which the newly evolved flagellum is useless.

  68. 68
    bill cole says:

    gpuccio

    A comment from Rumraket that adds value.

    Rumraket
    April 16, 2018 at 6:49 pm
    Ignored
    The alpha and beta subunits from F-type ATP synthases that gpuccio is obsessing about belong to a big family of hexameric helicases.

    They are WILDLY divergent in sequence over the diversity of life, and many of them are involved in other processes and functions that have nothing to do with ATP synthase/ATPase. Besides the structural similarities, they all seem to be involved in many different forms of DNA or RNA nucleotide/ribonucleotide processing (such as unwinding of double stranded DNA or RNA), of which NTP hydrolysis or synthesis as observed in ATP synthase, is just one among these many different functions.

    So not only are they divergent in sequence in ATP synthase machines, versions of the structure is part of many other functions besides ATP hydrolysis and synthesis. Which evolved from which, or do they all derive from a common ancestral function different from any present one? We don’t know. But we know that both the sequence and functional space of hexameric helicases goes well beyond the ATP synthase machinery.

    Their capacity to function as an RNA helicase could be hinting at an RNA world role.

  69. 69
    john_a_designer says:

    Instead of using a wall of bricks as an analogy maybe we should think of Lego bricks or blocks.

    Is the following interconnected set of Lego blocks the result of chance or intelligence? Could it be a result of pure chance or just “dumb luck?” Does anyone have any doubt?

    https://cdn.frugalfun4boys.com/wp-content/uploads/2015/01/Simple-Legos-27-Edited.jpg

    Think of amino acids as a set of just twenty supercharged Lego blocks out of which you can build really functioning motors, power generators, assembly robots and data processing systems (to name just a few of the functions we find going on inside a living cell).

  70. 70
    gpuccio says:

    bill cole:

    I don’t understand. Is Rumraket quoting something, or just saying things that he thinks? Just to know.

  71. 71
    bill cole says:

    gpuccio

    here is a paper that is relevant to Rumraket’s discussion.
    Review: Structure and function of the AAA+ nucleotide binding pocket
    Petra Wendler, Susanne Ciniawsky, Malte Kock, Sebastian Kube
    https://doi.org/10.1016/j.bbamcr.2011.06.014

  72. 72
    bill cole says:

    gpuccio
    Here is an additional link from Rum.
    Here’s a nice link: InterPro Homologous Superfamily:
    P-loop containing nucleoside triphosphate hydrolase (IPR027417)

  73. 73
    bill cole says:

    gpuccio

    He is pointing out that there are many uses of ATP synthase-type proteins other than ATP production. I am not sure this challenges your argument; however, it is interesting information that helped me understand the broad uses of these motor proteins and how they are related.

    He may be trying to make the case for lots of function, although once you have identified 500 bits of information the “lots of function” argument becomes challenging, as you pointed out in this OP.

  74. 74
    gpuccio says:

    gpuccio:

    DNA_Jock is shouting, after all. Over the fence.

    Again, re-reading the past in his own way.

    He is right on one thing, however: in our past discourse (with him and REC) that he quotes now, I had accepted as true REC’s statement that ATP synthase in Alveolata was divergent. At the time, I did not check his statement in detail, a statement that was assumed by DNA_Jock too, so much so that he has been using it again in his recent comments about the ubiquitin thread.

    That is my sin, it seems: to have accepted a statement apparently against my argument, as made by two opponents. Two opponents who, while making that statement and using it “against” me, had not even checked whether their argument was correct. But of course, the sin is mine, not theirs.

    Well, the argument is simply wrong. I have discovered that simple fact by reading the paper linked by REC (at that time) and by DNA_Jock (now), who apparently have not read it themselves.

    The alpha and beta chains, the only proteins that I have used in my argument, are not among those that are mentioned as “divergent” in their quoted paper.

    That’s what the authors of their paper say, as I have quoted. My pairwise comparison is something that I did additionally, just to be sure, this time, that what was being said is true.

    But again, it is my sin, not theirs.

    As it is probably my sin that DNA_Jock has simply ignored these obvious and proven facts, that Entropy has argued again about the imaginary difference in the Alveolata chains that would prove my argument wrong, and so on.

    Always my sin, of course.

    But DNA_Jock is not satisfied with that. He raises again (or simply copies) a different objection, which has nothing to do with the previous one, but, you know, it’s always better to add up anything possible when we are wrong!

    So, he states again that the chains are not really so conserved, because I should have aligned hundreds of proteins, and not three.

    Thus demonstrating that he has not understood at all why I use that methodology.

    Well, I have answered that objection at comment #256 in the English language thread, where most of this old discussion took place:

    https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/

    I could answer it again now, and with more details, but why? Let the past re-read itself.

    He says:

    “We are not. We are having a discussion about the tendency of humans to apply paint in a TSS manner, because they believe that this is the only way it could be.”

    Well, he is the only one doing that, as far as I can see. I am certainly not doing that, because the case of what I would call “the TSS fallacy fallacy” is closed, for me. Simply stating the same wrong things is not a discussion, and DNA_Jock is simply doing that (indeed, he does not even re-state them: he just points to his old comments).

    Just to conclude: it is absolutely true that the alpha and beta chains of ATP synthase in Alveolata have nothing special: they are highly conserved, as in all other organisms.

    And DNA_Jock does not even have the fairness to admit it.

  75. 75
    gpuccio says:

    bill cole:

    Rumraket is, again, making a false argument. He has not even understood what I am talking about.

    I have blasted the beta chain of human ATP synthase against all human proteins.

    Of course, it has 1061 bits of homology with itself (identity).

    Let’s remember that it also has 663 bits of homology with the same protein in E. coli.

    Well, do you know how much homology it has with any other human protein, including all those “related” proteins in other kinds of ATPases that Rumraket mentions? In humans?

    The highest hit is 157 bits, followed by one of 148 bits, then a group of values around 115 bits.

    It has, as already said, 94.7 bits of homology with its sister protein, the alpha chain.

    IOWs, all these proteins are somewhat related, and they share about 100+ bits of homology among themselves.

    But the beta chain shares 663 bits of homology with its specific bacterial counterpart, 506 bits more than what it shares with its nearest homologue in the human proteome.

    Therefore, the beta chain (and the alpha too) are specific proteins, different from all the others mentioned by Rumracket. They are only found in ATP synthase, both the classical form and the Alveolata variant, and they are always extremely conserved.

    Rumraket simply does not understand the argument, and the role of sequence conservation in it. He is denying a functional specificity which should be obvious to anyone.
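
    To put those bit scores in perspective, here is a rough back-of-the-envelope sketch (for a fixed search space, the expected number of chance alignments scales roughly as 2 to the minus bit score, so a gap in bit score translates directly into a factor of improbability; the scores are the ones quoted above):

      import math

      e_coli_hit = 663        # human beta chain vs its E. coli counterpart
      best_other_human = 157  # best hit against any other human protein

      gap = e_coli_hit - best_other_human
      print(f"gap: {gap} bits, i.e. a factor of about 10^{gap * math.log10(2):.0f}")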

  76. 76
    bill cole says:

    gpuccio

    Thus demonstrating that he has not understood at all why I use that methodology.

    This is the point. If he does not address your methodology of calculating information bits, he is failing to address your argument.

    He accuses you of putting a target around your bullets, but in reality he has been trying to add bullets to your wall.

    You have not cherry-picked any bullets, which is what the TSS is all about.

    By adding bullets to the wall he is creating a STRAW-MAN argument.

    He has committed a straw-man fallacy which is a very common tactic used against ID.

    The burden is on Jock to show that your 500-bit calculation is wrong. Since this is difficult, he was instead trying to dismiss your argument as a fallacy, and ultimately he failed, as he had to commit a logical fallacy in order to challenge your argument.

  77. 77
    bill cole says:

    gpuccio

    Rumraket simply does not understand the argument, and the role of sequence conservation in it. He is denying a functional specificity which should be obvious to anyone.

    Agree

    He is struggling to directly defeat the argument, so he is trying to set up a STRAW-MAN, probably not realizing he is doing this.

    Your argument uses ATP synthase alpha and beta. He is trying to make it about superfamilies like AAA+, thus changing your argument.

    Like Jock he is adding bullets to the wall. We can name this the TSSM or the Texas sharpshooter straw-man 🙂

  78. 78
    Origenes says:

    If, post hoc, a specification can be based on other observed properties than the outcome, then we have a valid post-specification.

    “Do you see yonder cloud that’s almost in shape of a camel?”

    Suppose that we inspect the wall, after it has been shot 100 times, and discern that the shots form a text-pattern (e.g. ‘Do you see yonder cloud that’s almost in shape of a camel?’). Then this observation would offer us a basis for a specification independent from the outcome.

    Or does it? This is a crucial question: is this text-pattern based on the outcome?

    The answer is a resounding “No.” Because the outcome is to be regarded as a collection of distinct results from separate random shots.

  79. 79
    gpuccio says:

    bill cole:

    There is something really strange in their logic.

    Follow me.

    The beta chain in Alveolata would be, in their opinion, a clear example of an independent solution, of some unrelated peak that evolution did find, because, of course, there are so many of them!

    But it has 757 bits of homology and 65% identity with the human sequence, after a separation of maybe more than 1 billion years. Which is practically no divergence at all. But they are not interested in that simple fact; they don’t even acknowledge it after it has been explicitly put before their closed eyes.

    On the other hand, Rumraket’s distant parent proteins, with little more than 100 bits of homology and a maximum of 21.5% identity in the same species (Homo sapiens), are in their opinion a demonstration that the protein is ubiquitous but can diverge in sequence!

    Am I missing something?

    Has Rumraket even thought that the reason why the beta chain of ATP synthase and those different chains from other ATPases are so divergent is simply that they are different proteins that do different things, even if with some common basic plan?

    Or that the reason why the beta chain of humans and the beta chain of Alveolata and the beta chain of bacteria are so strikingly similar, even after billions of years of separation, is simply that they have the same function, and that a very high specific information in bits is required for that specific function?

    But probably I am asking too much of him!

  80. 80
    gpuccio says:

    Origenes:

    Correct!

    But even just shooting the green bricks would be sufficient. 🙂

    You are really asking a lot of our shooter! 🙂

  81. 81
    gpuccio says:

    bill cole:

    One more interesting fact.

    I have taken one of the best-characterized human ATPase subunits that has some hit with the human form of the beta chain of ATP synthase:

    ATPase, H+ transporting, lysosomal 56/58kDa, V1 subunit B1 (Renal tubular acidosis with deafness), isoform CRA_b [Homo sapiens]

    Length 471 AAs

    Homology with the human beta chain: 115 bits.

    I have blasted its sequence against proteobacteria, and the best hit is 549 bits and 59% identity.

    IOWs, this different ATPase is very conserved too, from bacteria to humans.

    IOWs, the “beta chain of ATP synthase” and the “ATPase, H+ transporting, lysosomal 56/58kDa, V1 subunit B1” are two different proteins with some basic homology and different functions, and each of them is highly conserved from bacteria to humans.

    So, we have here two different but individually conserved sequences.

    The simple truth is that sequence is related to function, and to specific function, something that both Rumraket and DNA_Jock try desperately to deny or obfuscate.

  82. 82
    ET says:

    Specification isn’t too complicated for those not on an agenda:

    Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- Wm. Dembski page 148 of NFL

    In the paper “The origin of biological information and the higher taxonomic categories”, Stephen C. Meyer wrote:

    Dembski (2002) has used the term “complex specified information” (CSI) as a synonym for “specified complexity” to help distinguish functional biological information from mere Shannon information–that is, specified complexity from mere complexity. This review will use this term as well.

    Biological functionality is specified information.

  83. 83
    gpuccio says:

    ET:

    “Biological functionality is specified information.”

    Of course it is.

  84. 84
    Nonlin.org says:

    gp@2

    Design can in a sense be considered a “law”: in the sense that it connects subjective representations and subjective experiences to an outer result.

    However, design is not a law in the sense of being a predictable regularity. Its cognitive aspect is based on understanding of meanings, including laws, but its intentional part is certainly more unpredictable.

    Just “in a sense”? Assuming you don’t see the creator in action, how would you know something was designed other than by observing it follows certain laws? What do you mean: “intentional part is certainly more unpredictable”?

    Random configurations do exist in reality, and the only way we can describe them is through probabilistic models.

    What do you mean? I would say there is no pure randomness in nature – there’s always a mix of random and nonrandom (law/design) – say you have a black box that emits radioactive decay particles – if you observe these outputs long enough you can be quite certain (probabilistic) about the element inside the box – that is the deterministic component of that experiment. Because of the inherent mix of randomness and design, you don’t need to prove every single little thing is designed – just showing an element of design makes the whole thing designed. But again, we do not observe the designer at work – we only see the laws followed by the biologic systems.

    A regular six-face dice has zero probability of an outcome higher than six because the system has been so designed. It doesn’t matter if the determinism comes before the random event (as discussed) or after as in a twelve-face dice that is rolled again by an agent that seeks only one to six outcomes. In biology, if non-random “natural selection” acts upon “random mutations”, then the outcome is non-random – hence designed – as we observe in nature. There should be no doubt that Someone designs and builds the dice, Someone designs and builds the random generator (not easy), and Someone designs and runs the whole experiment.

    Finally, let’s say that if it looks designed, we must certainly seriously consider that it could really be designed.

    But there are cases of things that look designed and are not designed. Therefore, we need rules to decide in individual cases, and ID theory is about those rules.

    Example? Based on outcome, randomness is impossible to determine (it’s simple math): http://nonlin.org/random-abuse/. On the other hand, even a 10-bit sequence has a mere 0.1% probability, so probabilities get extreme very quickly. Probabilities of randomness in biology are ridiculously low, therefore not even worth seriously discussing.

  85. 85
    ET says:

    gpuccio-

    The “argument” against “biological functionality is specified information” is that we only know about the functionality because we observe it after the fact (as if science isn’t done via observation and trying to figure out what we are observing). Seriously.

  86. 86
    bill cole says:

    gpuccio
    Has Rumraket even thought that the reason why the beta chain of ATP synthase and those different chains from other ATPases are so divergent is simply that they are different proteins that do different things, even if with some common basic plan?

    I think that Rum is just coming up to speed on the argument. His opinion will move as he comes up to speed.

    Jock has a very committed position on the TSS. This is nothing more than trying to say your method of determining functional information in the form of bits is bogus.

    I went through the old arguments the best I could to try and figure out how he got so dug in on the TSS straw-man. It turns out REC started it by expanding your ATP synthase beta and alpha to ATP function in general. That was the straw-man that Jock pivoted on to bring up the TSS. You made some convincing arguments, but Jock had become irrationally committed.

    At the end of the day REC was saying you could not measure functional information because the functional space/sequence space ratio is unknowable.

    We are getting closer to understanding this as new data surfaces. The Hayashi paper is strong support for your position. I think it may turn out that your conserved sequence test is quite workable.

    I think that Jock’s tactic is to try to win an argument by creating confusion and trying to appear to be the authority. What do you think?

  87. 87
    uncommon_avles says:

    gpuccio @ 58
    I have no hesitation in acknowledging that the TSS fallacy does not apply to the bricks analogy if you assume the green was painted and not due to moss, and that the bullets were not smart bullets seeking green-coloured targets.
    However, how can you be sure that the TSS fallacy does not apply to biological systems, given that we have no way of knowing whether the structure you are looking at has evolved from other structures or through processes (akin to a ‘natural’ smart bullet) which have yet to be discovered? Unless we exhaust all reasoning we can’t come to a conclusion.
    A doctor looks at symptoms and prescribes medicine. If he is unable to identify any specific bacterium causing the disease, he prescribes a broad-spectrum antibiotic; he doesn’t say the disease is due to an unnatural phenomenon or to the patient’s past sins, because he hasn’t studied and eliminated all the possible reasons for the cause of the disease.

  88. 88
    kairosfocus says:

    GP, very well argued, a case of a sledgehammer vs a peanut in the shell. It is sadly revealing that so many find it so hard to recognise that once a haystack is big enough and

    (a) needles are deeply isolated, with

    (b) tightly limited resources that constrain search to a negligible fraction of the space of possibilities

    then, there is no plausible blind solution to the island of function discovery challenge.

    At 500 – 1,000 bits for a space of possibilities, we are looking at 3.27*10^150 to 1.07*10^301 possibilities. With an observable cosmos [that’s the border between science and metaphysical speculation] of ~ 10^80 atoms and ~ 10^17 s, with fast reaction rates of ~ 10^12 – 10^15/s, sol system and cosmos scale searches fall to negligible proportions. And if we are talking about earth’s biosphere, that kicks in much earlier.
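
    A quick check of those round figures (2^500 and 2^1,000 for the configuration spaces, and atoms times seconds times the fastest interaction rate as a generous bound on available events):

      import math

      print(f"2^500  ~ 10^{500 * math.log10(2):.1f}")    # ~ 3.27*10^150
      print(f"2^1000 ~ 10^{1000 * math.log10(2):.1f}")   # ~ 1.07*10^301

      atoms, seconds, rate = 1e80, 1e17, 1e15             # round numbers from above
      print(f"available events: < 10^{math.log10(atoms * seconds * rate):.0f}")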

    Where relevant functionality is configuration-based [thus, a binary description language based on structured Y/N q’s is WLOG] and contextually dependent. If you doubt the latter, I just had a case in Ja where a US$200+ MAF, for a SECOND time, with the allegedly right part number in light of the year and model, was not right.

    Such functionality is also separately observable and recognisable as configuration-dependent. For instance, we can perturb and see loss of function [hence, rugged landscape issues].

    Not that mere facts and logic will suffice for those determined not to see the cogency of a point. Comparative: in the 1920’s the Bolsheviks set out on central planning. Mises highlighted the breakdown of ability to value and the resulting incoherence of excessively centralised planning, almost instantly. Sixty years and coming on 100 million lives later it collapsed.

    A sobering lesson.

    KF

  89. 89
    gpuccio says:

    kairosfocus:

    Thank you for the great contribution in few words! 🙂

    You are perfectly right: the important point is not the absence of other needles (their existence in principle cannot be excluded, and in many cases can be proved), but the fact that they are still needles in a haystack.

    IOWs, the existence of alternative complex solutions does not have any relevant effect on the computation of the improbability of one individual needle. It’s the functional specificity of each individual needle that counts.

    That’s what I have tried to argue with my discourse about time measuring devices. Evoking only ridiculous answers from DNA_Jock, who probably really believes that the existence of water clocks and candle clocks makes the design inference for a watch a TSS fallacy! Indeed, he seems so certain that we are “painting” the function of measuring time around the random object that is our watch!

    Any solution that is highly specific is designed. We have absolutely no counter-examples in the whole known universe.

    The TSS fallacy is often invoked (correctly) in scientific reasoning and statistical analysis in cases where a false clustering is inferred without a correct probabilistic analysis.

    IOWs, TSS fallacies are an example of seeing forms in clouds, like in the classic “Methinks it’s like a weasel” situation. All of us have seen forms in clouds, but of course we don’t make a scientific argument to say that they are designed.

    The error is in giving meaning to clusters that, given the probabilistic resources of the system we are observing, are probably only random configurations.

    But that does not mean that all clusterings are wrong. If we observe a very strong clustering, with a p-value lower, for example, than 10^-16, we can be rather confident that a real cluster is there. And if there are two more clusters, equally significant, that does not mean that we are committing the TSS fallacy, as DNA_Jock seems to believe: it just means that there are really three significant clusters, and that all of them need explanation.

    And, of course, we can always change our definition of the cluster, making it more specific or less specific, IOWs tracing bigger or smaller circles around the concentration of data.

    What happens if we do that?

    Of course, if we trace a circle that is too big, we dilute the observed effect. No scientist with sense would do that.

    And if we trace a circle that is too small, we lose significant data: the statistical significance becomes lower. No scientist with sense would do that.

    So, what do scientists with sense do, all the time?

    They trace the circle that fits the data best, and they compute the upper tail for the observed effect in the probability distribution that describes the system as a random system. And if the probability of the upper tail is really small, they reject the null hypothesis and consider the cluster a real cluster, which needs to be explained.
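
    A minimal sketch of that upper-tail computation, assuming a Poisson null model and purely hypothetical counts:

```python
# Illustrative only: k events observed inside the chosen circle, where a
# purely random (Poisson) null model would give an expected count of lam.
from math import exp, factorial

def poisson_upper_tail(k, lam, terms=200):
    """P(X >= k) for X ~ Poisson(lam), summed term by term over the tail."""
    pmf = exp(-lam) * lam**k / factorial(k)   # P(X = k)
    total = 0.0
    for i in range(k, k + terms):
        total += pmf
        pmf *= lam / (i + 1)                  # P(X = i+1) from P(X = i)
    return total

# Hypothetical cluster: 30 hits where the random model expects only 5.
p_value = poisson_upper_tail(30, 5.0)
print(f"upper-tail p-value = {p_value:.2e}")  # tiny => reject the null
```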

    But according to DNA_Jock, nothing of that is correct: it’s all TSS fallacy, because we can trace bigger or smaller circles, so of course we are painting false targets!

    And I suppose that, in his opinion, Principal Components Analysis and all forms of unsupervised learning are completely useless procedures – TSS fallacies, every one of them.

  90. 90
    ET says:

    Bill et al,

    Rumraket always falls back to the “right” mutations – mutations that we have no idea what they were.

    For example, voles didn’t get the right mutations and remained voles. But the “right” mutations would have transformed them into something other than voles.

    And so it is with proteins and protein machines. And that is their “argument” – all the while ignoring the two-mutation issue.

  91. 91
    gpuccio says:

    bill cole:

    “I think that Jock’s tactic is to try to win an argument by creating confusion and trying to appear to be the authority. What do you think?”

    The same thing.

    DNA_Jock is intelligent and competent, but he is also arrogant and obsessed by his own ideas, and he cannot accept that he is wrong even when he is obviously wrong.

    That’s not good.

  92. 92
    gpuccio says:

    uncommon_avles:

    I have no hesitation in acknowledging that the TSS fallacy does not apply to the bricks analogy if you assume the green was painted and not due to moss, or that the bullets were not smart bullets which were seeking green-coloured targets.

    Good.

    And smart bullets would be designed, I suppose.

    However, how can you be sure that the TSS fallacy does not apply to biological systems, given that we have no way of knowing whether the structure you are looking at has evolved from other structures or through processes (akin to a 'natural' smart bullet) which have yet to be discovered?

    Because the targets are real targets, no TSS applies. We can compute the probability of finding real targets in a real random system. That’s not TSS.

    Other explanations, which are not random but imply some role of necessity, can be considered, of course, to the extent that they are available and reasonable. That is part of ID too (see Dembski’s explanatory filter). But computing the probability of finding a target by chance is necessary to exclude a random origin.

    You can find a wide discussion of the role and limitations of NS here:

    What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson

    https://uncommondescent.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/

    They don’t include “natural smart bullets”, of course! 🙂

    Unless we exhaust all reasoning we can’t come to a conclusion.

    This is simply wrong. Science is made by reasoning about what we know, the facts, and not about “things that have yet to be discovered” which are not supported by any known fact.

    A doctor looks at symptoms and prescribes medicine. If he is unable to identify any specific bacteria causing the disease, he prescribes a broad-spectrum antibiotic – he doesn't say the disease is due to an unnatural phenomenon or to the patient's past sins, because he hasn't studied and eliminated all the possible causes of the disease.

    That’s exactly my point, not yours.

    The doctor gives the most reasonable explanation (an infectious cause), even if he cannot identify the exact etiology in that case.

    But he does not consider other “possible reasons” like past sins or whatever, because there is no fact that suggests that those imaginary explanations have any merit in this case.

    The problem is not that they are “unnatural” (past sins can certainly cause diseases!). The problem is that they have no explanatory power in the scenario we are observing.

    If the observed disease were liver insufficiency, he would certainly consider the “past sin” of voluntarily drinking too much as a possible cause, with good explanatory power.

  93. 93
    bill cole says:

    gpuccio

    Here is a post from Rum at TSZ.

    colewd: If an amino acid change causes purifying selection then that amino acid is important to the organism’s function.

    , that isn’t a subject of contention here.

    But “It is under purifying selection” =/= “It exists nowhere else in sequence space”. For all we know, there could be a hill with a similar function in the immediate vicinity, with a narrow but deep fitness valley (wrt the ATPase/ATP synthase function) separating the other hill from the existing one. You can’t actually know that this isn’t the case without empirically exploring that surrounding space.

    The detection of purifying selection for a particular function can at most indicate that there is a hill surrounded by fitness valleys for that function. It does not say ANYTHING about the density by which hills with similar functions exist in sequence space.

    But interestingly, what we know from the existence of the P-loop NTPase superfamily is that, while it might be the case that the ATPase/ATP synthase function is surrounded by a fitness valley, it is also surrounded by hills constituting other functions.

  94. 94
    Origenes says:

    GPuccio @75

    DNA-Jock does not distinguish between a definition of ATP synthase and a specification of ATP synthase. The latter would be about the function of ATP synthase, while the former would be, among other things, about its sequences.

    IOWs the definition of ATP synthase is closely related to what can be called “the outcome” or the result. So, if one uses the definition of ATP synthase as the specification, then one commits the TSS fallacy.
    DNA-Jock fails to make this distinction. So, whenever you discuss the definition of ATP synthase he immediately thinks that you paint fresh bullseyes.

    This is an obstacle for the discussion.

    Here you explain the difference between definition and specification very clearly:

    GP: Let’s go to proteins. If I look at the protein and I say: well, my specification is: a 100 AAs protein with the following sequence: …

    For clarity: this could be termed a definition of the protein.

    GP: … then I am painting a target, because I am using a sequence that has already come out. That is not correct, and I am committing the TSS fallacy.

    Indeed. And this is what DNA-Jock thinks is happening when you discuss the definition of ATP synthase. He does not understand that the definition of ATP synthase is not its specification.

    Please explain to DNA-Jock how you make a specification.

    GP:… if I see that the protein is a very efficient enzyme for a specific biochemical reaction, and using that observation only, and not the specific sequence of the protein (and I can be completely ignorant of it), I define my function as:

    “a protein that can implement the following enzymatic reaction” (observed function) at at least the following level (upper tail based on the observed function efficiency)” …

    Aha! So, that is a specification! Did you get that DNA-Jock?

    GP:… then my post-specification is completely valid. I am not committing any TSS fallacy. My target is a real target, and my probabilities are real probabilities.

  95. 95
    gpuccio says:

    bill cole:

    So, after the infamous deck of cards fallacy, Rumraket has now taken the old path of “all could be possible”. Not a new choice either.

    But the problem for him is that what is under extremely strong purifying selection is a whole complex sequence, not one amino acid.

    IOWs, the sequences of the alpha and beta chains of the F1 subunit of ATP synthase are finely crafted to realize a functional unit which is very similar to a highly specialized watch. And, while some basic structure is shared with other types of ATPases, which, I believe, is Rumraket’s argument, the fine definition of the two sequences, which implies 500+ additional bits for the beta sequence and a little less for the alpha (almost 1000 additional bits for the whole structure), is specific to those two chains.

    That is the functional island of which I am talking, not the 100 bits that are shared between many ATPases of different function and context.

    That functional island is very specific, as shown by the extremely high conservation of most of its AAs.

    Therefore, his imaginary ideas about fitness valleys make no sense at all. The sequences are surrounded by vast deserts, at least as regards the specific information that makes them what they are.

    The fact (true) that part of the basic information is shared with other islands does not help at all. It’s the new information that has to be built, not the 100 bits that are already present in other proteins.

    It’s the concept of the complexity of a transition, which seems so difficult to understand for our interlocutors.

    Of course, also the basic information that is shared is rather complex, and needs explanation: but that part is probably older, and has a different evolutionary history: it is specific for the class of ATPases, but it is only a small part of the specific information in the alpha and beta chains of ATP synthase. A very small part.

    Instead, look at the extremely consistent conservation of the alpha and beta chains between bacteria and humans, that demonstrates how constrained these sequences are. And how specifically different from the corresponding sequences in other types of ATPases!

    Of course, Rumraket can describe all kinds of imaginary landscapes: who can contradict pure imagination? He is writing fairy tales, and realism has never been the best inspiration for that kind of thing.

  96. 96
    gpuccio says:

    Origenes:

    So, I thought that was clear in what I have written.

    I must say that I don’t use the word “definition” in that sense. That is probably DNA_Jock’s equivocation. I think that I have used the terms definition and specification as synonyms, the only difference being that we first give a definition and then use it as a specification. I always say that we have to recognize and define the function, and that the explicit definition of the function, including its observed level, becomes the specification to measure functional complexity for that function. So, in a sense, a specification is only a definition used to measure functional information (IOWs, to generate a binary partition in the search space).

    So, when I have said that some types of definitions cannot be used without incurring the TSS fallacy, I have clarified that I meant definitions that re-use the contingent information derived from the event: for example, the specific sequence of AAs.

    If you want, we can call that “a contingent post-definition”, for clarity.

    So, for me, all definitions are definitions: I see no reason to accept DNA-Jock’s equivocal terms.

    Some definitions are good specifications:

    a) All pre-definitions, whether they are conceptual or contingent

    b) Post-definitions, only if they respect the two requirements I have given for a valid post-specification:

    b1) they must be based on some objective property of the system

    b2) they must not use in any way the contingent information in the outcome to build the definition

    Other definitions are not good specifications, and invariably generate a TSS fallacy:

    c) All post-definitions which are based on the contingent information in the outcome.

    I hope that’s clear (for you; I don’t think it will ever be clear for DNA_Jock).

  97. 97
    bill cole says:

    gpuccio

    Of course, Rumraket can describe all kinds of imaginary landscapes: who can contradict pure imagination? He is writing fairy tales, and realism has never been the best inspiration for that kind of thing.

    How do you think Rum’s argument would relate to U2? For some proteins, an amino acid change will stop them from binding to other proteins. In this case the hill climbing is irrelevant. Does it bind or doesn’t it? If it doesn’t bind, then the function fails. There is no natural selection here, only survival or death. Thinking about proteins as only single enzymes is a fallacy. The key here is that the alpha and beta chains must interact to function correctly. There is no hill to climb. If they fail to interact, the animal dies.

    The evolutionists have created a “just so” story to try and save the concept that proteins can evolve. I don’t think real biology supports that story.

    The Hayashi paper supports your hypothesis, but it is not a real simulation of evolution, as it only simulates single-cell organisms and simple enzyme reactions. ATP synthase is a very different story, as it involves 13 proteins that must bind together to support a single function. It produces ATP, which is mission-critical for life.

    Is life possible without ATP synthase? If not, this is an original sequence. There is no natural selection event that can help build an original sequence. Natural selection requires cell division to initiate it.

    You’re right: if there is no grounding in science, an evolutionist can make anything true by pure speculation.

    How confident are you that your bit calculation is equivalent to the probability of forming ATP synthase by random chance?

  98. 98
    gpuccio says:

    bill cole:

    “How confident are you that your bit calculation is equivalent to the probability of forming ATP synthase by random chance?”

    Very confident indeed! 🙂

    Of course, it is an indirect and approximate measure, so “equivalent” does not mean “exactly the same thing”.

    But it is a very good way of measuring functional information indirectly, given that we cannot realistically measure it directly because of obvious combinatorial limitations.

  99. 99
    gpuccio says:

    bill cole:

    “The evolutionists have created a “just so” story to try and save the concept that proteins can evolve. I don’t think real biology supports that story.”

    It doesn’t.

  100. 100
    gpuccio says:

    bill cole:

    “Thinking about proteins as only single enzymes is a fallacy. The key here is the alpha and beta chains must interact to function correctly. There is no hill to climb. If they fail to interact the animal dies.”

    It’s much worse than that (for neo-darwinists, of course, or in a sense for the animal too 🙂 ).

    The alpha and beta chains must interact finely with one another to build the final functional unit of the F1 subunit: the hexamer which, indeed, binds ADP and phosphate and generates ATP. IOWs, the catalytic machine.

    But that’s not enough. The alpha-beta hexamer must undergo a series of conformational changes, which are the essence of its catalytic function, because it’s those changes that provide the necessary energy that will be “frozen” in the high-energy molecule of ATP.

    Those changes are generated by the rotor, essentially the stalk (the gamma chain) linked to the c-ring, and the c-ring rotates because of the energy derived from the proton gradient (the “water” in the mill).

    But, of course, the alpha-beta hexamer must be anchored, so that it does not rotate together with the stalk (the gamma chain), but is instead deformed by its rotation, undergoing the needed conformational changes.

    So, the hexamer must be “anchored” to the F0 subunit, and that is implemented by the “peripheral stalk”, the a and b chains.

    So, our two chains (alpha and beta) must not only interact finely with one another so that they can build the complex structure that can undergo the three conformational changes necessary for the catalysis; they must also, in their hexameric form, interact correctly with the stalk (the gamma chain), and with the peripheral stalk (the b chains).

    This is of course a very sophisticated plan for a very sophisticated machine.

    That explains the high functional specificity of our two sequences.

    This is from the Wikipedia page:

    Binding model

    Mechanism of ATP synthase. ADP and Pi (pink) shown being combined into ATP (red), and the rotating γ (gamma) subunit in black causing conformational changes.

    Depiction of ATP synthase using the chemiosmotic proton gradient to power ATP synthesis through oxidative phosphorylation.
    In the 1960s through the 1970s, Paul Boyer, a UCLA Professor, developed the binding change, or flip-flop, mechanism theory, which postulated that ATP synthesis is dependent on a conformational change in ATP synthase generated by rotation of the gamma subunit. The research group of John E. Walker, then at the MRC Laboratory of Molecular Biology in Cambridge, crystallized the F1 catalytic-domain of ATP synthase. The structure, at the time the largest asymmetric protein structure known, indicated that Boyer’s rotary-catalysis model was, in essence, correct. For elucidating this, Boyer and Walker shared half of the 1997 Nobel Prize in Chemistry.

    The crystal structure of the F1 showed alternating alpha and beta subunits (3 of each), arranged like segments of an orange around a rotating asymmetrical gamma subunit. According to the current model of ATP synthesis (known as the alternating catalytic model), the transmembrane potential created by (H+) proton cations supplied by the electron transport chain, drives the (H+) proton cations from the intermembrane space through the membrane via the FO region of ATP synthase. A portion of the FO (the ring of c-subunits) rotates as the protons pass through the membrane. The c-ring is tightly attached to the asymmetric central stalk (consisting primarily of the gamma subunit), causing it to rotate within the alpha3beta3 of F1 causing the 3 catalytic nucleotide binding sites to go through a series of conformational changes that lead to ATP synthesis. The major F1 subunits are prevented from rotating in sympathy with the central stalk rotor by a peripheral stalk that joins the alpha3beta3 to the non-rotating portion of FO. The structure of the intact ATP synthase is currently known at low-resolution from electron cryo-microscopy (cryo-EM) studies of the complex. The cryo-EM model of ATP synthase suggests that the peripheral stalk is a flexible structure that wraps around the complex as it joins F1 to FO. Under the right conditions, the enzyme reaction can also be carried out in reverse, with ATP hydrolysis driving proton pumping across the membrane.

    The binding change mechanism involves the active site of a β subunit’s cycling between three states.[11] In the “loose” state, ADP and phosphate enter the active site; in the adjacent diagram, this is shown in pink. The enzyme then undergoes a change in shape and forces these molecules together, with the active site in the resulting “tight” state (shown in red) binding the newly produced ATP molecule with very high affinity. Finally, the active site cycles back to the open state (orange), releasing ATP and binding more ADP and phosphate, ready for the next cycle of ATP production.[12]

    And this is from the PDB “Molecule of the Month” page:

    ATP synthase is one of the wonders of the molecular world.

    And, of course, in the real world wonders are not cheap things. Highly refined functional information is the price for this wonder! 🙂

  101. 101
    uncommon_avles says:

    gpuccio @ 93
    Because the targets are real targets, therefore no TSS apllies. We can compute the probability of finding real targets in a real random system. That’s not TSS.
    There are no predefined targets in evolution. There is no plan to “shoot the green brick”

    This is simply wrong. Science is made by reasoning about what we know, the facts, and not about “things that have yet to be discovered”, and that are not supported by any known fact.
    Science progresses as we discover new facts. We changed from the geocentric to the heliocentric model, from the planetary model of the atom to the probabilistic model. Facts dictate science, so a new discovery tomorrow might explain why a seemingly difficult biological process is not difficult at all.
    On the other hand, an ‘external agent’ / intelligent entity capable of controlling the entire universe has neither been discovered nor theorized (as opposed to imagined). It is not supported by any known fact.

    But he does not consider other “possible reasons” like past sins or whatever, because there is no fact that suggest that those imaginary explanations have any merit in this case.
    Then why does ID bring in imaginary explanations in biological processes?

  102. 102
    OLV says:

    (102)
    Is the confirmed detection of irreducible functional complexity in biological systems that gpuccio has referred to on numerous occasions factual or imaginary?
    If that’s imaginary, then gpuccio should seriously consider pursuing a very successful career as a fiction writer. His bestselling stories will fill the bookstores everywhere.

  103. 103
    Origenes says:

    uncommon_avles: On the other hand, an ‘external agent’/ Intelligent entity capable of controlling entire universe has neither been discovered nor theorized ( as against imagined). It is not supported by any known fact.

    And yet, contrary to your claim, here we are discussing supportive evidence for an intelligent designer.

  104. 104
    Origenes says:

    uncommon_avles @102

    UA: There are no predefined targets in evolution. There is no plan to “shoot the green brick”

    Support your claim. Show how it can all come about without a plan.

    UA: Facts dictate science so a new discovery tomorrow might explain why a seemingly difficult biological process is not difficult at all.

    Until that day of great discovery, the best scientific explanation is intelligent design.

    UA: On the other hand, an ‘external agent’/ Intelligent entity capable of controlling entire universe has neither been discovered nor theorized ( as against imagined). It is not supported by any known fact.

    Contrary to your claim there is an abundance of fine-tuning arguments. Why do you think that your side imagines a multiverse? Just for the heck of it?

    UA: Then why does ID bring in imaginary explanations in biological processes?

    Intelligent design is not an ‘imaginary explanation’. How else do you explain the existence of the computer you see in front of you?

  105. 105
    gpuccio says:

    bill cole:

    A quick summary of the extreme resources used by our neo-darwinist friends:

    – Rumraket tries the “infamous deck of cards fallacy”: unlikely events happen all the time, so statistics is completely useless.

    – DNA_Jock sticks to the TSS fallacy: our statistics is a fallacy.

    – uncommon_avles tries some very trivial repetition of the essentials of scientism, reductionism, materialism and naturalism, all together, reciting them as evidence of his blind faith and adding some bold text, just in case.

    Not so exciting…

    By the way, any news from Joe Felsenstein about functional complexity? That would probably be more interesting. 🙂

  106. 106
    gpuccio says:

    uncommon_avles (and others):

    Just a brief note about the “no predefined targets in evolution” argument, another masterpiece of neo-darwinian thought.

    The idea is very simple:

    Of course there are no predefined targets in the neo-darwinian theory, except maybe reproductive advantage. But there are a lot of targets in the biological world. Targets that have been found, not targets that have to be found.

    Those targets are real, and neo-darwinian theory has the difficult (indeed, impossible) task of explaining how it is possible that so many extremely sophisticated functional targets have been found by a supposed mechanism that has no targets at all.

  107. 107
    gpuccio says:

    OLV at #103:

    It is factual.

    There goes my career as a fiction writer.

  108. 108
    Origenes says:

    GPuccio @97

    Thank you for clarifying your use of terms. For the likes of DNA-Jock it would be helpful if we had one simple, round, easy-to-remember word for a “contingent post-definition” and another for a “non-contingent post-definition.” I fear that my proposals “specification based on the outcome” and “independent specification” only made matters worse for them.

    Poor bastards 🙂

  109. 109
    tribune7 says:

    uncommon_avles

    There is no plan to “shoot the green brick”

    So that it is just the green bricks that get shot is mere coincidence despite the extreme improbability?

    Then why does ID bring in imaginary explanations in biological processes?

    What ID does is make observations and suggest an explanation for them. Nothing imaginary about it.

  110. 110
    Origenes says:

    Rumraket at TSZ @

    Mung: Why don’t people just get brutally honest with gpuccio and let him know that no matter how improbable something may be, that just doesn’t matter.

    Rumraket: It does matter, but the problem is Gpuccio is leaving out the probability of design.
    A design hypothesis has to explain why X happened as opposed to Y, and give a probability of X on design.
    That whole thing is simply skipped by design proponents. The calculation is attempted for a “blind material process”(and for some reason it’s always assumed to be like a tornado in a junk yard), but no calculation is attempted for design.

    We see this argument in various forms at TSZ. Paraphrasing:

    The existence of computers cannot be explained by chance? Well, computers can also not be explained by intelligent design! Take that UD! Tit for tat.

    Did no one tell them that intelligent design is not a random, but, instead, teleological process? There is no way to calculate the probability of intelligent design. Here is a meaningless question: what is the probability of Leonardo Da Vinci painting the Mona Lisa?
    That is, it is a meaningless question unless one proposes a random mechanism which creates paintings.
    What is the probability that this post contains an argument against a post made by Rumraket at TSZ? Nonsense question, unless one proposes that this post is created by e.g. a monkey banging away on a typewriter producing forum posts — in which case the probability is very low.
    I do hope no one seriously considers this to be a possibility 🙂

  111. 111
    gpuccio says:

    Nonlin.org at #85:

    I am afraid that I don’t really understand what you think, and your arguments. I suppose that we use some basic concepts in a very different way.

    I will just answer what I think I understand:

    Just “in a sense”? Assuming you don’t see the creator in action, how would you know something was designed other than by observing it follows certain laws? What do you mean: “intentional part is certainly more unpredictable”?

    My point is that design is not a law based on regularities. The only regularity is that complex functional information points to a designer. And even that is an inference, not a law.

    The content of design is unpredictable, because it depends on the desires and cognitive abilities of the designer. No laws here, either.

    I would say there is no pure randomness in nature – there’s always a mix of random and nonrandom (law/design) – say you have a black box that emits radioactive decay particles – if you observe these outputs long enough you can be quite certain (probabilistic) about the element inside the box – that is the deterministic component of that experiment.

    Great confusion here!

    We must distinguish between usual randomness and quantum randomness. You cannot conflate the two.

    Usual randomness just means that there is some system whose evolution is completely deterministic, but we can’t really describe its evolution in terms of necessity, because there are too many variables, or we simply don’t know everything that is involved.

    In some cases, such a system can be described with some success using an appropriate probability function. Probability functions are well-defined mathematical objects, which can be useful in describing some real systems.

    A probabilistic description is certainly less precise than a necessity description, but when the second is not available, the first is the best we can do.

    A lot of empirical science successfully uses probabilistic tools.

    So, we describe deterministic systems in terms of necessity or probability according to what we can do. The configurations of the molecules in a gas are better studied by probabilistic tools; they cannot be computed in detail. But they remain deterministic just the same.

    Quantum probability is another matter entirely, and a very controversial issue. It could be intrinsic probability, but not all agree.

    Radioactive decay is a quantum event, and therefore has the properties of quantum probability. What is probabilistic is the time of the decay event.

    Because of the inherent mix of randomness and design, you don’t need to prove every single little thing is designed – just showing an element of design makes the whole thing designed. But again, we do not observe the designer at work – we only see the laws followed by the biologic systems.

    I really don’t understand here.

    I will just say that the “laws followed by the biologic systems” are the general laws of biochemistry. But the configuration that allows specific results to be obtained through those laws is the functional information, and it points to design.

    A regular six-face dice has zero probability of an outcome higher than six because the system has been so designed

    Many events have zero probabilities in systems that are not designed. And random events do happen in designed objects, without having any special connection with the design. For example, random mutations do happen in genomes, but that has nothing to do with the design in the genome.

    It doesn’t matter if the determinism comes before the random event (as discussed) or after as in a twelve-face dice that is rolled again by an agent that seeks only one to six outcomes.

    ???

    In biology, if non-random “natural selection” acts upon “random mutations”, then the outcome is non-random – hence designed – as we observe in nature.

    Not in my world, and not according to my ideas and use of words. An outcome that is non-random is not necessarily designed. A designed object is only an object whose configuration has been represented in the consciousness of a conscious, intelligent and purposeful agent, before being outputted to the object. See my OP:

    Defining Design

    https://uncommondescent.com/intelligent-design/defining-design/

    There should be no doubt that Someone designs and builds the dice, Someone designs and builds the random generator (not easy), and Someone designs and runs the whole experiment.

    There is no doubt that I don’t understand what you are saying.

    Example? Based on outcome, randomness is impossible to determine (it’s simple math) http://nonlin.org/random-abuse/ . On the other hand, even a 10-bit sequence has a mere 0.1% probability, so probabilities get extreme very quickly. Probabilities of randomness in biology are ridiculously low, therefore not even worth seriously discussing.

    Well, for me they are worth a very serious discussion. Exactly because they “get extreme very quickly”.

    But of course there is no need for you to join the discussion.

  112. 112
    gpuccio says:

    Origenes:

    This new argument from Rumraket seems to be an appeal to the old attempt to discredit hypothesis testing in the name of the holy Bayesian truth! 🙂

    Mark Frank was very good at that, asking for the priors of design and non design hypotheses.

    Rumraket is digging up old evergreens with great zeal.

    My simple objection is very similar to yours: deciding if the existence of a biological designer is credible is not a matter of priors and probabilities: it is just a question of general worldviews about reality.

    The Bayesian argument, at least in this context, is just a way to camouflage philosophy as a probabilistic argument.

    Most current science goes on mainly by hypothesis testing, and rejecting null hypotheses. I happily go on with that, too. 🙂

  113. 113
    bill cole says:

    Origenes gpuccio

    Did no one tell them that intelligent design is not a random, but, instead, teleological process? There is no way to calculate the probability of intelligent design. Here is a meaningless question: what is the probability of Leonardo Da Vinci painting the Mona Lisa?
    That is, it is a meaningless question unless one proposes a random mechanism which creates paintings.
    What is the probability that this post contains an argument against a post made by Rumraket at TSZ? Nonsense question, unless one proposes that this post is created by e.g. a monkey banging away on a typewriter producing forum posts — in which case the probability is very low.
    I do hope no one seriously considers this to be a possibility 🙂

    Just thinking out loud here. When we assign a probability, it is to assign the chance that the cause identified is the actual cause.

    When we see a functional sequence, what is the chance that it was generated by random change? If it is 500 bits, the chance is effectively zero.

    When we see a 500-bit functional sequence outside biology, the chance it is designed is 100%.

    So why an exception for biology?

  114. 114
    Origenes says:

    Nonlin @

    Nonlin: You toss a coin and it always comes up Heads. Does that mean the coin is loaded? What does any other sequence of Heads and Tails tell us? When can we be certain that an outcome is random? In fact, we can never tell from the results whether an outcome is random or not because any particular sequence of outcomes has an equal probability of occurrence. If a coin is fair, 10 Heads in a row has a probability of about 1 in 1,000, but so does HTHTHTHTHT or HHHHHTTTTT or any other sequence of 10 tosses. We can get suspicious and investigate by other means whether the coin is loaded or not, but absent those other findings, the outcome does not tell us anything about the Randomness of this process.
    — Source: http://nonlin.org/random-abuse/

    Nonlin, I take it that you are not familiar with the law of large numbers (LLN). Allow me to give you some pointers:

    Scordova: It is the law that tells us systems will tend toward disorganization rather than organization. It is the law of math that makes the 2nd law of thermodynamics a law of physics. Few notions in math are accorded the status of law. We have the fundamental theorem of calculus, the fundamental theorem of algebra, and the fundamental theorem of arithmetic — but the law of large numbers is not just a theorem, it is promoted to the status of law, almost as if to emphasize its fundamental importance to reality.

    Wikipedia: the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.

    Scordova: As we examine sets of coins that are very large (say 10,000 coins), the outcome will tend to converge so close to 50% heads so frequently that we can say from a practical standpoint, the proportion will be 50% or close to 50% with every shaking of the set.

    – – – –

    More reading here.
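
    A minimal simulation, with purely illustrative numbers of tosses, of the convergence the law of large numbers describes:

```python
# The heads fraction of a fair coin converges towards 0.5 as the number
# of tosses grows; this only illustrates the quoted statements above.
import random

random.seed(1)  # fixed seed so the sketch is reproducible
for n in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} tosses: heads fraction = {heads / n:.4f}")
```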

  115. 115
    Origenes says:

    Bill Cole @114

    Bill Cole:
    When we see a 500 bit functional sequence outside biology the chance it is designed is 100%.

    So why an exception for biology?

    Because they desperately want there to be no God.

    Thomas Nagel:
    I speak from experience, being strongly subject to this fear myself: I want atheism to be true and am made uneasy by the fact that some of the most intelligent and well-informed people I know are religious believers. It isn’t just that I don’t believe in God and, naturally, hope that I’m right in my belief. It’s that I hope there is no God! I don’t want there to be a God; I don’t want the universe to be like that.

  116. 116
    bill cole says:

    gpuccio

    Here is a discussion with Entropy. I would like your thoughts.

    colewd:
    This statement means that random change did not find the wild type and in addition the sequence was very different. This is very significant to his position and does not require your straw-man to make it valid.

    You’re making my point without realizing it, which means that you’re not understanding what I wrote. The only way in which that part would be very significant for his position is if he thinks that only finding the wild type sequence will do, which is precisely the problem I mentioned and you called a straw-man. He thinks that the only solution is the one that has been sequenced from wild type phage. I say there’s plenty of solutions and we just happen to know of the ones that prevailed and that we have sequenced.

  117. 117
    gpuccio says:

    bill cole:

    The issue is very simple.

    The wildtype solution is 2000 times more efficient than the one that was found in the experiment.

    The solution found in the experiment was of course easy to find. It was a big hole, and that explains why it is easy to find, even with a small starting library.

    The wildtype is 2000 times more efficient, and its functional island is hugely smaller (more specific). According to the authors, 10^70 starting sequences would be necessary to find it.

    Of course NS has no special targets, but if it found the wildtype instead of the easy, coarse solution, it was certainly very lucky.

    I have already made this point: if NS has no targets, how is it that so many sophisticated targets were found? Indeed, almost exclusively sophisticated and finely crafted targets.

    Where are the easy solutions, the sequences that can do things just by a few AAs specificity? How is it that we are surrounded almost exclusively by proteins with specificities in the range of hundreds and thousands of bits? Hundreds of specific and conserved AAs?

    He says:

    “I say there’s plenty of solutions”

    Well, there are certainly those that we observe. And others, probably.

    “and we just happen to know of the ones that prevailed”

    or just those that were found or designed

    “and that we have sequenced”

    Well, we have sequenced quite a lot of them by now. A very good sample, certainly representative of the general picture. But of course there is still much to do.

  118. 118
    gpuccio says:

    bill cole:

    “Just thinking out load here. When we assign probability it is to assign the chance that the cause identified is the actual cause”.

    Just to be precise, that’s not exactly how hypothesis testing works.

    We observe some effect that deserves an explanation. So, we propose a theory about the possible explanation of that effect. We call that theory H1.

    But we ask ourselves if the effect could just be the result of random noise in the data. We call this hypothesis the null hypothesis, H0.

    We try to model the random noise in the most appropriate way, so that we can compute:

    The probability of observing that effect, or a stronger one (IOWs the upper tail), if we assume that H0 is true.

    That probability is the p value for our hypothesis testing.

    If it is really low, we reject the null hypothesis.
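
    A minimal sketch of this procedure, with a purely hypothetical test statistic and null model:

```python
# Illustrative only: model the null (random noise), estimate the probability
# of an effect at least as strong as the one observed, and reject H0 if that
# p-value is very small. The statistic and null model are placeholders.
import random

def p_value_by_simulation(observed_effect, null_model, n_sims=100_000):
    """Empirical upper-tail p-value: P(effect >= observed | H0)."""
    hits = sum(null_model() >= observed_effect for _ in range(n_sims))
    return hits / n_sims

# Hypothetical null model: under pure noise the "effect" is the maximum
# of 100 standard uniform draws.
def null_model():
    return max(random.random() for _ in range(100))

p = p_value_by_simulation(observed_effect=0.99999, null_model=null_model)
print(f"estimated p-value ~ {p:.4f}")  # roughly 0.001: small => reject H0
```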

    Does that mean that our H1 hypothesis is correct?

    Not necessarily. There could be some other explanation, let’s say H2, for the observed effect. However, the null hypothesis (a random cause for the observed effect) is rejected anyway.

    The choice between H1 and H2 is made considering their explanatory merits, but it is not merely probabilistic.

    So, in our case, the observed effect is the function. We reject the null hypothesis that the explanation for the function is RV. And our H1 is design, because design has the correct explanatory power.

    Neo-darwinism proposes an algorithm based on RV + NS. But once RV has been rejected as a possible cause for any complex function, NS is powerless, for the reasons debated many times. NS can only optimize, in a very limited measure, an already existing function. It has no role in finding complex functions, nor can it optimize them if they have not been found by RV.

    Therefore, NS is easily falsifiable as H2.

    Design is and remains the best explanation, indeed the only one available.

  119. 119
    bornagain77 says:

    Question: If a 12-year-old kid solving three Rubik’s Cubes while juggling is to certainly be considered an impressive feat of ‘hitting a predetermined target’ (i.e. of Intelligent Design),,,

    12-year-old kid solves three Rubik’s Cubes while juggling – video
    https://www.liveleak.com/view?t=kfCSB_1523555110

    ,,, then why is simultaneously solving hundreds (if not thousands) of ‘protein folding Rubik’s cubes’ in each of the trillions of the cells of each of our bodies not also to certainly be considered an impressive feat of ‘hitting a predetermined target’ (i.e. of Intelligent Design)???

    Rubik’s Cube Is a Hand-Sized Illustration of Intelligent Design – Dec. 2, 2014
    Excerpt: The world record (for solving a Rubik’s cube) is now 4.904 seconds,,,
    You need a search algorithm (for solving a Rubik’s cube).,,,
    (Randomly) Trying all 43 x 10^18 (43 quintillion) combinations (of a Rubik’s cube) at 1 per second would take 1.3 trillion years. The robot would have a 50-50 chance of getting the solution in half that time, but it would already vastly exceed the time available (about forty times the age of the universe).,,,
    How fast can an intelligent cause solve it? 4.904 seconds. That’s the power of intelligent causes over unguided causes.,,,
    The Rubik’s cube is simple compared to a protein. Imagine solving a cube with 20 colors and 100 sides. Then imagine solving hundreds of different such cubes, each with its own solution, simultaneously in the same place at the same time (in nanoseconds). (That is exactly what is happening in each of the trillions of cells of your body as you read this right now).
    http://www.evolutionnews.org/2.....01311.html

    The Humpty-Dumpty Effect: A Revolutionary Paper with Far-Reaching Implications – Paul Nelson – October 23, 2012
    Excerpt: Put simply, the Levinthal paradox states that when one calculates the number of possible topological (rotational) configurations for the amino acids in even a small (say, 100 residue) unfolded protein, random search could never find the final folded conformation of that same protein during the lifetime of the physical universe.
    http://www.evolutionnews.org/2.....65521.html

    Physicists Discover Quantum Law of Protein Folding – February 22, 2011
    Quantum mechanics finally explains why protein folding depends on temperature in such a strange way.
    Excerpt: First, a little background on protein folding. Proteins are long chains of amino acids that become biologically active only when they fold into specific, highly complex shapes. The puzzle is how proteins do this so quickly when they have so many possible configurations to choose from.
    To put this in perspective, a relatively small protein of only 100 amino acids can take some 10^100 different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.,,,
    http://www.technologyreview.co.....f-protein/

  120. 120
    gpuccio says:

    BA:

    Good point! 🙂

  121. 121
    gpuccio says:

    To all:

    Now, I would like to make a few mathematical considerations about the argument, emerged many times from the other side, that the observation (however rare) of more than one independent complex solution for a function (or, maybe, for related functions) is an argument in favour of neo-darwinism.

    I will try to show that the opposite is true.

    Those who don’t love numbers, or who have problems with exponential measures, should probably avoid reading this comment, because it’s absolutely about numbers, and big numbers.

    The important premise is:

    There are two key factors in evaluating the probability of a specific functional outcome (like an observed functional protein) in a system (like the biological system) by RV.

    1) The first factor, as we know very well, is the functional information in the sequence. That measure already describes two of the important concepts: the size of the functional island and the size of the search space. Indeed, it corresponds to the ratio between the two.

    2) The second factor, often underemphasized, is the probabilistic resources of the system. They can be simply defined as the number of different states that can be tested by the system, by RV, in the allotted time.

    Neo-darwinists have tried to convince us, for decades, that the probabilistic resources of our biological planet are almost infinite, given the long times involved and so on. But that’s not true.

    The probabilistic resources of our biological planet are not small, but they are certainly not huge, least of all almost infinite. They are, indeed, very finite, and well computable, at least approximately.

    In my OP:

    What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world

    https://uncommondescent.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/

    I have offered, in the first table, such a computation. The numbers there are not realistic at all: they have been computed as a higher threshold, making all possible assumptions in favour of the neo-darwinian scenario. They certainly overestimate the real resources, and by far.

    However, I will stick to those numbers, which I have offered myself.

    Now, while the functional information in the protein is certainly the key factor, the second important point is the ratio between that functional information and the probabilistic resources of the system. The bigger the difference between the functional information and the probabilistic resources, the more the improbability of what we observe increases. Exponentially.

    So, let’s reason with some numbers.

    I will consider some protein A, whose functional information is 500 bits. Our usual generic threshold for complexity in functional information. That means that, whatever the size of the target space (the functional island) or of the search space (the ocean), their ratio is 2^-500.

    OK, now let’s go to the probabilistic resources. We will consider the most favorable scenario for neo-darwinism: the bacterial system.

    My (extremely generous) estimate for the number of states that can be reached by the bacterial system by RV in the whole life of our planet is 2^139 (139 bits). It’s 138.6 in my table, but I will round it. There is no need here to add the 5 sigma additional bits, because we will explicitly compute probabilities in our example.

    So, our scenario is:

    Functional information in the protein: 500 bits (3e-151)

    Probabilistic resources: 139 bits (7e+41)

    Independent complex solutions observed for that function: 1

    Probability of finding the observed solution in one attempt: 3e-151

    Probability of finding the observed solution using all the available attempts (7e+41): 2.1e-109

    This probability is computed using the binomial distribution: it is the probability of getting at least one success with that probability of success and that number of attempts.

    OK, that’s not good for neo-darwinism, of course.

    One argument frequently raised by them, as discussed in the OP, is that there could be many independent solutions in the protein space for that function. And that is probably true, in many cases. They should, however, be of comparable complexity, because if they were much simpler, we would expect to find the simpler solutions in the proteome, and not the complex ones.

    But what if we observe two independent complex solutions for the same function? Our interlocutors argue that this is an argument in favor of neo-darwinism, because it demonstrates that it is not so difficult to find complex solutions.

    I say, instead, that observing two, or more, independent complex solutions for the same function is a stringent argument in favour of design.

    Let’s see why.

    Now our scenario is similar to the previous one. The only difference is that we observe not one, but two independent complex solutions. Let’s say each one of them of 500 bits of FI, but different.

    For the moment, let’s assume that those two solutions are the only ones that exist in the search space.

    Then the scenario now is:

    Functional information in each protein: 500 bits (3e-151)

    Functional information in the two islands summed: 499 bits (6e-151)

    Probabilistic resources: 139 bits (7e+41)

    Independent complex solutions observed for that function: 2

    Probability of finding at least two solutions using all the available attempts (7e+41): 8.82e-218

    So, we had a probability of 2.1e-109 for one observed solution, and we have now a probability of 8.82e-218 for two observed solutions!

    A probability that is more than 100 orders of magnitude smaller!

    But, of course, neo-darwinists will say that the independent complex solutions are many more than two.

    OK, always using the binomial distribution, how many independent complex solutions of 500 bits of FI each do we need in order to have, with two observed solutions, the same probability as we had for one observed solution, assuming that it was the only one?

    The answer is: about 10^54 independent complex solutions!

    Not to have a good probability: just to have the same probability computed for one observed solution if it were the only one: about 2.1e-109!

    That’s all, for the moment. Enough numbers for everyone! 🙂
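
    A minimal sketch of the arithmetic above, using the Poisson approximation to the binomial and the same inputs (500-bit islands, 2^139 attempts):

```python
# Because n*p is tiny here, P(at least 1 hit) ~ n*p and
# P(at least 2 hits) ~ (n*p)**2 / 2; this only reproduces the figures above.

attempts = 2.0**139         # ~7e41 states reachable by RV (gpuccio's table)
p_one_island = 2.0**-500    # ~3e-151, one 500-bit functional island

# One observed solution, assumed to be the only island in the search space:
p_at_least_one = attempts * p_one_island
print(f"P(>=1 hit)  ~ {p_at_least_one:.2e}")   # ~2.1e-109

# Two observed solutions, assumed to be the only two islands (499 bits summed):
lam = attempts * (2 * p_one_island)
print(f"P(>=2 hits) ~ {lam**2 / 2:.2e}")       # ~9e-218; the rounded inputs
                                               # above give 8.82e-218

# How many 500-bit islands would be needed before P(>=2 hits) climbs back up
# to the single-island figure of ~2.1e-109?
islands_needed = (2 * p_at_least_one)**0.5 / (attempts * p_one_island)
print(f"islands needed: ~{islands_needed:.1e}")  # ~3e54, i.e. about 10^54
```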

  122. 122
    uncommon_avles says:

    Dear all who have responded to me, I find that the entire argument for ID is based on your personal incredulity. It isn’t difficult to attain complex system ‘bits’. Just look at how slime mold, with no brain, can make its own ‘decisions’, navigate complex mazes, etc.
    Nature Slime Mold
    I think a lot of pseudoscience has confused ID supporters.

  123. 123
    ET says:

    uncommon_avles,

    If what you say had any merit you should be able to easily refute our claims.

    Just how do slime molds help the case for blind and mindless processes?

  124. 124
    Origenes says:

    GPuccio

    Can you help me out here? At TSZ I have written a post about your reasoning concerning the functionality of certain proteins. I quote you saying:

    GP: The reason why I stick usually to the vertebrate transition is very simple: it is much older.
    There, we have 400+ million years.
    With mammals, much less. Maybe 100 – 130 million years.
    Which is not a short time, certainly.
    But 400 is better.
    400 million years guarantees complete and full exposure to neutral variation. That can be easily seen when Ka/Ks ratios are computed. The Ks ratio reaches what is called “saturation” after 400 million years: IOWs, any initial homology between synonymous sites is completely undetectable after that time.
    That means that what is conserved after that time is certainly conserved because of functional constraint.
    While 100 million years are certainly a lot of time for neutral variation to occur, still it is likely that part of the homology we observe can be attributed to passive conservation.
    IOWs, let’s say that we have 95% identity between humans and mouse, for some protein. Maybe some of that homology is simply due to the fact that the split was 80 million years ago: IOWs, some AA positions could be neutral, but still be the same only because there was not enough time to change them.
    Of course, the bulk of conserved information will still be functionally constrained, but probably not all of it.

    The response I got from Entropy and Dazz is rather surprising. Both are extraordinarily impressed by the saturation that stems from all those mutations and what it tells us about the power of evolution …

    Entropy: In order to guarantee full exposure to neutral variation, there must be an enormous amount of mutations. Remember that mutations are mostly random. Thus, in order to touch every site, we need a number of mutations well above the length of the sequences. This is where saturation of synonymous sites comes into play, they’d show that there’s been quite a number of mutations. So, if synonymous substitutions have reached saturation, that means that an enormous chunk of sequence space was “explored.”
    If we were to accept gpuccio’s assumptions, the conclusion would be that evolution performs more-than-enough exploration of sequence space to explain “jumps” in “functional” information (or whatever wording gpuccio might like today).

    I did push back a little, but to no avail …

    Dazz: If that was true, wouldn’t that mean the “probabilistic resources” would be enough to traverse the entire search space and produce tons of “bits of information.”

    Entropy (on Dazz): Somebody seem to be getting it!

  125. 125
    bill cole says:

    uncommon_avles

    Dear all who have responded to me, I find that the entire argument for ID is based on your personal incredulity

    Do you understand the biochemical mechanisms of slime mold? Where did the genetic information come from for slime mold to navigate the maze?

    Is your answer random variation plus natural selection?

    A human fertilized egg can divide from a single cell and eventually become a human being. This is maybe more impressive than slime mold, but like slime mold it requires genetic information that we can count in bits. Like slime mold, it does not start out with a brain but builds one from cell division alone.

    The only cause of information we know of is design. Can you come up with another cause other than s-happens? Which side do you really think is pseudoscience?

  126. 126
    ET says:

    uncommon_avles:

    There are no predefined targets in evolution. There is no plan to “shoot the green brick”

    That just exacerbates the problem. It doesn’t help.

    You expect us to believe – without evidence or a means to test the claim – that irreducibly complex functioning protein complexes just happened due to some differential accumulations of genetic accidents? Really?

  127. 127
    bill cole says:

    Origenes

    Dazz: If that was true, wouldn’t that mean the “probabilistic resources” would be enough to traverse the entire search space and produce tons of “bits of information.”

    This is a very faulty assumption that Dazz and Entropy are making.

    Your arguments were very solid and I am impressed. If you read gpuccio’s last post he takes you through the arguments.

    Evolution cannot traverse even a minuscule fraction of the search space of a single protein. As GP mentioned in the above argument maybe 10^50 searches.

    The search space of a single ATP synthase protein is greater than 10^500. My experience here is that even very smart people have trouble seeing how large this number really is, and that is the point of gpuccio's last post.

    We have to be very patient here as it will take time to work through this. You are doing a very good job of arguing at TSZ and keeping the emotions under control.

    Entropy is a smart guy but he has not really thought through this very difficult mathematical problem. It looks like he is trying to understand it and that is the first step.

  128. 128
    gpuccio says:

    Origenes:

    Of course, they don’t know what they are talking about.

    Saturation at synonymous sites means that each site has been exposed to mutations, so that no homology can be detected any more between the sequences at synonymous sites. At that point, we can no longer distinguish between a divergence of 400 million years and a divergence of 2 billion years.

    Of course each site in a protein sequence is exposed to mutation after, say, 400 million years. The rate of mutations per site is something between 1 and 3, in most cases. The only reason that functional sites do not change is that they are functionally constrained, and therefore negative selection preserves them by eliminating variation. IOWs, mutations do happen at those sites too, but they are eliminated, and are practically never fixed.

    What is really sad is that both Entropy and dazz demonstrate, once more, that they do not understand the basics of the issue.

    Entropy says:

    “So, if synonymous substitutions have reached saturation, that means that an enormous chunk of sequence space was “explored.””

    And dazz immediately echoes:

    “If that was true, wouldn’t that mean the “probabilistic resources” would be enough to traverse the entire search space and produce tons of “bits of information.””

    I am almost reluctant to point out their blatant error, so obvious is it!

    However, it seems that I have to do exactly that.

    Let’s try with a simple example:

    372944389420147

    This is a 15-figure sequence in base 10.

    Now, let’s say that the sequence is completely neutral, without any functional constraint. And let’s say that I can operate one substitution per site per minute.

    For simplicity, I will make 15 substitutions in the 15 different sites. This is not a requisite, of course, but it makes the explanation easier. I will also go in order, always for the sake of clarity.

    So we get, in 15 minutes, the following results (a different site is mutated at each step, moving from left to right):

    1) 572944389420147
    2) 522944389420147
    3) 528944389420147
    4) 528644389420147
    5) 528674389420147
    6) 528672389420147
    7) 528672989420147
    8) 528672959420147
    9) 528672950420147
    10) 528672950820147
    11) 528672950840147
    12) 528672950845147
    13) 528672950845847
    14) 528672950845817
    15) 528672950845812

    The original sequence and the final sequence have completely diverged. No homology is detectable any more. We have saturation:

    372944389420147
    528672950845812

    Now, I will ask a few very simple questions that even Entropy and dazz should be able to answer:

    a) How many different states have we reached?

    (Answer: 15)

    b) How many different states exist in the search space?

    (Answer: 10^15)

    c) What “enormous chunk of sequence space” have we explored?

    Answer: 15/10^15 = 1.5e-14

    I think that no further comments are needed.
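
    For concreteness, here is a minimal Python sketch of that count (it assumes nothing beyond the numbers already given above):

    # How much of the 15-digit search space do 15 single-site substitutions visit?
    search_space = 10**15        # all possible 15-figure base-10 sequences
    states_visited = 15          # one new state per substitution, as in the example

    print(states_visited / search_space)   # 1.5e-14 -- an infinitesimal fraction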

  129. 129
    gpuccio says:

    bill cole:

    “Entropy is a smart guy but he has not really thought through this very difficult mathematical problem. It looks like he is trying to understand it and that is the first step.”

    I appreciate your patience. But if he doesn’t understand the basics of combinatorics, why is he even discussing these issues?

    Isn’t that mere arrogance?

  130. 130
    bill cole says:

    gpuccio

    I appreciate your patience. But if he doesn’t understand the basics of combinatorics, why is he even discussing these issues?

    Isn’t that mere arrogance?

    All these guys assume that ID people are stupid, as the smear campaign has tried to paint that image in order to slow the momentum. "IDiots" is their slogan for ID guys.

    He has not thought through the problem, as he did not believe it existed. If your perspective is philosophical atheism, you assume the simple-to-complex model of evolution to be true and don't take counter-arguments seriously.

    The interesting question is whether enough evidence that contradicts the philosophy can change the philosophy.

  131. 131
    bill cole says:

    gpuccio, Origenes

    Okay, given what he’s trying to explain, that’s a truly horrendous error.

    Here’s a tip, folks.
    At one mutation per site, 36% of the sites will be unchanged.
    At three mutations per site, 5% of the sites will be unchanged.
    Ironically, this was another thing I tried to explain to him in 2014.
    It’s e^-n
    And he’s lecturing us on combinatorics.
    ROFL

    This is Jock trying to create unnecessary confusion and to gain the intellectual high ground. Another logical fallacy. What's amazing is that he thinks he can now get away with it.

    Gpuccio was simulating AA substitutions, and Jock changed the argument to nucleic acid substitutions. A simple straw-man fallacy. Thoughts?
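
    For reference, here is a minimal simulation (a sketch, assuming mutations land uniformly at random across sites) of the quantity quoted above: the fraction of sites never hit after an average of n mutations per site, which does converge to e^-n.

    import math
    import random

    def untouched_fraction(n_sites=10_000, muts_per_site=1, trials=20):
        """Average fraction of sites never hit when hits land uniformly at random."""
        total = 0.0
        for _ in range(trials):
            hit = [False] * n_sites
            for _ in range(muts_per_site * n_sites):
                hit[random.randrange(n_sites)] = True
            total += hit.count(False) / n_sites
        return total / trials

    for n in (1, 3):
        print(n, round(untouched_fraction(muts_per_site=n), 3), round(math.exp(-n), 3))
    # prints roughly: 1 0.368 0.368  and  3 0.05 0.05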

  132. 132
    Allan Keith says:

    Gpuccio, I am late to the party so I apologize if this has already been brought up. But first I want to commend you on a well written and thought out OP.

    I want to read this in greater detail before I venture too far, but I would like to comment on the TSS argument, specifically the green bricks. If I am reading this correctly (no guarantee), you are still looking at the green bricks from a post-hoc perspective. i.e., the green bricks represent specific targets, even though the shooter didn’t see them in advance. From this perspective, I agree that hitting these three green bricks by taking three blind shots would be highly improbable. However, let’s add an additional one hundred green bricks (as per your description). Blindly taking three shots has a much higher probability of hitting any three green bricks than blindly hitting three pre-specified green bricks. This is a better analogy to how evolution is purported to proceed.

    We can look at these green bricks as all of the possible “targets” that could provide some advantage. Once hit, it is more likely to be preserved in the next generation of wall. And with every generation, three more blind shots are taken, and so on. How many generations will it take before all of the green bricks have been hit at some time throughout the generations? What I am trying to say is that three bricks do not have to be hit all at the same time. The odds improve even more if the wall has multiple offspring per generation.

    Obviously this is absurdly over-simplified. For example, as the conditions change (environmental, competition, etc.) the number and location of these green bricks on the wall will change. But it has had the benefit of getting the image of copulating brick walls into your heads. 🙂

  133. 133
    Nonlin.org says:

    Origenes@115

    Why would you assume I am “not familiar with the law of large numbers (LLN)” when you didn’t bother to understand the point I am making or to open the link provided to make sure you didn’t misread?

    Very funny. You even quote my passage, but missed or misunderstood: “the outcome does not tell us anything about the Randomness of this process”. Do you get it now?

  134. 134
    bill cole says:

    gpuccio

    Gpuccio was simulating AA substitutions, and Jock changed the argument to nucleic acid substitutions. A simple straw-man fallacy. Thoughts?

    Jock answered me, and he is claiming that there is no difference, that it is just straight combinatorial statistics. So it will take a long time to make the final substitution based on random mutation.

    In the end this is just a diversionary tactic on his part, as you could easily define saturation as 99% changed.

  135. 135
    bill cole says:

    Allan

    Gpuccio, I am late to the party so I apologize if this has already been brought up. But first I want to commend you on a well written and thought out OP.

    The analogy was nothing more than a way of addressing the TSS fallacy. It was never meant to be an analogy for evolution.

    If you want to try to make it an evolutionary analogy, you can start with a wall of 20^20000 bricks. Best of luck to you 🙂

  136. 136
    Origenes says:

    Nonlin @134

    Nonlin.org: Very funny. You even quote my passage, but missed or misunderstood: “the outcome does not tell us anything about the Randomness of this process”. Do you get it now?

    Well, your claim doesn’t hold up due to the law of large numbers.
    It seems that I have to spell it out for ya.
    Okay, let me quote Scordova again:

    Scordova: As we examine sets of coins that are very large (say 10,000 coins), the outcome will tend to converge so close to 50% heads so frequently that we can say from a practical standpoint, the proportion will be 50% or close to 50% with every shaking of the set.

    IOWs, Nonlin, given a large enough set, the outcome, contrary to your claim, does tell us something about the randomness of the process.
    Bottom line: if after 10,000 trials we do not have a result close to 50% heads, then we know that something is rigged: that heads and tails is not random.
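
    A minimal simulation of that point (a sketch; exact values vary a little from run to run):

    import random

    def heads_fraction(n_flips=10_000, p_heads=0.5):
        """Fraction of heads in n_flips tosses of a coin with the given bias."""
        return sum(random.random() < p_heads for _ in range(n_flips)) / n_flips

    print([round(heads_fraction(), 3) for _ in range(5)])
    # e.g. [0.497, 0.502, 0.499, 0.503, 0.498] -- a fair coin always lands close to 50%

    print(round(heads_fraction(p_heads=0.1), 3))
    # ~0.1 -- a rigged coin gives a result far from 50%, which is what exposes the rigging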

  137. 137
    gpuccio says:

    bill cole:

    I think he refers to my example with the 15-figure sequence. OK, I expected something like that, given the intellectual and moral level of the discussion there.

    I had said, rather clearly I believe:

    “For simplicity, I will make 15 substitutions in the 15 different sites. This is not a requisite, of course, but it makes the explanation easier. I will also go in order, always for the sake of clarity.”

    The purpose was (and is), of course, to show clearly that going from one sequence to another, completely different one does not require traversing a great portion of the search space at all. Which should be evident to all.

    Of course, if we admit mutations with repetition (which would be the case with random mutations), some more are needed. DNA_Jock, who is apparently more interested in finding fault with me than in making some serious argument, says that the number is e^-n. I don’t understand that, and I think he should explain better and give references.

    I think instead that the expected number of mutations needed to change all the sites can be computed with the coupon collector's problem formula. See also here:

    https://en.wikipedia.org/wiki/Coupon_collector%27s_problem

    That would give about 50 tries for my 15-figure sequence.

    For a 100-figure sequence it would give about 500 tries.

    So, for my example, the exploration of the search space would be:

    50/10^15 = 5e-14

    A big difference indeed.

    All this is irrelevant and ridiculous. The simple truth is that the number of mutations necessary to cancel homology is, as said, about 1-3 per site, and that can be achieved with a number of total mutations a few times the length of the sequence.

    Not “an enormous chunk of sequence space explored”. In all cases, it’s an infinitesimal chunk.
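
    A quick check of those figures, using the standard coupon collector expectation E[T] = n * H_n, where H_n is the n-th harmonic number (a sketch; nothing beyond that formula is assumed):

    # Expected number of uniformly random single-site mutations before
    # every one of n sites has been hit at least once.
    def expected_attempts(n):
        harmonic = sum(1 / k for k in range(1, n + 1))   # H_n
        return n * harmonic

    for n in (15, 100):
        print(n, round(expected_attempts(n)))
    # 15 sites  -> ~50 attempts
    # 100 sites -> ~519 attempts

    print(expected_attempts(15) / 10**15)   # ~5e-14 of the 10^15 search space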

  138. 138
    Nonlin.org says:

    gpuccio@112
    Thanks for replying – I thought you just didn’t see my comment@85.

    Many misunderstandings are due to poorly defined words that likely mean different things to different people. Like what is “complex”?

    Yes, “The content of design in unpredictable, because it depends on the desires and cognitive abilities of the designer”, but the only way you can label something “designed” is to see that it is non-random, i.e. it follows certain rules – those imposed by the designer. In other words, design = regularity. Example: Paley’s watch will have regular shapes and uniform materials that look different than a random pile of matter. You look at a sand dune or a sand garden – close-up it’s just “random” grains of sand, but wide-angle you see patterns that beg for an explanation. Can someone design a sand garden to look like a naturally occurring sand dune? Sure, and they’re indistinguishable (because they’re both designed if you ask me)!

    You say: “We must distinguish between usual randomness and quantum randomness.” – but this doesn’t make sense to me because “randomness” is ONLY a theoretical concept (like line, circle and point) – we can never determine something to be “random” – again, see: http://nonlin.org/random-abuse/ . Also, what we call “random” is never completely undetermined – all such phenomena have a deterministic element – at a minimum their statistical distribution and boundaries (no six face die will ever come up seven).

    You say: “An outcome that is non random is not necessarily designed.” How so? Provide example. If you think the sand dune is determined by “natural forces” and the “laws of physics”, then how do you know that it’s not ultimately designed?

  139. 139
    bill cole says:

    gpuccio

    All this is irrelevant and ridiculous. The simple truth is that the number of mutations necessary to cancel homology is, as said, about 1-3 per site, and that can be achieved with a number of total mutations a few times the length of the sequence.

    I understand your argument, and my initial take on Jock's argument was the same. Yes, with random change more mutations are required, but again a tiny fraction of the sequence space.

    Now, I will ask a few very simple questions that even Entropy and Dazz should be able to answer:

    This is what “tweaked” these guys.

    Entropy agreed that the study of genetic information is an important academic endeavor, and that's what you are doing, so I am OK with the progress so far.

  140. 140
    Nonlin.org says:

    Origenes@137

    You still don't get it. I give you 10,000 trials as follows: 101010…10. Can you say it's "random"?

    Also, in real life you’re looking at biological black boxes. You have no prior idea what the stats should be.

    See? That’s what happens when you don’t read.

  141. 141
    gpuccio says:

    Allan Keith at #133:

    But first I want to commend you on a well written and thought out OP.

    Thank you.

    If I am reading this correctly (no guarantee), you are still looking at the green bricks from a post-hoc perspective. i.e., the green bricks represent specific targets, even though the shooter didn’t see them in advance. From this perspective, I agree that hitting these three green bricks by taking three blind shots would be highly improbable. However, let’s add an additional one hundred green bricks (as per your description). Blindly taking three shots has a much higher probability of hitting any three green bricks than blindly hitting three pre-specified green bricks. This is a better analogy to how evolution is purported to proceed.

    In the OP I have computed two different probabilities: for 100 green bricks (all of them hit), and for 50 (half hit). I don't think I have ever discussed 3 hits, but I admit that in the image there were only three, so that has probably confused you. No problem.

    Of course, the probability can be computed for any number of hits on any number of targets, and of course a crucial factor is the total number of bricks (the search space).

    The wall analogy had only one purpose, as told many times: to show that the TSS fallacy does not apply to good post-hoc specifications.

    We can look at these green bricks as all of the possible “targets” that could provide some advantage.

    IOWs, all naturally selectable targets. The advantage must be a reproductive advantage.

    But not all naturally selectable advantages are the same, of course. Antibiotic resistance gives reproductive advantage, in some conditions. But it is a simple trait. On the wall, it would be a huge green sector. It’s very easy to hit it.

    Not so ATP synthase, for example. Or most functional proteins, some more than others.

    They are like microscopic green points on the wall. Impossible to shoot them from a distance.

    As I have said many times, if we observe a complex target in a huge search space, it’s practically impossible that it has been shot by chance. That’s the essence of ID.

    Once hit, it is more likely to be preserved in the next generation of wall. And with every generation, three more blind shots are taken, and so on. How many generations will it take before all of the green bricks have been hit at some time throughout the generations? What I am trying to say is that three bricks do not have to be hit all at the same time. The odds improve even more if the wall has multiple offspring per generation

    This is a very common error. Nobody says that the bricks have to be hit "at the same time".

    As said, the analogy of the bricks is not a general model of the biological system. However, a green brick, if we want to make a generic connection between the two scenarios, could represent a complex protein. Shooting a functional island is almost impossible, because it is really extremely small as compared to the search space and to the probabilistic resources (the number of shots).

    However, I would not insist on the analogies between green bricks and proteins. As said, the only purpose was to state that they show the same kind of correct post-specification. You can find my treatment of the protein problem elsewhere, both in the OP and in the discussion, and in previous OPs, many times quoted.
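
    For readers who want to plug in their own numbers, here is a minimal sketch of that kind of computation. The wall size, number of green bricks and number of shots below are placeholders for illustration only, not the figures used in the OP, and the model (blind shots, each hitting a different brick, uniformly at random) is just one simple way to set it up:

    from math import comb, log2

    def p_all_shots_on_green(total_bricks, green_bricks, shots):
        """Probability that all blind shots land on green bricks,
        assuming each shot hits a different brick uniformly at random."""
        return comb(green_bricks, shots) / comb(total_bricks, shots)

    # Placeholder numbers, for illustration only:
    p = p_all_shots_on_green(total_bricks=10_000, green_bricks=100, shots=50)
    print(p)          # astronomically small
    print(-log2(p))   # the same improbability expressed in bits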

  142. 142
    Origenes says:

    Nonlin @

    Nonlin: You still don't get it. I give you 10,000 trials as follows: 101010…10. Can you say it's "random"?

    Yes, of course. If, after 10,000 trials, we have 50% "1", then this is consistent with 1 and 0 production being random. Why is this so difficult for you?

    Question for Nonlin: if, after 10,000 trials, the outcome is 10% 1 (and 90% 0), what does that tell you about the "randomness" by which 1s and 0s are being produced?
    According to your claim, "nothing". Do you now understand that this is wrong?

  143. 143
    gpuccio says:

    bill cole:

    "I understand your argument, and my initial take on Jock's argument was the same. Yes, with random change more mutations are required, but again a tiny fraction of the sequence space."

    In my example the change was random just the same, but I applied it at one site at a time, for simplicity. I did not want to run a simulation, just to show that a pathway from one sequence to another is a really small set of sequences, while the search space is combinatorial, and therefore it increases hugely with the length of the sequence.

    Entropy and dazz seemed to be under the strange illusion that the simple fact that all sites underwent mutations in some evolutionary time demonstrated that the search space had been traversed. That was a completely senseless idea, and I made a simple example to help them realize that. That's all.

    In a bacterial system, all positions in the genome undergo mutations in a relatively short time. The general mutation rate is about 10^-9 per replication per site, especially in microorganisms. In the Lenski experiment, if I am not wrong, all possible single substitutions must have been reached easily enough.

    What Entropy and dazz seem not to understand is that it is one thing to change each single position in a sequence (something that can easily be attained with a relatively small number of mutations), and quite another thing (really quite another thing) to reach all the possible combinatorial states of a long sequence.

    For a 150-AA sequence, those states number 20^150, about 1.427e+195, or 648 bits. That's more than all the states of the elementary particles in the whole universe over 15 billion years!

    How can anyone think that “an enormous chunk of that sequence space has been explored”?

    That's why I said, correctly, that both Entropy and dazz do not understand combinatorics. And if they don't understand combinatorics, how can they understand ID, let alone discuss it?

    I think that DNA_Jock understands combinatorics, even if his statement about the "e^-n" solution remains rather obscure to me. I would be happy to understand it, if he explained it. If he does, please let me know.

    But again, the simple fact that he does not acknowledge the blatant error in the statements made by his fellows, and just tries to find fault with what I have said, is sad evidence of his attitude in this discussion.

    Another point that I would like to clarify, because it could be a source of confusion, is that we must distinguish between two different counts of mutations in the biological system:

    1) The mutations that take place in all living organisms. As said, a generally accepted mutation rate is 10^-9 per replication per site. Assuming appropriate population sizes, replication times, mean genome lengths and the evolutionary time available, it is perfectly possible to compute an upper threshold for the total number of mutations that could take place on our planet in its lifetime. That's what I have done, with extreme generosity, in my table in the OP about the limits of RV.

    That's what my number of 139 bits means. The total number of possible states tested on earth (an upper threshold, by far). The probabilistic resources of our global system.

    2) Another concept is the mutations that we observe in the proteome of organisms. Those are the mutations that have been fixed, and they are of course a tiny subset of all the mutations that have happened.

    Neutral mutations, in particular, are fixed by drift, a completely random process. Of course, not all neutral mutations are fixed. And it takes time for a mutation to be fixed by drift.
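
    A minimal sketch of the arithmetic behind the figures quoted above (only the numbers already given in this thread are used):

    from math import log2

    sequence_length = 150                 # a 150-AA protein sequence
    states = 20 ** sequence_length        # every possible sequence of that length
    bits = sequence_length * log2(20)     # the same count expressed in bits

    print(f"{states:.3e}")                # ~1.427e+195 possible sequences
    print(round(bits))                    # ~648 bits

    # The OP's (deliberately generous) upper threshold for states ever tested on Earth
    # is about 139 bits, i.e. roughly 2**139 attempts -- negligible next to 2**648.
    print(f"{2.0 ** 139:.1e}")            # ~7.0e+41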

  144. 144
    gpuccio says:

    bill cole:

    "Entropy agreed that the study of genetic information is an important academic endeavor, and that's what you are doing, so I am OK with the progress so far."

    OK, let me know how it evolves! 🙂

    And, please, let me know if there is news from Joe Felsenstein about that old issue. You know, the thief… 🙂

  145. 145
    Origenes says:

    Allan Keith @133, GPuccio @142

    Allan Keith: If I am reading this correctly (no guarantee), you are still looking at the green bricks from a post-hoc perspective. i.e., the green bricks represent specific targets, even though the shooter didn’t see them in advance.

    I urge GPuccio to correct me if I am wrong, but I think Allan gets the analogy wrong here. The whole idea of the green brick analogy, as I understand it, is that there are two distinct explanations for the outcome: 'random shooting' and 'aimed shooting.' IOWs, we do not make a priori assumptions about what the shooter sees or does not see, as Allan does.

    In fact if we see, on post hoc inspection, that the bullets have only hit green bricks, then we have support for the explanation ‘aimed shooting’ — which is, of course, the whole point.

  146. 146
    gpuccio says:

    Origenes:

    Of course, the outcome and its probability in the system are the basis for the design inference. However, in my OP I have described three different scenarios.

    Only in the first can we look at the wall before the shooting. That's how we know that the targets are already on the wall: it's a case of pre-specification.

    In the other two scenarios, we cannot look at the wall before the shooting: both are cases of post-specification.

    However, the difference is that in the second scenario the targets are painted on the wall after the shooting: that violates both the rules for a correct post-specification. The targets are not objectively part of the wall, and to paint them we need to use the contingent information in the outcome (we have to know where each bullet is, to paint the target around it).

    In the third scenario, we just acknowledge that some specific targets that are objectively part of the wall have been hit. We did not know that before observing the outcome. We understand the target by observing the outcome, but both the rules for a correct post-specification are satisfied: the targets are an objective part of the wall, and we are not using the contingent information about the shots to paint anything.

    The purpose of the reasoning is to show that post-specifications, if they respect those two rules, are as valid as pre-specifications.

    One important point is that this reasoning is about identifying correctly the targets, and not about computing the probabilities. Once we confirm that our targets are real targets, valid targets, then we can compute the probabilities. And decide if we can infer design (or aiming).

    For example, in the second scenario we could still compute probabilities for our painted targets, and infer design, but we would be wrong, not because our computation is wrong, but because our targets are not real targets.

    But in the third scenario, targets are good, and the computation is as valid as in the first scenario.

    Again, the example is not about the validity of the computation, but about the validity of the targets.

    My second point instead:

    “The objection of the different possible levels of function definition.”

    is more connected to the computation itself, and shows that it must consider the upper tail for the observed effect.

  147. 147
    Origenes says:

    GPuccio @147

    There is a first time for everything 🙂 — we are talking past each other.

    GP: Only in the first can we look at the wall before the shooting. That's how we know that the targets are already on the wall: it's a case of pre-specification.

    In the other two scenarios, we cannot look at the wall before the shooting: both are cases of post-specification.

    My concern is not about the "we" in your story. My point is that we do not make a priori assumptions about what the shooter sees or does not see. Whether we look before or after the shooting is irrelevant to what the shooter sees.

    And, again, if we observe, post hoc of course, that only green bricks are hit, we have support for the idea that the shooter has seen those green bricks and has shot with aim.

  148. 148
    gpuccio says:

    Origenes:

    Oh, yes!

    I did not realize that Allan Keith was referring to the shooter.

    Of course we don’t know anything about what the shooter sees or does, we must infer that from the outcome.

    Thank you for clarifying that! 🙂

  149. 149
    gpuccio says:

    bill cole:

    I have checked with simulations that the expected number of attempts needed to change all the sites in a sequence by random mutations, if all the sites have the same probability of mutation, is well described by the coupon collector's problem:

    https://en.wikipedia.org/wiki/Coupon_collector%27s_problem

    See also the graph in the Wikipedia page.

    As you can see, the expected number varies from 2 to 5 times the sequence length, for sequence lengths approximately between 2 and 100.

    Even for a sequence of 1000, the expected number is about 7.5 times the sequence length.

    So, in all cases, as already said:

    “It’s an infinitesimal chunk”
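
    A minimal version of that kind of simulation (a sketch, assuming each random mutation hits one site chosen uniformly at random):

    import random

    def attempts_to_touch_every_site(n_sites):
        """Count random single-site mutations until every site has been hit at least once."""
        touched = set()
        attempts = 0
        while len(touched) < n_sites:
            touched.add(random.randrange(n_sites))
            attempts += 1
        return attempts

    for n in (15, 100, 1000):
        mean = sum(attempts_to_touch_every_site(n) for _ in range(200)) / 200
        print(n, round(mean / n, 1), "times the sequence length")
    # roughly 3.3x for 15 sites, 5.2x for 100, 7.5x for 1000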

  150. 150
    DATCG says:

    Gpuccio, once again, great OP.

    Thanks for your efforts in providing answers and/or rebuttals to different questions and opposing points. Enjoyed reading the OP and comments.

    And thanks for reviewing/rebutting ye olde Deck of Cards Fallacy as well.

    Are neo-Darwinists using the Deck of Cards fallacy, missing what Organized Specificity is: the Three Subsets of Functional Sequence Complexity (FSC)?

    Or intentionally misleading others who may not understand the sleight of hand? I realize some might think it's a valid argument.

    Are they not familiar with FSC? Or do they make the mistake of equating Random Sequence Complexity (RSC) with FSC?

    It seems they’re forced into making a case of the absurd through a random series of card tricks.

    But randomness (RSC) is not equal to function (FSC).

    I think neo-Darwinism has been, and is, on a fast track to nowhere.

  151. 151
    DATCG says:

    Gpuccio,

    Came across a paper several weeks ago. Hesitated to add it to your Ubiquitin, Semiosis OP.

    But I hope it's OK to drop it here, as I think it adds significance to your case on how life gets more difficult by the day for neo-Darwinism, random mutations and natural selection…

    Case for the genetic code as a triplet of triplets
    Fabienne F. V. Chevance and Kelly T. Hughes
    PNAS April 17, 2017

    http://www.pnas.org/content/ea.....1614896114


    Significance

    The genetic code for life is a triplet base code. It is known that adjacent codons can influence translation of a given codon and that codon pair biases occur throughout nature.

    We show that mRNA translation at a given codon can be affected by the two previous codons.

    Data presented here support a model in which the evolutionary selection pressure on a single codon is over five successive codons, including synonymous codons.

    This work provides a foundation for the interpretation of how single DNA base changes might affect translation over multiple codons and should be considered in the characterization of the effects of DNA base changes on human disease.

    Abstract

    The efficiency of codon translation in vivo is controlled by many factors, including codon context. At a site early in the Salmonella flgM gene, the effects on translation of replacing codons Thr6 and Pro8 of flgM with synonymous alternates produced a 600-fold range in FlgM activity.

    Synonymous changes at Thr6 and Leu9 resulted in a twofold range in FlgM activity. The level of FlgM activity produced by any codon arrangement was directly proportional to the degree of in vivo ribosome stalling at synonymous codons.

    Synonymous codon suppressors that corrected the effect of a translation-defective synonymous flgM allele were restricted to two codons flanking the translation-defective codon.

    The various codon arrangements had no apparent effects on flgM mRNA stability or predicted mRNA secondary structures.

    Our data suggest that efficient mRNA translation is determined by a triplet-of-triplet genetic code.

    That is, the efficiency of translating a particular codon is influenced by the nature of the immediately adjacent flanking codons.

    A model explains these codon-context effects by suggesting that codon recognition by elongation factor-bound aminoacyl-tRNA is initiated by hydrogen bond interactions between the first two nucleotides of the codon and anticodon and then is stabilized by base-stacking energy over three successive codons.

    Interesting….

    Changing the codon on one side of the defective codon resulted in a 10-fold increase in FlgM protein activity.

    Changing the codon on the other side resulted in a 20-fold decrease. And the two changes together produced a 35-fold increase.

    “We realized that these two codons, although separated by a codon, were talking to each other,” Hughes says. “The effective code might be a triplet of triplets.”

    Natural selection gets weaker…

    The difficulty for natural selection would be in finding codon optimization for a given gene. If the speed through a codon is dependent on the 5′ and 3′ flanking codons, and the flanking codons are dependent on their 5′ and 3′ flanking codons, then selection pressure on a single codon is exerted over five successive codons, which represent 61^5 or 844,596,301 codon combinations.

    To keep this in perspective, remember the Spliceosome and One Gene -> Many Proteins.

    If modified tRNAs interact with bases in a codon context-dependent manner that differs among species depending on differences in tRNA modifications, ribosome sequences, and ribosomal and translation factor proteins, it is easy to understand why many genes are poorly expressed in heterologous expression systems in which codon use is the primary factor in the design of coding sequences for foreign protein expression.

    The potential impact of differences in tRNA modifications represents a significant challenge in designing genes for maximal expression whether by natural selection or in the laboratory.

    Yep… more specificity matters….

    The tRNA molecules of every organism are modified extensively, and the majority of modifications occur at the antiwobble position of the anticodon loop and at the base immediately 3′ to the anticodon (18). [Thirteen other base positions are modified to a lesser extent in tRNA species of E. coli and Salmonella enterica (7).]

    The base adjacent to the 3′ anticodon position, the "cardinal nucleotide," also varies among species and is thought to affect codon recognition significantly (19).

    These modifications influence the stacking energy of the bases during codon–anticodon pairing (3).

    The translation proofreading steps catalyzed by EF-Tu and EF-G, which “sense” hydrogen bonding and stacking energy to determine if the correct codon–anticodon pairing has occurred, are influenced by the adjacent codons, possibly resulting in the codon-context effects we observe.

    Moreover, many tRNA-modifying proteins are present in only one of the three kingdoms of life (1).

    Thus, specific tRNA modifications that affect wobble base recognition and contribute to the base-stacking forces during translation can determine specific codon-context effects by adjacent synonymous codons on specific codon translation.

    Such effects of specific tRNA modifications on codon translation could account for the different codon pair biases observed in species that are evolutionarily distant (possessing different specific tRNA modifications) and also could account for the difficulty in expressing proteins in heterologous systems, i.e., expressing proteins from plant and mammalian systems in bacteria.

    The MiaA (i6A37) modification has recently been shown to affect mRNA translation in E. coli in a codon context-dependent manner, supporting our overall hypothesis (20).

    The translation of proline codons in the mgtL peptide transcript of Salmonella was recently shown to be affected by mutations defective in ribosomal proteins L27, L31, elongation factor EF-P, and TrmD, which catalyzes the m1G37 methylation of proline tRNA (21). Modification of tRNA species in E. coli also has been shown to vary with the growth phase of the cell (22).

    Specific codon-context effects could represent translation domains of life based on tRNA modifications.

    This triplet of triplets puts more constraints on the system. Where does this leave neo-Darwinism?

    and… “higher order genetic codes…”

    The tRNA modifications vary throughout the three kingdoms of life (3) and could affect codon–anticodon pairing.

    The differences in tRNA modifications could account for differences in synonymous codon biases and for the effects of codon context (the ability to translate specific triplet bases relative to specific neighboring codons) on translation among different species.

    Here, using in vivo genetic systems of Salmonella, we demonstrate that the translation of a specific codon depends on the nature of the codons flanking both the 5′ and 3′ sides of the translated codon, thus generating higher-order genetic codes for proteins that can include codon pairs and codon triplets.

    The effect of the flanking codons on the translation of a specific codon varies from insignificant to profound.

    It has been known for decades that highly expressed genes use highly biased codon pairs, which can vary from one species to the next. The speed of translation depends heavily on flanking codons (4).

    So cool. Regulatory factors surrounding other regulatory factors, surrounding a higher code, above the code 😉

    I came across this while searching another topic on genetic code. hat tip:
    https://evolutionnews.org/2017/04/genetic-code-complexity-just-tripled/

  152. 152
    OLV says:

    DATCG,

    That’s a very interesting paper. Thanks.
    Here’s a paper citing the one you quoted.
    https://doi.org/10.1080/15476286.2017.1403717

  153. 153
    uncommon_avles says:

    ET @124, bill cole @126,
    Well, the initial cells weren't complex. The structures became 'complex' over time by aggregation of processes: chemical concentrations, ion exchanges, structural agglomerations etc. When you look at the slime mold making 'decisions', it seems complex because you haven't considered the chemo-locomotion and pulses of plasmodium flow due to the 'target' concentrations (oatmeal at different concentrations). These are entirely environment-based processes. There is no need for any ID agent despite '500 bit' ID complexity. (No idea how to calculate bits for slime mold. I am assuming it is above 500 bits based on gpuccio's other examples. If not, let me know how you would calculate bits for this.)

    Origenes @ 148

    And, again, if we observe, post hoc of course, that only green bricks are hit, we have support for the idea that the shooter has seen those green bricks and has shot with aim.

    …or the bullets were smart bullets seeking green color, akin to a process guided by chemical, ionic, structural or other physical processes, as in the slime mold example I gave earlier.

    gpuccio @ 150

    As you can see, the expected number varies from 2 to 5 times the sequence length, for sequence lengths approximately between 2 and 100. Even for a sequence of 1000, the expected number is about 7.5 times the sequence length.

    I thought the argument always was that it is difficult to change the sequence completely? So if even a 4,000-site sequence takes just 35,485 mutations (4000 ln(4000) + Euler–Mascheroni constant × 4000 + 1/2), wouldn't you agree that changing a sequence completely is not difficult and you don't need an ID agent?
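
    A quick check of that arithmetic (a sketch using the same coupon collector approximation quoted in the comment above):

    from math import log

    n = 4000
    gamma = 0.5772156649              # Euler-Mascheroni constant
    expected = n * log(n) + gamma * n + 0.5
    print(int(expected))              # ~35,485 mutations to touch all 4,000 sites at least once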

  154. 154
    gpuccio says:

    uncommon_avles #154:

    Your posts are very useful indeed, because they are a good repository of common errors of thought. I will answer those that are, in some way, new to this thread:

    a) You say:

    Well, the initial cells weren’t complex.

    ??? What "initial cells"? Examples, please. Possibly not mere fairy tales, but facts.

    b) You say:

    (No idea how to calculate bits for slime mold. I am assuming it is above 500 bits based on gpuccio's other examples. If not, let me know how you would calculate bits for this.)

    Slime molds are a polyphyletic group.

    I will give here some information about the genome of Dictyostelium discoideum, the model organism for cellular slime molds.

    First of all, what is it?

    From Wikipedia:

    “Dictyostelium discoideum is a species of soil-living amoeba belonging to the phylum Amoebozoa, infraphylum Mycetozoa. Commonly referred to as slime mold, D. discoideum is a eukaryote that transitions from a collection of unicellular amoebae into a multicellular slug and then into a fruiting body within its lifetime. Its unique asexual lifecycle consists of four stages: vegetative, aggregation, migration, and culmination. ”

    So, it is an eukaryote. Not a simple organism at all.

    The genome:

    Genome size: 34 Mb
    Chromosomes: 6
    Number of protein-coding genes: 12257 (Humans: 20000)
    Mean protein size: 580 AAs (Humans: 561)
    Number of genes with introns: 7996

    All data from:

    http://dictybase.org/Dicty_Inf.....stics.html

    An organism “without brain or nervous system”? OK, but:

    Using the social amoeba Dictyostelium to study the functions of proteins linked to neuronal ceroid lipofuscinosis

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5122030/

    Abstract
    Neuronal ceroid lipofuscinosis (NCL), also known as Batten disease, is a debilitating neurological disorder that affects both children and adults. Thirteen genetically distinct genes have been identified that when mutated, result in abnormal lysosomal function and an excessive accumulation of ceroid lipofuscin in neurons, as well as other cell types outside of the central nervous system. The NCL family of proteins is comprised of lysosomal enzymes (PPT1/CLN1, TPP1/CLN2, CTSD/CLN10, CTSF/CLN13), proteins that peripherally associate with membranes (DNAJC5/CLN4, KCTD7/CLN14), a soluble lysosomal protein (CLN5), a protein present in the secretory pathway (PGRN/CLN11), and several proteins that display different subcellular localizations (CLN3, CLN6, MFSD8/CLN7, CLN8, ATP13A2/CLN12). Unfortunately, the precise functions of many of the NCL proteins are still unclear, which has made targeted therapy development challenging. The social amoeba Dictyostelium discoideum has emerged as an excellent model system for studying the normal functions of proteins linked to human neurological disorders. Intriguingly, the genome of this eukaryotic soil microbe encodes homologs of 11 of the 13 known genes linked to NCL. The genetic tractability of the organism, combined with its unique life cycle, makes Dictyostelium an attractive model system for studying the functions of NCL proteins. Moreover, the ability of human NCL proteins to rescue gene-deficiency phenotypes in Dictyostelium suggests that the biological pathways regulating NCL protein function are likely conserved from Dictyostelium to human. In this review, I will discuss each of the NCL homologs in Dictyostelium in turn and describe how future studies can exploit the advantages of the system by testing new hypotheses that may ultimately lead to effective therapy options for this devastating and currently untreatable neurological disorder.

    Dictyostelium as a model system for studying human neurological disorders
    The social amoeba Dictyostelium discoideum is a fascinating microbe that has emerged as a valuable model organism for biomedical and human disease research. This model eukaryote, which has historically been used to study basic cell function and multicellular development, undergoes a 24-h asexual life cycle comprised of both single-cell and multicellular phases [1] (Fig. 1). As a result, it is an excellent system for studying a variety of cellular and developmental processes, including lysosome function and intracellular trafficking and signalling [2, 3]. In nature, Dictyostelium feeds and grows as single cells (Fig. 1). When prompted by starvation, cells undergo chemotactic aggregation towards cAMP to form a multicellular aggregate (i.e., a mound), which then undergoes a series of morphological changes to form a motile multicellular pseudoplasmodium, also referred to as a slug (Fig. 1). Cells within the slug then terminally differentiate into either stalk or spore to form a fruiting body [4] (Fig. 1). Unlike immortalized mammalian cells that have been removed from their respective tissues, Dictyostelium represents a true organism in the cellular state that retains all of its dynamic physiological processes. Moreover, the cellular processes and signalling pathways that regulate the behaviour of Dictyostelium cells are remarkably similar to those observed in metazoan cells, indicating that findings from Dictyostelium are highly likely to be translatable to more complex eukaryotic systems [5].
    Dictyostelium is recognized as an excellent model system for studying human neurological disorders, including epilepsy, lissencephaly, Parkinson’s disease, Alzheimer’s disease, and Huntington’s disease [6–10].

    Emphasis mine.

    An example of functional complexity in this “simple” eukaryotic cell? Of course, there are thousands.

    Just one for all. If you have read my previous OP about the spliceosome:

    The spliceosome: a molecular machine that defies any non-design explanation.

    https://uncommondescent.com/intelligent-design/the-spliceosome-a-molecular-machine-that-defies-any-non-design-explanation/

    you will find an entire section about a long, complex and extremely conserved (in eukaryotes) protein, a highly functional component of the spliceosome system: Prp8.

    Is that protein present in our “simple” slime mold?

    Of course it is.

    How much human conserved functional information does it show in the slime mold?

    And the answer is:

    3805 bits!

    (79% identity)

    I think that’s enough.

  155. 155
    gpuccio says:

    uncommon_avles #154 (continued):

    c) You say:

    I thought the argument always was that it is difficult to change the sequence completely? So if even a 4,000-site sequence takes just 35,485 mutations (4000 ln(4000) + Euler–Mascheroni constant × 4000 + 1/2), wouldn't you agree that changing a sequence completely is not difficult and you don't need an ID agent?

    I am always amazed at how little our interlocutors understand ID.

    This is a good example.

    The argument never was that it is difficult to change the sequence completely. It is not difficult at all, as I have clearly explained.

    The argument, of course, is that it is extremely difficult, empirically impossible, to find complex functional sequences in the ocean of non functional possible sequences, by random variation.

    To change a sequence (functional or not) into another non functional sequence is extremely easy.

    To change a sequence (functional or not) into the specific functional sequence of the beta chain of ATP synthase, or into the specific functional sequence of Prp8, is empirically impossible.

    Do you understand?

  156. 156
    Nonlin.org says:

    gpuccio@112

    Me: “Probabilities of randomness in biology are ridiculously low, therefore not even worth seriously discussing.”

    You: “Well, for me they are worth of a very serious discussion. Exactly because they “get extreme very quickly””

    Your reply makes no sense whatsoever. Can you explain?

    Also see Nonlin@139

  157. 157
    Nonlin.org says:

    Origenes@143

    Nonlin: You still don't get it. I give you 10,000 trials as follows: 101010…10. Can you say it's "random"?

    1. Yes, of course. If, after 10,000 trials, we have 50% "1", then this is consistent with 1 and 0 production being random. Why is this so difficult for you?

    2. Question for Nonlin: if, after 10,000 trials, the outcome is 10% 1 (and 90% 0), what does that tell you about the "randomness" by which 1s and 0s are being produced?
    According to your claim, "nothing". Do you now understand that this is wrong?

    You’re completely lost.

    1. “It’s consistent” means absolutely nothing. Fact is, you cannot say FOR SURE. As you know, I designed that sequence, so no, it’s not random. I also designed a sequence that incorporates randomness: “flip coin and then reverse output for the other 9999 outputs”.

    2. Yes, “nothing” is the right answer – not from the outcome. Remember it’s a black box – you don’t know the stats – it might be a 10-face die with one 1 and nine 0. It can also be a loaded coin or even a fair coin and your trial is just one of many trials (hand picked or freak outcome).

    Of course, if you already know the system, you don’t learn anything new from one set of outputs, so the answer is still “nothing”.

    Now you get it?

  158. 158
    ET says:

    uncommon alves:

    Well, the initial cells weren’t complex.

    Evidence please.

    When you look at the slime mold making 'decisions', it seems complex because you haven't considered the chemo-locomotion and pulses of plasmodium flow due to the 'target' concentrations (oatmeal at different concentrations).

    Where did you get the slime mold from? Walmart?

    These are entirely environment-based processes.

    Cuz you say so? Really?

  159. 159
    Allan Keith says:

    Gpuccio,

    One important point is that this reasoning is about identifying correctly the targets, and not about computing the probabilities. Once we confirm that our targets are real targets, valid targets, then we can compute the probabilities. And decide if we can infer design (or aiming).

    But they are still “targets”. As such, they are pre-defined whether you admit it or not.

    Scenario two is obviously a fallacy. Scenario one is not. But the arguments often used by ID are to look at an extant structure and calculate the probability of it arising randomly, as if it arose in one step as a fully formed structure, which no biologist is suggesting. Looking at the flagellum and trying to calculate the probability of it arising through known evolutionary mechanisms from some ancient starting point is an improper use of probability. A proper use of probability would be to start from the same starting point and calculate the probability of any structure of equal complexity evolving through known evolutionary mechanisms. Frankly, I have no idea how this probability could be calculated, but that is what would have to be done to conclude that something we see today is too improbable to have happened in an 'undirected' fashion.

    An analogy would be to start at the starting point of your ancient Roman ancestor (I am assuming that you are Italian). From that point, what is the probability that you, with your unique DNA sequence, would exist on April 20, 2018? Given all of the things that would have had to happen over the thousands of years for this to occur, the probability would be astronomically small. Yet, here you are. A proper use of probability, more akin to what happens with evolution, would be to start from the same starting point and estimate the probability that your ancestor would have a living descendant on April 20, 2018. This probability, obviously, is much higher. Not 1, but close to it.

  160. 160
    Origenes says:

    At TSZ, there is interest in the following paper:
    Random sequences rapidly evolve into de novo promoters, by A. H. Yona et al.

    The text contains optimistic passages: “These features make promoter evolution a promising avenue to consider how complex features can evolve.” and “Following these, the evolving populations highlighted that new promoters can often emerge directly by mutations, and not necessarily by genome rearrangements that copy an existing promoter. Substantial promoter activity can typically be achieved by a single mutation in a 100-base sequence, and can be further increased in a stepwise manner by additional mutations that improve similarity to canonical promoter elements.”

    The paper is about “short mutational distances” — it speaks of “only one mutation” and “substantial promoter activity can typically be achieved by a single mutation in a 100-base sequence …”
    Of course, ID proponents like Gpuccio and Behe have often pointed out that such changes are within the reach of natural selection. So, the paper may not be relevant to ID.

    What baffles me is that “10% of random sequences can serve as active promoters” and for many others (60%) this function can “typically be achieved by a single mutation”. How can this be? Are promoters so simple that any ol’ sequence will do?
    Well, not according to the same paper: “The Escherichia coli promoter represents a complex sequence feature as it consists of different elements that act together to transcribe a gene. The RNA polymerase requires particular sequence elements for binding, and additional features, such as transcription factors and small ligands can further affect its activity.”
    Functional stuff, therefore, but how do we square this with a “short mutational distance” from any random sequence of 103 bases long?
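
    One way to put the paper's own percentages into the functional information framework used in this thread (a sketch; the 10% figure is the one quoted above, nothing else is assumed): if about 10% of random 103-base sequences already show promoter activity, the corresponding functional information is tiny, which would make "promoter activity" a very large, easy target compared with the whole 206-bit sequence space.

    from math import log2

    fraction_active = 0.10            # ~10% of random 103-base sequences act as promoters (per the paper)
    sequence_space_bits = 103 * 2     # 103 bases, 2 bits per base

    functional_information = -log2(fraction_active)
    print(round(functional_information, 1))   # ~3.3 bits, out of a 206-bit sequence space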

  161. 161
    Origenes says:

    Nonlin @158

    Nonlin: “It’s consistent” means absolutely nothing. Fact is, you cannot say FOR SURE.

    This is where you go wrong. Given a large enough set, if some result is consistent with a random production, then this obviously MEANS (yes, it does mean something) that we cannot exclude the possibility of random production, even though it does not provide a basis to be sure. On the other hand, given a large enough set, if a result is not consistent with a random production, then it also tells us something.
    Question to Nonlin: what can that be?

  162. 162
    ET says:

    Allan:

    Looking at the flagellum and trying to calculate the probability of it arising through known evolutionary mechanisms from some ancient starting point is improper use of probability.

    That is your opinion. And seeing that you are not an authority, no one will listen.

    And your equivocation is also duly noted.

    No one knows how to test the claim that undirected processes produced any bacterial flagellum. And given the paper on waiting for two mutations, it is clear that there isn't enough time in the universe for undirected processes to do such a thing.

  163. 163
    Allan Keith says:

    ET,

    That is your opinion.

    And that of statisticians.

  164. 164
    LarTanner says:

    #163-

    That is your opinion. And seeing that you are not an authority, no one will listen.

    And there go 95 percent of the OPs and comments on UD.

  165. 165

    Nonlin @ 158: Actually, it is you who is completely lost. Not very impressive… even by a/mat standards.

  166. 166
    gpuccio says:

    Nonlin.org at #139 and #157:

    a) I always use a very explicit and clear definition of functional information. See here:

    Functional information defined

    https://uncommondescent.com/intelligent-design/functional-information-defined/

    Functional information is complex if it is beyond some appropriate threshold for the system. For a general system, 500 bits is appropriate as a universal threshold (as in Dembski).

    b) Again, you equate design with law. That is not correct, certainly not with our use of those terms. You say:

    Yes, “The content of design in unpredictable, because it depends on the desires and cognitive abilities of the designer”, but the only way you can label something “designed” is to see that it is non-random, i.e. it follows certain rules – those imposed by the designer. In other words, design = regularity.

    No. The results of natural laws are regularities, but those results are not designed. The laws could be designed, but this is a cosmological argument. The biological argument of ID detects design inside the universe, not design of the universe. Inside the universe, the results of laws are not designed, because, once the laws exist, there is no need for any conscious intervention for them to operate.

    c) You conflate and confound different levels and kinds of functional information. You say:

    Example: Paley’s watch will have regular shapes and uniform materials that look different than a random pile of matter.

    But that is not the reason why we infer design for the watch. We can, at most, infer design for the parts, from that reasoning.

    We infer design for the watch for the specific configuration of parts that implement the function of measuring time.

    The individual parts, even if regular, would not allow any measure of time for the simple fact that they are regular. The function derives from the specific configuration of parts that implements the working machine, and that is not a regularity, but a functional specificity.

    d) You confound random configurations with designed objects. You say:

    You look at a sand dune or a sand garden – close-up it’s just “random” grains of sand, but wide-angle you see patterns that beg for an explanation.

    And the explanation is simple: those are patterns that are well explained by the action of weather and similar laws. They require no conscious intelligent design. Only the operation of existing laws on an existing system. No design here.

    e) You confound non detectable design with absence of design. You say:

    Can someone design a sand garden to look like a naturally occurring sand dune? Sure, and they’re indistinguishable (because they’re both designed if you ask me)!

    The definition of design is any process where conscious representations are the source for the form outputted to matter by the designer.

    A naturally occurring sand dune is not designed by any conscious designer, unless you argue that everything that exists is designed, exactly as it is, by God. But again, that's a philosophical argument, true or false as it may be. It is not a scientific argument.

    From a scientific point of view, we have objects that have been designed, because a conscious agent gave them the form they have, in time and space, starting from his conscious representations, and objects that are not designed, because that process never happened (at least in time and space).

    A designed thing can be indistinguishable from a non-designed thing. If I design dunes so that they appear like natural dunes, and if I am good at it, nobody will be able to detect design from the result. But if somebody sees the process, design can still be affirmed.

A lot of designed objects are such that we cannot detect design in them. The usual reason is that they are too simple, even if designed. We cannot infer design for simple configurations, even if the objects are really designed. These are the false negatives of design inference, and there are a lot of them.

    Another possible reason is that the designed object, even if complex, is intentionally designed to appear similar to a non designed object. That’s the case of the dune garden. Even if designed, design is not detectable.

    e) I don’t agree with your concepts about randomness. You say:

    You say: “We must distinguish between usual randomness and quantum randomness.” – but this doesn’t make sense to me because “randomness” is ONLY a theoretical concept (like line, circle and point) – we can never determine something to be “random” – again, see: http://nonlin.org/random-abuse/ . Also, what we call “random” is never completely undetermined – all such phenomena have a deterministic element – at a minimum their statistical distribution and boundaries (no six face die will ever come up seven).

Again, you treat randomness as though it were a property of objects in order to affirm that it is not one (which is true), and then say that it is “only a concept” because we cannot determine whether something is random (IOWs, again treating it as a property of objects).

    Have you read my comment #112?

    “Usual randomness just means that there is some system whose evolution is completely deterministic, but we can’t really describe its evolution in terms of necessity, because there are too many variables, or we simply don’t know everything that is implied.

    In some cases, such a system can be described with some success using an appropriate probability function. Probability functions are well defined mathematical objects, which can be useful in describing some real systems.

A probabilistic description is certainly less precise than a necessity description, but when the second is not available, the first is the best we can do.

A lot of empirical science successfully uses probabilistic tools.”

    IOWs, randomness is simply our way to describe a deterministic system by a probability function. Therefore, all your reasonings about it being or not being a property of the objects are wrong. It is a property of our type of scientific description. In the described systems, everything is deterministic, but our description of the configurations is probabilistic.

    You say:

    “what we call “random” is never completely undetermined – all such phenomena have a deterministic element”

    But that makes no sense. All phenomena that we describe as random are completely deterministic. They don’t have “a deterministic element”. They are completely deterministic. (Except for quantum events).

Of course we choose the probabilistic model so that it models the system correctly. Of course if a die has six possible configurations, we choose a distribution with six levels. For a coin, we choose a distribution with two levels. This is not “a deterministic element”. It is only a good way of choosing models.

The deterministic elements in tossing a die or a coin are the laws of mechanics, which determine exactly the result of each single event. But we cannot compute those results because we don’t know all the variables.

Therefore, we describe the configurations by a probability distribution. And we can get very good results in that way.
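As a purely illustrative sketch of this point (mine, not part of the original exchange): a seeded pseudo-random generator is a completely deterministic process, yet a uniform probability distribution over six faces describes its output very well, exactly as a probability distribution describes the fully deterministic tossing of a real die.

```python
# Minimal sketch (illustrative, not from the discussion): a fully deterministic
# process whose outputs we nevertheless describe with a probability distribution.
# A seeded pseudo-random generator is pure determinism, yet a uniform
# distribution over the six faces summarizes it very well.
import random
from collections import Counter

rng = random.Random(42)          # fixed seed: the whole sequence is completely determined
rolls = [rng.randint(1, 6) for _ in range(60_000)]

counts = Counter(rolls)
for face in range(1, 7):
    # each observed frequency should hover around the modeled P(face) = 1/6
    print(face, counts[face] / len(rolls))
```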

    f) You don’t understand the difference between design and design detection. You say:

    You say: “An outcome that is non random is not necessarily designed.” How so? Provide example. If you think the sand dune is determined by “natural forces” and the “laws of physics”, then how do you know that it’s not ultimately designed?

    If you put objects of different density in water, some will float, some will go down, according to the density. This outcome is not random. And it is not designed.

Again, I am not debating whether natural laws are designed or not. That is a different issue.

But, given natural laws, no conscious intelligent agent is acting on those objects to make them float or go down. The outcome is not random, and it is not designed. It is deterministic, and it is simple enough that we can describe it by necessity laws (the objects’ density and water’s density), without any need for a probabilistic description, which would not be equally precise.

    Regarding the dune, I will not infer design for it, because it has no complex functional information. If it was designed that way, I get a false negative.

    As explained, the design inference is made using extremely high thresholds of complexity (for example, 500 bits).

    The purpose for that is to have empirically no false positives, but the consequence is that we have a lot of false negatives. If you are familiar with the trade-off between sensitivity and specificity, you will understand that point. Here we need specificity, and we happily renounce sensitivity. Our purpose is to detect design correctly and safely in some objects, not to detect all designed objects.
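To make this concrete, here is a small numerical sketch (the trial count below is my own deliberately generous assumption, not a figure from the discussion): even granting an enormous number of random attempts, the chance of hitting any particular 500-bit target remains effectively zero, which is why the threshold yields essentially no false positives while happily tolerating many false negatives.

```python
# Toy calculation (illustrative assumptions, not figures from the OP):
# even with an absurdly generous number of random trials, the chance of
# hitting a specific 500-bit target stays effectively zero.
from math import log2

bits = 500
p_single = 2.0 ** -bits        # ~3.05e-151: chance that one random trial hits the target
trials = 10 ** 45              # assumed, deliberately generous number of trials

# Union bound: P(hit at least once) <= trials * p_single
print(f"trials supply ~{log2(trials):.0f} bits of probabilistic resources")
print(f"P(false positive) <= {trials * p_single:.2e}")
```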

    g) You apparently don’t understand the purpose of ID theory in biology. You say:

    Me: “Probabilities of randomness in biology are ridiculously low, therefore not even worth seriously discussing.”

    You: “Well, for me they are worth of a very serious discussion. Exactly because they “get extreme very quickly””

    Your reply makes no sense whatsoever. Can you explain?

    Yes, it makes very good sense.

The probabilities of observing objects exhibiting functional complexity as a random result in some non-design system “get extreme very quickly”, indeed exponentially, as the observed functional information increases.

    That’s what allows a safe design detection after some appropriate threshold is reached. Again, 500 bits for the general case.

    Therefore, those probabilities are “worth of a very serious discussion”, because they allow us to detect design in biological objects.

    I hope this answers your points.

  167. 167
    ET says:

    LarTanner-

I don’t care about probability arguments for the simple reason that evolutionism doesn’t deserve a seat at that table. Evolutionists can’t figure out how to test their claims, and that is more than enough to understand they have nothing.

No one knows how to test the claim that undirected processes produced any bacterial flagellum. And given the paper “Waiting for two mutations”, it is clear that there isn’t enough time in the universe for undirected processes to do such a thing.

    You lose

  168. 168
    ET says:

    Allan-

    I don’t care about statisticians. They cannot help evolutionism.

  169. 169
    uncommon_avles says:

    ET @ 159
A complex cell structure would have shown signs of complex organisms millions of years ago! We would be the dumbest organism if devolution happened. You need to get over this idea of an ID agent creating everything by frontloading data and processes, if you want to understand science.

    Where did you get the slime mold from- Walmart?… Cuz you say so? Really?

No, because REAL scientists carried out experiments, instead of just speculating about agents scurrying around and hurrying up ‘complex processes’. Please read the Nature link which was given earlier to understand how the slime mold worked.

    gpuccio @ 155
No offence, but this is exactly what I was referring to when I said ‘pseudo-science’. A slime mold doesn’t have a brain or nervous system, thus it is entirely controlled by environmental factors. The biological processes are dependent on the environment. The metrics presented by you don’t make it any more ‘intelligent’ than it is. By putting up a ‘bits’ metric you are just trying to project a series of purely physical processes as something which needs intelligence.

    gpuccio @ 156

    To change a sequence (functional or not) into the specific functional sequence of the beta chain of ATP synthase, or into the specific functional sequence of Prp8, is empirically impossible.

    Then how does it happen? I mean how would an external entity do it?

  170. 170
    ET says:

    uncommon alves:

A complex cell structure would have shown signs of complex organisms millions of years ago!

    Cuz you say so?

You need to get over this idea of an ID agent creating everything by frontloading data and processes, if you want to understand science.

    You don’t understand science and you don’t understand front-loading

Please read the Nature link which was given earlier to understand how the slime mold worked.

    Evolutionism cannot account for the existence of slime molds.

  171. 171
    ET says:

    uncommon alves:

    A Slime Mold doesn’t have brain or nervous system thus it is entirely controlled by environmental factors.

That doesn’t follow. A slime mold is made up of organisms, each an intelligent agency in its own right. They sense their environment and act accordingly.

  172. 172
    gpuccio says:

    Origenes at #161:

    It is a very good paper. If you read it carefully, you will see that it uses all the ID concepts, and it uses them correctly.

    You say:

    “Of course, ID-proponents, like Gpuccio and Behe, have pointed out, often, that such is within the reach of natural selection. So, the paper may not be relevant to ID.”

    It is relevant. Because it shows that ID concepts are correct, and that they can be applied correctly in experiments.

    Of course, the results are in perfect accord with ID theory. Simple results, that are in the range of RV + NS, can be definitely achieved by RV + NS. This is a very important point.

Another very good point is that the authors reach well-described results in their experiment, and then they compare those results to a good computational analysis of the search space and target space. And the two kinds of results are perfectly compatible. I like this procedure very much.

Of course, the result here is very simple, from the point of view of functional complexity. The “new” function (again, a function retrieval, the retrieval of the promoter) has a complexity of 2 bits (a single nucleotide substitution). And the optimized function is reached by one additional 2-bit mutation.

You say:

    “What baffles me is that “10% of random sequences can serve as active promoters” and for many others (60%) this function can “typically be achieved by a single mutation”. How can this be? Are promoters so simple that any ol’ sequence will do?”

    Yes, they are, according to these results. However, it was already known that promoters are rather simple, even if this is probably the first accurate measure of how simple they are.

    However, as explained very well in the paper, the important functional element is the correct balance between useful and deleterious promoters, the “trade-off” well discussed in the paper.

    You say:

    “Well, not according to the same paper: “The Escherichia coli promoter represents a complex sequence feature as it consists of different elements that act together to transcribe a gene. The RNA polymerase requires particular sequence elements for binding, and additional features, such as transcription factors and small ligands can further affect its activity.”
    Functional stuff, therefore, but how do we square this with a “short mutational distance” from any random sequence of 103 bases long?”

It’s not so difficult. If you read the initial description of the promoter sequence, you will see that the functional nucleotides are only a few. It’s “complex”, but not so much. Moreover, we are discussing nucleotides here, not AAs. The alphabet is base four. Each position is 2 bits.

    Moreover, even those functional elements are not extremely specific, as shown by the results. Therefore, the whole functional complexity of a single promoter of this type is probably very low, maybe about 20 bits or less.

    That is completely in the range of RV, and the following optimization by 1 additional mutation is of course completely in the range of NS.
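A hedged back-of-the-envelope version of this claim (the number of states sampled is an assumed, illustrative figure, not taken from the paper):

```python
# Back-of-the-envelope sketch (illustrative numbers, not the paper's):
# a ~20-bit functional target is about 1 chance in 2^20 per sampled state,
# well within reach of the states a bacterial population can explore.
target_bits = 20
p_target = 2.0 ** -target_bits        # ~9.5e-7 per sampled sequence state

states_sampled = 10 ** 9              # assumed number of states explored by the population
expected_hits = states_sampled * p_target

print(f"P(target per state) = {p_target:.2e}")
print(f"expected hits over {states_sampled:.0e} states = {expected_hits:.0f}")
```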

    An important question here is: how do these results relate to the many times quoted paper “Waiting for two mutations”? Which was about a similar problem.

    Even if I have not done the math in detail, I think they are in perfect accord.

The main difference is that the “Waiting for two mutations” paper is about what should happen in a natural setting. It models not only the probability for the mutations, but also the probabilities of fixation. That is very important. Please, see also my comment #144, the final part.

    In this paper, fixation is not considered. They only look at the appearance, and further optimization, of the function. Which is perfectly fine, given the purposes of the paper. But it also explains the differences with the other paper.

  173. 173
    gpuccio says:

    uncommon_avles:

    “Then how does it happen? I mean how would an external entity do it?”

In the same way that we design our artifacts: the conscious intelligent designer inputs specific configurations into the object.

    The main possible mechanism is guided variation. Transposons, IMO, are a good candidate as design tools.

  174. 174
    gpuccio says:

    Allan Keith at #160:

    But they are still “targets”. As such, they are pre-defined whether you admit it or not.

    They are “pre-existing”, not “pre-defined”. And I have not only admitted that idea: I have definitely defended it!

The existence of complex configurations that allow the existence of ATP synthase is a consequence of biochemical laws. In principle, such a machine could simply be impossible. But that is not the case. It can be built. But of course you need a lot of specific crafting to get it.

    Not all machines are possible. We can conceive of a machine that allows us to go back in time. Maybe it is possible, maybe it isn’t.

    But if we observe one, working, then we know that it is possible. We know that it is a real target.

    And if we see that it needs complex hardware to work, we know that it is a real target that is functionally complex.

    You say:

    Scenario two is obviously a fallacy. Scenario one is not. But the arguments often used by ID are to look at an extant structure and calculate the probability of it arising randomly, as if it arose in one step as a fully formed structure. Which no biologist is suggesting.

    And neither is any IDist suggesting that!

    Again the same error.

    It is not important at all if it arose in one step or in 1000 successive steps.

The point is that, if the function is complex, it will not work until its specific bits are all there. They can arrive there in steps or not, however you like. The simple point is that, until they are there, the function is not there. And therefore, it cannot be selected. You cannot select something that does not exist.

    You say:

    Looking at the flagellum and trying to calculate the probability of it arising through known evolutionary mechanisms from some ancient starting point is improper use of probability.

    Only if you do that improperly. In principle, it’s perfectly feasible.

    And however, the flagellum is about IC, and the computation is more difficult. Let’s stick to the alpha and beta chains of ATP synthase, OK?

    You say:

    A proper use of probability would be to start from the same starting point and calculate the probability of any structure of equal complexity evolving through known evolutionary mechanisms.

    First, you always forget: any structure of equal complexity that implements a naturally selectable function.

    I have discussed this objection both in the OP and in the thread. Please, see KF’s comment at #89, and my comment at #90 (first part), that I quote here for your convenience:

You are perfectly right: the important point is not the absence of other needles (their existence in principle cannot be excluded, and in many cases can be proved), but the fact that they are still needles in a haystack.

    IOWs, the existence of alternative complex solutions does not have any relevant effect on the computation of the improbability of one individual needle. It’s the functional specificity of each individual needle that counts.

    That’s what I have tried to argue with my discourse about time measuring devices. Evoking only ridiculous answers from DNA_Jock, who probably really believes that the existence of water clocks and candle clocks makes the design inference for a watch a TSS fallacy! Indeed, he seems so certain that we are “painting” the function of measuring time around the random object that is our watch!

    Any solution that is highly specific is designed. We have absolutely no counter-examples in the whole known universe.

    IOWs, to infer design for a watch, there is no need at all to consider all other possible machines of similar complexity that could exist, not even of those that could in principle measure time: we just need to measure the complexity of the watch, and recognize that it is simply too big to be compatible with any random origin.

    We are not discussing small improbabilities here. We are discussing extreme improbabilities, beyond any possible doubt.

    You say:

Frankly, I have no idea how this probability could be calculated, but that is what would have to be done to conclude that something we see today is too improbable to have happened in an ‘undirected’ fashion.

Wrong. See above. You don’t understand that we are discussing an empirical inference here, an inference to the best explanation, that is completely warranted in this case.

There is no need to compute the exact probability. We just have to realize that the design explanation is the best explanation, and that the idea that an unimaginable number of complex solutions do exist, against any reasonable or empirical support for the idea itself, is just ad hoc reasoning motivated by faith and ideology.

See also my comment #122 here.

    You say:

    An analogy would be to start at the starting point of your ancient Roman ancestor (I am assuming that you are Italian).

    I am.

    You say:

    From that point, what is the probability that you, with your unique DNA sequence, would exist on April 20, 2018? Given all of the things that would have had to happen over the thousands of years for this to occur, the probability would be astronomically small. Yet, here you are. A proper use of probability, more akin to what happens with evolution, would be to start from the same starting point and estimate the probability that your ancestor would have a living descendant on April 20, 2018. This probability, obviously, is much higher. Not 1, but close to it.

    No! Not the infamous deck of card fallacy again!

    See #35, #52, and especially #859 in the Ubiquitin thread, in answer to you!

    This is not “a proper use of probability”. It’s a silly use of probability, and the “argument” is a silly fallacy.

    If you are sincere, please consider carefully my arguments in my previous answer to you about that in the Ubiquitin thread.

    If you are only joking, do as you like.

  175. 175
    LocalMinimum says:

    AK @ 160:

    But the arguments often used by ID are to look at an extant structure and calculate the probability of it arising randomly, as if it arose in one step as a fully formed structure.

    When producing a structure out of functional, selected for substructures, you have to modify the substructures from independent functionality to properly networked dependent functionality, as well as develop the structure of the intersection. Naturally, this has to be done in a single step, otherwise it will be selected against by the loss of the selected for functions (critically if the loss of function is fatal).

    So, you essentially need to not only make a new set of structures out of old, you have to make the previously unselected for “glue” networking structure as well, which itself is going to be even more complex if you’re making all the pre-existing structures “plug-n-play” biology. And make them all land in the right places, right orientations, etc. (configuring your bag of parts isn’t free, either)

    Relying on previously non-functional, unselected for components of the composite system lying around is no better than expecting it to arise all at once (if you don’t constrain the range of the random mutation function, which evolutionists don’t, because it helps their case not to and could even constrain them out of a job). You’re still expecting to have the right n number of bits worth of structure on hand just because.

    Also, the chance for continuity of a self-replicating mechanism is calculated the same as the chance for the discontinuity of that self-replicating mechanism in the direction of increasing functionality? Well, multiplied with some coefficient if you’re just assuming upward evolution happens, and at a rate you can draw a line through. But when upward evolution happening is at issue, your argument is circular.

  176. 176
    Allan Keith says:

LocalMinimum,

    When producing a structure out of functional, selected for substructures, you have to modify the substructures from independent functionality to properly networked dependent functionality, as well as develop the structure of the intersection. Naturally, this has to be done in a single step, otherwise it will be selected against by the loss of the selected for functions (critically if the loss of function is fatal).

    Not if the individual steps are equally or more fit than the original structures. For example, the difference between an injectisome and a flagellum is not that great. The change would require the loss of function as an injectisome but the function of a flagellum may more than offset this loss of function.

  177. 177
    ET says:

    Allan:

    For example, the difference between an injectisome and a flagellum is not that great.

    They are both IC. And they both require different command and control.

    How many specific mutations would it take to evolve a flagellum for your injectisome? Do you have any idea if such a transformation can be had via genetic changes?

  178. 178
    ET says:

    The only reason probability arguments are used is because there isn’t anything else. Meaning there aren’t any experiments to call on. There isn’t even a methodology to test the claims.

    What I don’t understand is why evos don’t think that is a problem.

  179. 179
    LocalMinimum says:

    AK @ 177:

    I assumed we were speaking of IC structures, because that’s the mode of the thread. If that assumption wasn’t shared, please excuse me.

    When producing IC structures out of already functioning components, you run out of those single steps. Obviously, if the whole of the system can be parted neatly into independently useful components, it’s not IC.

    Every bit of the structure that can’t operate independently must then be produced and/or the components must be modified to interface and operate within the greater system, as well as configured – positioned and sequenced in construction order, etc. within a single step.

  180. 180
    gpuccio says:

    Origenes:

    I would like to point at a few passages from the promoter paper, to show how the authors are very correctly using and applying the main concepts of ID theory.

    To systematically study the evolution of de novo promoters, one should start from non-functional sequences.

    For such genomes, random sequences can serve as a null model when testing for functionality without introducing biases or confounding factors due to deviating from the natural GC content of the studied genome.

    The number of mutations needed in order to change a random sequence into a functional promoter is not clear. Especially in experimental and quantitative terms, the question is how many mutations does one need in order to make a functional promoter, starting from a random sequence of a specific length? This question can be addressed directly by experimental evolution.

    Substantial promoter activity can typically be achieved by a single mutation in a 100-base sequence, and can be further increased in a stepwise manner by additional mutations that improve similarity to canonical promoter elements. We therefore find a remarkable flexibility in the transcription network on the one hand, and a tradeoff of low specificity on the other hand, with interesting implications for the design principles of genome evolution.

    Emphasis mine.

    The emphasis on the “low specificity” is important. This is a low specificity result, and it certainly gives some flexibility. But flexibility requires control. It is, as correctly stated, a “tradeoff”:

    Tuning the promoter recognition machinery to such a low specificity so that one mutation is often sufficient to induce substantial expression is crucial for the ability to evolve de novo promoters. If two or more mutations were needed in order to create a promoter, cells would face a much greater fitness-landscape barrier that would drastically reduce their ability to evolve the promoters de novo.

    And:

    Setting a low threshold for functionality, on one hand, while eliminating the undesired off-target instances on the other hand, makes a system where new beneficial traits are highly accessible without enduring the low-specificity tradeoffs.

    Emphasis mine.

    To broadly represent the non-functional sequence space, we used random sequences (generated by a computer) with equal probabilities for all four bases

    This experimental observation was therefore consistent with the expectation that a random sequence is unlikely to be a functional promoter.

This is correct. The evolution of the promoter required at least one specific mutation, and that required many passages, and therefore some probabilistic resources (which, unfortunately, cannot be exactly computed from the data in the paper: this is the only minor flaw in it, IMO). IOWs, they had to test quite a number of states before finding the specific functional one-nucleotide mutations.

As said before, it is a complex result, but not very complex at all. Perfectly in the easy range of a bacterial system. The basic function has a 2-bit complexity, but even that simple result requires some probabilistic resources. We must remember that the rate of mutations is about 10^-9 per replication per site.
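A rough sketch of the waiting-time logic (the population figure and the assumption that only one of the three possible substitutions at the site is functional are mine, used only for illustration, not data from the paper):

```python
# Sketch of the waiting-time logic (assumptions flagged below, not the paper's data):
# with ~1e-9 mutations per site per replication, and only 1 of the 3 possible
# substitutions at that site assumed functional, roughly how many replications
# are needed before the specific mutation is expected to appear?
mu_per_site = 1e-9                  # assumed mutation rate per site per replication
p_specific = mu_per_site / 3        # only one of three substitutions is the functional one

expected_replications = 1 / p_specific
print(f"expected replications ~ {expected_replications:.1e}")   # ~3e9

# An overnight culture can easily contain ~1e9 to 1e10 cells (assumed figure),
# so one or a few growth cycles already supply these probabilistic resources.
```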

    Each mutation was inserted back into its relevant ancestral strain, thus confirming that the evolved ability to utilize lactose is due to the observed mutations.

    A very important control, that is rarely found in similar experiments. Very good! 🙂

    Next, we aimed to determine the mechanism by which these mutations induced de novo expression from a random sequence.

    Correct. Understanding the mechanisms is fundamental! 🙂

    The lab evolution results from RandSeq1, 2, and 3 indicate that de novo promoters are highly accessible evolutionarily, as a single mutation created a promoter motif that enabled growth on lactose, suggesting that a sequence space of ~100 bases might be sufficient for evolution to find an active promoter with one mutational step.

    This is perfect ID logic. This is the way to test hypotheses about functional information and functional landscapes.

The only point that could be misleading is the frequent reference to “a sequence space of ~100 bases”. This is technically correct, because they used sequences 103 nucleotides long, but it is misleading, because the functionally relevant sequences are much shorter, corresponding to the consensus sequence, essentially 6 + 6 nucleotides, and maybe a few more at other positions.

    So, the real sequence space is essentially the sequence space of 12 nucleotides, 4^12, 16.8 million states, 24 bits.

    We must remember that the essential function of this consensus sequence is to allow the binding of the RNA polymerase.

    These random sequences (generated in Matlab) were used as starting sequences for promoter evolution because they represent the non-functional sequence space, without biases, as they contain no information.

    Emphasis mine.

This is very interesting. Here, they are using the concept of functional information, without even specifying it! Of course they are speaking of functional information, when they say: “they contain no information”. 🙂

  181. 181
    OLV says:

    gpuccio,
    Is the promoter paper about what is called microevolution?
    Thank you.
    Óscar Luis

  182. 182
    OLV says:

    gpuccio,

    Regarding the last piece of text you quoted:

    “…they represent the non-functional sequence space, without biases, as they contain no information.“

Can we say that any random sequence contains a certain amount of the so-called Shannon information?

Does the expression “the nonfunctional” in the quoted text serve as an implicit qualifier to the last word “information” in the same sentence?

    Thank you.
    Oscar Luis

  183. 183
    Origenes says:

    GPuccio @173 @181

Thank you for your comments on the paper “Random sequences rapidly evolve into de novo promoters”, by A.H. Yona et al.

    GP:
The only point that could be misleading is the frequent reference to “a sequence space of ~100 bases”. This is technically correct, because they used sequences 103 nucleotides long, but it is misleading, because the functionally relevant sequences are much shorter, corresponding to the consensus sequence, essentially 6 + 6 nucleotides, and maybe a few more at other positions.

    So, the real sequence space is essentially the sequence space of 12 nucleotides, 4^12, 16.8 million states, 24 bits.

This was most helpful. As you have often argued in your OPs, this is well within the reach of RV & NS.

  184. 184
    gpuccio says:

    OLV:

    Is the promoter paper about what is called microevolution?

    Yes, definitely.

    A functional transition of 1 nucleotide + 1 nucleotide optimization is much simpler than, say, penicillin resistance, where you need 1 AA + a few AA optimization.

    One AA is 4.3 bits of information, while one nucleotide is only 2 bits.

    So, this is really a simple transition.

Can we say that any random sequence contains a certain amount of the so-called Shannon information?

Yes, of course. That’s why I say that they are implicitly speaking of functional information: the set of functional sequences that will implement the function of providing a promoter. This is pure ID theory.
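A small sketch may make the distinction explicit (this is my illustration, not code or figures from the paper): a random sequence maximizes Shannon information, while its functional information stays at zero until a function is defined and its target space measured.

```python
# Illustration (a sketch, not from the discussion): a random 103-nt sequence
# carries maximal Shannon information (~2 bits per base under a uniform
# base-four alphabet) yet zero *functional* information until a function is
# defined and the target space measured.
from math import log2

length = 103
shannon_bits = length * log2(4)          # 2 bits per nucleotide -> 206 bits
print(f"Shannon information of a random {length}-nt sequence: {shannon_bits:.0f} bits")

# Functional information, by contrast, is -log2(|target space| / |search space|)
# for an explicitly defined function; for these random starting sequences the
# authors treat it as zero because no function has (yet) been implemented.
```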

  185. 185
    gpuccio says:

    Origenes:

    This was most helpful. As you have often argued in your OP’s, this is well within the reach of RV & NS.

    Yes, it is! As I have argued at #185, it’s much easier than penicillin resistance.

    Functions linked to nucleotide sequences are in base four. Therefore the combinatorics is less extreme, as related to sequence length.

    12 nucleotides is a search space of 24 bits.

    12 AAs is a search space of 52 bits.

    That’s a huge difference!
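For the record, the comparison can be reproduced with a two-line computation, assuming uniform 4-letter and 20-letter alphabets:

```python
# Concrete version of the comparison (uniform alphabets assumed):
from math import log2

nt_bits_per_symbol = log2(4)     # 2.0 bits per nucleotide
aa_bits_per_symbol = log2(20)    # ~4.32 bits per amino acid

print(f"12 nt : search space of {12 * nt_bits_per_symbol:.0f} bits")   # 24 bits
print(f"12 AA : search space of {12 * aa_bits_per_symbol:.0f} bits")   # ~52 bits
```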

    By the way, have you noticed that the paper is about computing, by experiment and math, the probabilities of generating one specific functional target, even if a simple one?

    Is the paper fatally flawed as an example of TSS fallacy? Didn’t the reviewers understand that? 🙂

  186. 186
    Nonlin.org says:

    Origenes@162

    Nonlin: “It’s consistent” means absolutely nothing. Fact is, you cannot say FOR SURE.

    This is where you go wrong. Given a large enough set, if some result is consistent with a random production, then this obviously MEANS (yes it does mean something) that we cannot exclude the possibility of random production — even though it does not provide a basis to be sure. On the other hand, given a large enough set, if a result is not consistent with a random production, then it tells us also something.
    Question to Nonlin: what can that be?

    So you agree one “cannot say FOR SURE” (rephrased as “cannot exclude the possibility of random production”) but then claim I am wrong? Where’s your Logic, amigo?

    Your problem is that Darwinistas illogically claim “randomness” left and right when in fact one “cannot say FOR SURE” is the most you should claim.

    And when you see a pattern such as all biological patterns, you can calculate the probability of that pattern if it were random. Guess what? Those probabilities are almost always zero indicating non-randomness (that’s why this OP discussing probabilities makes zero sense).

    Is anyone surprised that DNA / kidney / flowers / etc. shape is non-random? How can they “arise” from “random” mutations? Total nonsense.

Truth Will Set You Free @ 166
    You can’t be taken seriously with unsubstantiated claims.

  187. 187
    DATCG says:

    OLV @ 153.

Yep, saw that citation, thanks 🙂

  188. 188
    DATCG says:

    Gpuccio @167,

    You are to be commended for your patience in explanation once again.

  189. 189
    DATCG says:

    Origenes,
    interesting paper, thanks for the link.

I think the “trade-off” they speak of is flexibility, in that if it were too specific, gene expression might be too limited for bacteria?

    And then they state:

    Further work will be necessary to determine whether this flexibility in transcription is also present in higher-organisms and in other recognition processes.

That might be an interesting look. My initial thought is that “flexibility in transcription” for gene expression is conditional and context-dependent in higher organisms, with more regulatory control limitations than in bacteria.

    I was thinking of color for polar bears.
    But a quick search turned up “different fur pigment” for rabbit. Himalayan rabbits! 🙂 ha! Along with other examples of promoter and transcription…

    http://ib.bioninja.com.au/high.....ssion.html

    Control Elements

    The DNA sequences that regulatory proteins bind to are called control elements

    Some control elements are located close to the promoter (proximal elements) while others are more distant (distal elements)
    Regulatory proteins typically bind to distal control elements, whereas transcription factors usually bind to proximal elements
    Most genes have multiple control elements and hence gene expression is a tightly controlled and coordinated process

    The environment of a cell and of an organism has an impact on gene expression

    Changes in the external or internal environment can result in changes to gene expression patterns

    Chemical signals within the cell can trigger changes in levels of regulatory proteins or transcription factors in response to stimuli.

    This allows gene expression to change in response to alterations in intracellular and extracellular conditions

A prescriptive adaptability: flexible when needed, but still under control.

    There are a number of examples of organisms changing their gene expression patterns in response to environmental changes:

    Hydrangeas change colour depending on the pH of the soil (acidic soil = blue flower ; alkaline soil = pink flower)
The Himalayan rabbit produces a different fur pigment depending on the temperature (>35°C = white fur ; <30°C = black fur)

    Humans produce different amounts of melanin (skin pigment) depending on light exposure

    Certain species of fish, reptile and amphibian can even change gender in response to social cues (e.g. mate availability)

    Maybe Gpuccio can add to this if I’m going in wrong direction. Obviously, there’s a difference from bacteria to flowers to rabbit’s survival in the cold. But I’m thinking in Eukaryotes, the regulatory system is more tightly controlled than in bacteria?


  190. 190
    gpuccio says:

    DATCG and Origenes:

    I think the tradeoff they are referring to is between:

    a) Specificity

    and

    b) Evolvability

Keeping a low specificity at the level of the promoter makes it easier to evolve de novo promoters at new sites by RV, but at the same time makes the function less specific, and therefore allows for an easier generation of “undesired targets”, which have to be eliminated:

The rapid rate at which new adaptive traits appear in nature is not always anticipated, and the mechanisms underlying this rapid pace are not always clear. As part of the effort to reveal such mechanisms [59], our study suggests that the transcription machinery was tuned to be “probably approximately correct” [60] as means to rapidly evolve de novo promoters. Setting a low threshold for functionality, on one hand, while eliminating the undesired off-target instances on the other hand, makes a system where new beneficial traits are highly accessible without enduring the low-specificity tradeoffs. Further work will be necessary to determine whether and how similar principles affect the regulatory network and protein–protein interaction network in bacteria as well as in higher organisms.

    Moreover, the authors are not suggesting in any way that transcription regulation is not complex in bacteria. They are just saying that the gross role of the promoter as a site for binding of RNA polymerase and transcription initiation is rather simple.

    Regulation of transcription takes place at many other complex levels: TFs, enhancers, chromatin states, and so on.

    Moreover, they also recognize that the wildtype promoter has probably a more complex role in regulation, not present in the randomly evolved form:

    Despite generating expression levels similar to the WT lac promoter, the promoters evolved in our library are of very low complexity, as most of the activating mutations involved no additional factors but the two basic promoter motifs. Although the evolved promoters likely have no regulation, we hypothesize that such crude promoters might play an important role in the evolution of the transcriptional network, as newly activated genes do not necessarily require the regulated/induced expression in order to confer significant advantage. Furthermore, such stripped down promoters can serve as an evolutionary stepping-stone until regulation evolves, perhaps also by stepwise point mutations.

    Emphasis mine.

    So, the complexity of the wildtype is certainly higher, because it has added regulatory functions. The crude promoters that evolved here can only implement the binding site for RNA polymerase.

    And please, note the “perhaps” in the last sentence. The authors are certainly not fools! 🙂

  191. 191
    DATCG says:

    And to point out, promoters upstream of the gene – intergenic regions – once known as “JUNK” DNA and still referenced as such.

    I think these intergenic regions will continue to turn up functional control elements.

    So, expressions can be turned on/off rapidly by a single element = functionally designed conditional control elements.

These types of simple, conditional control elements are utilized constantly for the programming of variable designed outcomes based upon variable inputs. A single byte in a conditional table can kick off a different “expression” or subroutine and outcome.

  192. 192
    DATCG says:

    Gpuccio @191,

    Thanks for follow-up.

    Evolvability:

    The last sentence in your quote, referenced in the paper is interesting as I’ve always thought regulatory controls need to be in place, not after the fact catching up…

    Furthermore, such stripped down promoters can serve as an evolutionary stepping-stone until regulation evolves, perhaps also by stepwise point mutations.

    noting: “perhaps” 😉

    I have to check back in later. Great OP as usual.

I think your explanations on TSS are clear, and your shooting down of the Deck of Cards fallacy in #859 of the Ubiquitin OP excellent. I’m not sure there’s much more you can say to someone on that subject, if they cannot comprehend your well written explanation.

    But evidently it has to be continually shot down.


  193. 193
    Origenes says:

    GPuccio DATCG @

    “Random sequences rapidly evolve into de novo promoters”. This title is misleading, since, as it turns out, only a minor part of the random sequences (12 nucleotides; see #181) is being ‘evolved’.
    And I suspect that it is the title that got the participants at TSZ going.

Perhaps it is important to point out a piece of folklore among scientists who are sympathetic to evolution, namely, the choosing of misleading titles. This long-standing tradition was set off by Darwin when he opted for the title “On the Origin of Species.”

    A small anecdote: more than a decade ago, when I was blissfully unaware of the existence of UD, I had convinced myself that there could not exist a step-by-step evolutionary explanation for snake fangs. First a venom gland and no delivery system or vice versa? I was sure that this was impossible.
    Then, in 2008, I was hit with the following (misleading) titles:

    Snake-Fang Evolution Mystery Solved — “Major Surprise” (National Geographic)

    &

    Evolving Snake Fangs, by PZ Myers at Panda’s Thumb.
    PZMyers: I keep saying this to everyone: if you want to understand the origin of novel morphological features in multicellular organisms, you have to look at their development.

    Both articles are based on the paper “Evolutionary origin and development of snake fangs”, by F.J.Vonk et al, 2008. And, yes, this paper also carries a misleading title, simply because the paper is not about the evolutionary origin of snake fangs! Nowhere in the paper is an attempt to describe a step-by-step evolutionary process how a snake fang could evolve.
    So what is the paper about?
    Livescience.com explains in an article with the misleading title “How Snakes Got Their Fangs”.

    To figure out how both types of snake fangs evolved from non-fanged species [<<< extremely misleading!], Vonk and his colleagues looked at fang development in 96 embryos from eight living snake species.

    The team’s analyses showed that the front and rear fangs develop from a separate teeth-forming tissue at the back of the upper jaw.
    “The uncoupled rear part of the teeth-forming tissue evolved in close association with the venom gland, thereafter forming the fang-gland complex,” Vonk said. “The uncoupling allowed this to happen, because the rear part of the teeth-forming tissue did not have constraints anymore from the front part.”

    Aha! That’s all folks. No step-by-step explanation of the snake venom system at all. Zero. Zip. The ‘explanation’ by Vonk is that “it was allowed” …

    – – – – –
    Ironically in the same article the writer of the paper got completely carried away:

    “The snake venom system is one of the most advanced bioweapon systems in the natural world,” said lead researcher Freek Vonk of Leiden University in the Netherlands. “There is not a comparable structure as advanced, as sophisticated, as for example a rattlesnake fang and venom gland.”

  194. 194
    gpuccio says:

    Origenes:

    Yes, the title is probably misleading, but in general I would commend the article as a very good example of research.

It asks the right questions, and gives the right answers. It is precise in the explanation of the problem, in the description of data and results, and in the discussion.

    As already said, I would have appreciated some more details about the procedures of population treatment and expansion, and maybe artificial selection, so that the number of total mutations and the mutation rate could be more explicitly considered. However, I will read again the whole paper with more time and attention to see if those data can be gathered from what they say.

    All considered, this is a very good paper. I think I will quote it often in my future discussions.

    As for TSZers, I think they get excited for the wrong things all the time! 🙂

  195. 195
    gpuccio says:

    Origenes and DATCG:

    Here are a few papers about transcriptional regulation in prokaryotes.

    This is about the role of TFs (activators and repressors):

    An overview on transcriptional regulators in Streptomyces.

    https://www.ncbi.nlm.nih.gov/pubmed/26093238

    Abstract:

    Streptomyces are Gram-positive microorganisms able to adapt and respond to different environmental conditions. It is the largest genus of Actinobacteria comprising over 900 species. During their lifetime, these microorganisms are able to differentiate, produce aerial mycelia and secondary metabolites. All of these processes are controlled by subtle and precise regulatory systems. Regulation at the transcriptional initiation level is probably the most common for metabolic adaptation in bacteria. In this mechanism, the major players are proteins named transcription factors (TFs), capable of binding DNA in order to repress or activate the transcription of specific genes. Some of the TFs exert their action just like activators or repressors, whereas others can function in both manners, depending on the target promoter. Generally, TFs achieve their effects by using one- or two-component systems, linking a specific type of environmental stimulus to a transcriptional response. After DNA sequencing, many streptomycetes have been found to have chromosomes ranging between 6 and 12Mb in size, with high GC content (around 70%). They encode for approximately 7000 to 10,000 genes, 50 to 100 pseudogenes and a large set (around 12% of the total chromosome) of regulatory genes, organized in networks, controlling gene expression in these bacteria. Among the sequenced streptomycetes reported up to now, the number of transcription factors ranges from 471 to 1101. Among these, 315 to 691 correspond to transcriptional regulators and 31 to 76 are sigma factors. The aim of this work is to give a state of the art overview on transcription factors in the genus Streptomyces.

    This is extremely interesting, about the role of DNA loops:

    DNA Looping in Prokaryotes: Experimental and Theoretical Approaches

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3591992/

    ABSTRACT:

    Transcriptional regulation is at the heart of biological functions such as adaptation to a changing environment or to new carbon sources. One of the mechanisms which has been found to modulate transcription, either positively (activation) or negatively (repression), involves the formation of DNA loops. A DNA loop occurs when a protein or a complex of proteins simultaneously binds to two different sites on DNA with looping out of the intervening DNA. This simple mechanism is central to the regulation of several operons in the genome of the bacterium Escherichia coli, like the lac operon, one of the paradigms of genetic regulation. The aim of this review is to gather and discuss concepts and ideas from experimental biology and theoretical physics concerning DNA looping in genetic regulation. We first describe experimental techniques designed to show the formation of a DNA loop. We then present the benefits that can or could be derived from a mechanism involving DNA looping. Some of these are already experimentally proven, but others are theoretical predictions and merit experimental investigation. Then, we try to identify other genetic systems that could be regulated by a DNA looping mechanism in the genome of Escherichia coli. We found many operons that, according to our set of criteria, have a good chance to be regulated with a DNA loop. Finally, we discuss the proposition recently made by both biologists and physicists that this mechanism could also act at the genomic scale and play a crucial role in the spatial organization of genomes.

    And, finally, regulatory RNAs:

    When eukaryotes and prokaryotes look alike: the case of regulatory RNAs

    https://academic.oup.com/femsre/article-abstract/41/5/624/4080139?redirectedFrom=fulltext

    Abstract:

    The discovery that all living entities express many RNAs beyond mRNAs, tRNAs and rRNAs has been a surprise in the past two decades. In fact, regulatory RNAs (regRNAs) are plentiful, and we report stunning parallels between their mechanisms and functions in prokaryotes and eukaryotes. For instance, prokaryotic CRISPR (clustered regularly interspaced short palindromic repeats) defense systems are functional analogs to eukaryotic RNA interference processes that preserve the cell against foreign nucleic acid elements. Regulatory RNAs shape the genome in many ways: by controlling mobile element transposition in both domains, via regulation of plasmid counts in prokaryotes, or by directing epigenetic modifications of DNA and associated proteins in eukaryotes. RegRNAs control gene expression extensively at transcriptional and post-transcriptional levels, with crucial roles in fine-tuning cell environmental responses, including intercellular interactions. Although the lengths, structures and outcomes of the regRNAs in all life kingdoms are disparate, they act through similar patterns: by guiding effectors to target molecules or by sequestering macromolecules to hamper their functions. In addition, their biogenesis processes have a lot in common. This unifying vision of regRNAs in all living cells from bacteria to humans points to the possibility of fruitful exchanges between fundamental and applied research in both domains.

    Very interesting. 🙂

  196. 196
    Nonlin.org says:

    gpuccio@167

    You can’t fight Darwinism while uncritically accepting their nonsensical myths. Since when is science separate from philosophy/religion?!? There’s a very good reason why Newton wrote “Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy)” and why current advanced degrees in science are called PhD. Furthermore, here is logical proof that this separation is artificial and nonsensical: http://nonlin.org/philosophy-religion-and-science/

    Science = Observation + Assumptions, Facts Selection, Extrapolations, Interpretations…
    Assumptions, Facts Selection, Extrapolations, Interpretations… = Sum of Axiomatic Beliefs
    Sum of Axiomatic Beliefs = Religion …therefore,
    Science = Observation + Religion

    a) Interesting, but your FSI definition seems dependent on a particular intelligent agent and a very specific function. And “complex” is just having FSI above the threshold? Hmm, what threshold, and what’s the point of all this? The answer is probably hidden somewhere in your many posts and comments, but that’s not very helpful. See f) questions too.
    b) See first paragraph above. Nature doesn’t come with laws – they are written by humans based on what we observe. In addition, we continue to rewrite “the laws” based on new observations. If you disagree with “design = regularity” you should explain how you differentiate between the two. Just because “once the laws exist, there is no need for any conscious intervention for them to operate”, doesn’t mean “the results of laws are not designed”. If I design and set up a widget making machine, you better believe those widgets have been designed by me – the creator of the machine that makes them under my laws.
c) The watch can be nonfunctional (as in watch sculptures) and will still show design. Again, the regularities of the shapes and materials are enough.
    d) See b) Also, “action of weather” is just an intermediate step, not the ultimate source of the patterns. And the point was: if you see a pattern you know for sure it’s not just random – there’s a regularity behind it which is indistinguishable from design. Even chaos theory patterns are the result of a designed system: https://en.wikipedia.org/wiki/Chaos_theory. I am pretty sure no one can explain why – based on the known laws of physics – dunes, hurricanes, galaxies, etc. have these precise patterns and not other patterns.
    e) We’re assuming the designer is invisible and you can only determine design based on the output. Yes, something can be designed to look random, but I don’t think we’re concerned with that scenario. Fact is, we see a lot of patterns that are clearly nonrandom especially in biology. In fact I can’t think of anything in nature that can be attributed 100% to randomness. Even the atmospheric noise used as random generator has a deterministic component in its statistics and boundaries.

    Yes, I got your comment 112. You refer to chaos theory. That’s compatible with my “randomness is ONLY a theoretical concept”, so why do you disagree, and how would you prove it wrong? I don’t agree with “all phenomena that we describe as random are completely deterministic”. How would you know? And if quantum events are an exception, and of course all particles are subject to quantum events, then of course all systems are NOT deterministic.
f) Metal boats don’t sink, nor do the Gerridae insects, etc. Wood doesn’t sink either because of its designed structure. Shape matters! Your example fails not just because of these counterexamples but because it substitutes the intrinsic properties of materials for objects. But materials are not objects. You use “complex” and “functional” again, but (without reading all your posts) it’s not clear what these words mean. “Functional” seems contingent on the needs of an agent and an arbitrary “search space” and arbitrary “good stones”. Is “complex” = 500 bits = minus log (base 2) of (Target space/Search space)? Again, maybe you got something there, but you really need to do a better job clarifying and simplifying.
    g) Maybe after further clarifications for a) and f). My point was that if (as we both agree) P(random) =~ 0, then we generally stop talking in probabilistic terms. Example: we say “sunrise will be at 7am tomorrow” not “there’s a 99.999(9)% chance the sunrise will be at 7am tomorrow”

  197. 197
    gpuccio says:

    Nonlin.org:

    Thank you for your very reasonable comments at #197. I appreciate that you are really trying to understand my points. I am trying to do the same with yours.

    I think that we agree on many important things, and disagree on a few equally important things. But trying to understand each other’s views is always a good thing.

    So, I will answer your points in detail, in more than one post, so that each point can be adequately discussed.

In this first post, I will just offer two general, important premises.

    a) First premise. Our epistemologies are probably different, but not necessarily too different. I have had a look at your page about science, philosophy and religion, and I think that I would agree with most of the things you say there.

    So, I will just summarize here my views about that issue, and let you decide how different they are from yours.

Science, philosophy and religion are three different modalities of human cognition. I fully agree that they are strongly connected, and that they are only different facets of our search for truth. But I think that they have important specificities that allow us to distinguish between them, and to recognize specific fields of application for each of them, even if those fields of course partially overlap.

I strongly believe that the methods and procedures of each of the three types of cognition are linked to its specific field of application. When each of the three types of cognition correctly applies its procedures to its field, the results are of great value, and they help and support the other types of cognition. Instead, when one type of cognition tries to apply the procedures of another type of cognition to its specific field, the results are very bad, and they simply generate cognitive confusion.

So, good science supports philosophy and religion, and good philosophy (or religion) does support science. But bad science creates problems for philosophy and religion, and bad philosophy (or religion) is a real problem for science.

    For example, philosophy of science and epistemology are extremely important for science, of course, but they are philosophical issues, not scientific issues. This “first premise”, therefore, is a philosophical argument, not a scientific one.

    That said, I will remind here that ID theory is a scientific theory, not a philosophy. This is important for the discussion that will follow.

    b) Second premise. This point is fundamental for all the discussion, so I will try to be as clear as possible.

    ID theory, at least in its biological aspect, is not about design in general: it is about design detection. Therefore, ID is not really interested in design in general, it is only interested in detectable design.

    This is very important. The purpose of ID is not to detect all forms of design, or to exclude design. ID theory can do neither of those two things.

    It cannot detect all designs, because many designs are not detectable.

    It cannot ever exclude design, because we cannot exclude design that is undetectable.

    So, what is the purpose of ID, as applied for example to biological objects?

    It is to detect objects for which we can affirm a design origin, with reasonable empirical safety.

    I will exclude from the following discussion cosmological ID. Not because it is not a valid form of ID. It is, definitely. But because it is a reasoning about the design of the whole universe, and the things that I will say here do not apply to that scenario. Cosmological ID is a very valid argument, and I do believe that it demonstrates, reasonably, that the whole universe is designed, especially in its very scientific form based on fine-tuning of the fundamental constants of the universe. But biological ID has very specific aspects, that are different from those of cosmological ID. And it’s those aspects that I am going to discuss.

    So, what do we mean by “design detection”, if we exclude the cosmological problem?

    Design detection is a concept that applies to specific and well defined physical systems. It is never a generic statement. We detect design in specific objects, in a specific and well defined physical system.

    So, let’s say that we define a physical system S, and we define two states of it, A and B, and the time window between A and B.

    So, we can say that the system S evolves from state A to state B in the time window t.

    Now, let’s say that some configuration F arises in some object included in S during the time window t. IOWs, configuration F was not in A, and it is observed in B.

    Configuration F is not a generic configuration: it is a functional configuration. IOWs, the object with configuration F can implement a well defined function. I will deal with these points in detail later.

    Now, the point is: if we assume that system S evolves by the laws of nature that we understand, and that its configurations obey some probability distribution that we can effectively use to describe the system and its evolution, is configuration F (and the associated functionality) likely enough in the appropriate probability distribution? Or is it an extremely unlikely result?

    IOWs, if we draw a binary partition in the space of all possible configurations that system S can reach according to known laws of nature, is the target space of all the configurations that implement F extremely small, let’s say infinitesimal, if compared to the whole search space?

    What we are trying to assess here is not if F could be designed, or if it could be random. We are trying to assess if we can reasonably be sure that it is designed. IOWs, that some conscious intelligent and purposeful agent intervened in system S, during the time window t, to generate configuration F out of his conscious representations and by his intentional acts, changing the spontaneous evolution that system S would have had according to known natural laws.

    To be more clear, let’s say that our system S is a beach, with its connected events, like wind, rain and so on.

    From one day to the following, we observe that a small heap of sand appears on the beach, that was not there the day before.

    Now, let’s say that we had placed a camera to observe the beach during the last 24 hours.

    Let’s say that we see in the camera recording one of the two following things:

    a) The wind moves the sand, and at some point the heap is formed.

    b) At some point, a child comes to the beach, and builds the heap with his hands. Then he goes away.

    In this case, we have direct observation of the process, by the camera. We say that, in case a), the heap is not a designed object in that system: the wind is part of the system, and we have no reason to believe that it is a conscious intelligent being.

    In case b, however, we are sure that the heap is designed, because a child is a conscious intelligent being.

    But, of course, in the cases where we apply ID theory we have no camera, and no direct observation of the process, or of the designer.

    So, if we just observe the heap, can we infer design? Is the heap a configuration that exhibits detectable design?

    Of course not. The point is that the heap could have very reasonably arisen from the action of the wind, even if it was instead built by a child. Design, even if present, is not detectable.

    But let’s say that the object we observe on day two is not a heap, but a Shakespeare poem written in the sand, by the shoreline. In this case, we do infer design, and correctly.

    Why? Because here we have a configuration F which has a very specific function (meaning), and is utterly unlikely as a result of waves or wind or any other component of system S.

    So, again, we are interested only in detectable design, not design in general. And functional complexity is the tool to detect design when it is detectable.

    More in next post.

  198. 198
    gpuccio says:

    Nonlin.org:

    Now, your first point:

    a) You say:

    Interesting, but your FSI definition seems dependent on a particular intelligent agent and a very specific function. And “complex” is just having FSI above the threshold? Hmm, what threshold, and what’s the point of all this? The answer is probably hidden somewhere in your many posts and comments, but that’s not very helpful.

    OK, I will dig up the answer for you and offer it briefly here. Indeed, there are two different answers.

    1) Yes, my definition of FSI does use “a particular intelligent agent and a very specific function”. But it does not depend on them.

    Why? Because any observer can define any function, and FSI for that function can be measured objectively, once the function is objectively and explicitly defined. IOWs, I can measure FSI for any explicitly defined function that the object can implement.

    So, is there an objective FSI for the object? Of course not. But there is an objective FSI for each explicitly defined function that the object can implement.

    Now, please, consider the following point with great attention, because it is extremely important, and not so intuitive:

    If we can show even one single explicitly defined function that the object can implement as defined, and which is complex (see next point for that), we can safely infer that the object is designed.

    Excuse me if I insist: stop a moment and consider seriously the previous statement: it is, indeed, a very strong statement.

    And absolutely true.

    One single complex function implemented by an object is enough to infer design for it.

    Another way to say it is that non designed objects cannot implement complex functions. Never.

    In next post, I will discuss the issue of complexity, and of the related threshold.

  199. 199
    gpuccio says:

    Nonlin.org:

    The second part of your first point:

    2) Complexity. We have seen that the FSI linked to a function is essentially the number of specific bits of information that are necessary to implement the explicitly defined function.

    This is, of course, a continuous variable, and it corresponds to -log2 of the target space/search space ratio.

    We can derive a binary variable from the continuous variable by a threshold, so that we have: complex functional information yes/no.

    What threshold? It’s rather simple. The purpose of our reasoning is to ascertain that our functional configuration is so unlikely in the system that we can safely reject the null hypothesis that the observed effect (the function) can reasonably emerge in the system as a random result.

    Therefore, the threshold must be appropriate for the system.

    The property of the system that we have to consider is its probabilistic resources: IOWs, the number of attempts (configurations) that can be tried (reached) in the system, in the allotted time window.

    The binomial distribution is extremely useful to compute probabilities of success with repeated attempts.

    For example, if some result has a probability of 0.001 in a single attempt, the probability of observing at least one such result in 10 attempts will be slightly less than 1%, but the probability with 200 attempts is about 18%.
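    These figures can be checked directly with the complement rule, P(at least one success in n attempts) = 1 - (1 - p)^n. A minimal sketch in Python, just as an illustration:

```python
# Minimal sketch: probability of at least one success in n independent
# attempts, each with per-attempt probability p (the binomial P(X >= 1)).

def p_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

print(p_at_least_one(0.001, 10))    # ~0.00996, slightly less than 1%
print(p_at_least_one(0.001, 200))   # ~0.181, about 18%
```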

    Therefore, the probabilistic resources of the system are very important.

    For a biological system on our planet, I think that 200 bits is a very appropriate threshold.

    However, in a general discussion, I usually stick to 500 bits, because that threshold is good even if we consider the probabilistic resources of the whole universe throughout its entire existence (it’s Dembski’s UPB).
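    To make the computation concrete, here is a minimal sketch of functional information as -log2 of the target space/search space ratio, compared with the two thresholds. The space sizes below are purely hypothetical numbers, chosen only for illustration:

```python
import math

# Minimal sketch: functional information in bits for an explicitly defined
# function, compared with the 200-bit and 500-bit thresholds discussed above.
# The target and search space sizes are hypothetical.

def functional_bits(target_size: int, search_size: int) -> float:
    return -math.log2(target_size / search_size)

search = 20 ** 150   # hypothetical: all sequences of 150 amino acids
target = 10 ** 120   # hypothetical: sequences implementing the defined function

bits = functional_bits(target, search)
print(f"{bits:.0f} bits")                       # ~250 bits for these toy numbers
print("above 200-bit threshold:", bits > 200)   # True
print("above 500-bit threshold:", bits > 500)   # False
```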

    More in next post.

  200. 200
    gpuccio says:

    Nonlin.org:

    Your second point:

    b) You say:

    See first paragraph above. Nature doesn’t come with laws – they are written by humans based on what we observe. In addition, we continue to rewrite “the laws” based on new observations. If you disagree with “design = regularity” you should explain how you differentiate between the two. Just because “once the laws exist, there is no need for any conscious intervention for them to operate”, doesn’t mean “the results of laws are not designed”. If I design and set up a widget making machine, you better believe those widgets have been designed by me – the creator of the machine that makes them under my laws.

    I agree that we write the laws, but we write them to explain, with the best of our understanding, regularities that are really present in nature. So, maybe nature does not come with laws, but it certainly comes with regularities. Our laws are our way to describe those regularities. Let’s say that they are human approximations of the “real law” that acts in nature.

    The idea is that our human approximations can certainly change and be made more precise, but there is no reason to believe that “the real law” changes at all.

    Of course I disagree with “design = regularity”. There is no necessary regularity in design. If I write a poem, it’s not that I aim at regularity: I aim at meaning.

    How do I “differentiate between the two”? It’s easy. If I see configurations that are fully explained by laws already existing in the system, or by an appropriate probability distribution which describes the system well, then I have no reason to infer detectable design. But if I observe complex functional information, I infer design.

    As I have tried to explain, in ID (excluding the cosmological application) we are not asking if the laws that we know to operate in nature are designed. That is a cosmological issue.

    We are asking if we are observing an object that cannot be explained by those laws, or by any reasonable probabilistic result, and requires instead an explicit intervention by a conscious being in the system and in the allotted time window to emerge.

    If system S at time A already includes a computer that is operating an existing software, all that the software can compute will be a result that can be explained in that system without any design intervention after the initial state A is set.

    But if we observe that some configuration arises in the system that cannot be computed by the resources that are already part of the system, whether those resources are designed (for example, the computer and the software) or not, then we infer a design intervention in the system in the time window.

    For example, let’s say that a Shakespeare sonnet emerges in the system during its transition from state A to state B, and that the computer included in the system at state A does not have the information to output that poem (it is not stored anywhere in its memory, and of course there is no realistic probability of deriving it from a computation).

    Then we infer design: someone had to introduce the FSI of the poem into the system, in the time window which goes from A to B.

    So, it’s not relevant here if the laws of the universe are designed or not: if I observe in system S a functional result that cannot be explained by the laws of the universe, and whose probability in the system is infinitesimal, then I can infer design in the system.

    More in next post.

  201. 201
    ET says:

    So Allan Keith makes an ignorant claim in comment 177, gets called on it (178) and runs away.

    Typical but still pathetic

  202. 202
    OLV says:

    gpuccio,
    excellent explanations! Serious textbook material.
    Thanks.

  203. 203
    gpuccio says:

    Nonlin.org:

    Your third point:

    c) You say:

    The watch can be nonfunctional (as in watch sculptures) and will still show design. Again, the regularities of the shapes and materials are enough.

    You are mentioning two different design inferences, for two different functions.

    Of course design can probably be inferred for regularities of shape and materials, and in that case all objects that exhibit some regularity in shape and material could be considered designed. But of course you have to define well what you mean by “regularity” here.

    But of course a watch is functional mainly because it measures time. Most of its functional complexity can be traced to measuring time: the specific choice of parts (there are many parts with regularities, but only some of them can be used to make a watch), and in particular their specific assemblage, and the tweaking of each part to be compatible with the others, and so on.

    There is no doubt that Paley intended this kind of functionality, when he chose a watch as his example of design inference.

    The inference for the watch based on its true function is much stronger than an inference for some well formed part that could be used for some generic purpose. A gear is most certainly a designed object, but a watch has much greater functional information, if we define the correct function for it.

  204. 204
    gpuccio says:

    OLV:

    Thank you! 🙂

  205. 205
    gpuccio says:

    Nonlin.org:

    Your fourth point:

    d) You say:

    d) See b) Also, “action of weather” is just an intermediate step, not the ultimate source of the patterns. And the point was: if you see a pattern you know for sure it’s not just random – there’s a regularity behind it which is indistinguishable from design. Even chaos theory patterns are the result of a designed system: https://en.wikipedia.org/wiki/Chaos_theory. I am pretty sure no one can explain why – based on the known laws of physics – dunes, hurricanes, galaxies, etc. have these precise patterns and not other patterns.

    Chaotic systems are fully deterministic systems. From the Wikipedia page:

    Small differences in initial conditions such as those due to rounding errors in numerical computation yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general.[2][3] This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved.[4] In other words, the deterministic nature of these systems does not make them predictable.[5][6] This behavior is known as deterministic chaos, or simply chaos.

    IOWs, they are fully deterministic, and they are not designed, in the sense that I have discussed. Again, I am not discussing if the laws that determine their form, which are the same laws of nature that we know, are designed or not. But there can be no doubt that, once those laws are accepted as part of the system, the results are determined.

    The only peculiar property of chaotic systems is that they have some physical properties that make the math that describes them highly sensitive to small differences in initial conditions. That’s why it is impossible to predict their behaviour, even if they are deterministic. And they cannot even be well described probabilistically. Special math is required to treat them satisfactorily.

    But they are, again, an example of necessity, not of design.

    Your last statement is rather vague. You say:

    “I am pretty sure no one can explain why – based on the known laws of physics – dunes, hurricanes, galaxies, etc. have these precise patterns and not other patterns.”

    I am not so sure as you are. I think that the known laws of physics explain pretty well why dunes and hurricanes are formed. There is no reason to explain a specific contingent pattern, just as there is no reason to explain some specific sequence of heads and tails that arises from one single sequence of coin tossing.

    Laws explain contingency in general, but of course we cannot “explain” each single contingent pattern, not because it is not possible, but because we have not precise knowledge of all the values of the different variables. That’s why we treat those deterministic systems probabilistically. As I have already explained to you at #112, also quoted at #167. I quote it again here:

    “Usual randomness just means that there is some system whose evolution is completely deterministic, but we can’t really describe its evolution in terms of necessity, because there are too many variables, or we simply don’t know everything that is implied.

    In some cases, such a system can be described with some success using an appropriate probability function. Probability functions are well defined mathematical objects, which can be useful in describing some real systems.

    A probabilistic description is certainly less precise than a necessity description, but when the second is not available, the first is the best we can do.

    A lot of empirical science successfully uses probabilistic tools.”

    Regarding galaxies, I would be cautious. Science certainly believes that they can be explained according to known laws, but as in all matters in astrophysics, nothing is really completely understood. There are models, of course, but models have a serious tendency to last only for a short time in astrophysics! 🙂

    So, I would be cautious. Anyway, if they can be explained by laws, we cannot infer design for them, even if we cannot explain each contingent form they assume.

  206. 206
    Origenes says:

    nonlin: So you agree one “cannot say FOR SURE” (rephrased as “cannot exclude the possibility of random production”) but then claim I am wrong?

    Yes, of course. Your claim that the outcome tells us NOTHING about the randomness of the production is 100% wrong. I have explained why this is so in #115, #137, #143 and #162.

    If solid evidence points to a murderer who is a female in her thirties with black hair and extraordinary surgical skills, but we “cannot say FOR SURE” who she is, then this is not the same, as you claim, as knowing NOTHING.

    Similarly, as explained, the outcome tells us a lot about the randomness of the production.

  207. 207
    Origenes says:

    Nonlin: Nature doesn’t come with laws – they are written by humans based on what we observe. In addition, we continue to rewrite “the laws” based on new observations.

    I strongly disagree, nature does come with laws. Nonlin’s second sentence reveals that he simply fails to distinguish between the law ‘an sich’ and our description of it. Surely, we did not write gravity, but, obviously, we attempt to describe it.

    Moreover, there is no ‘bottom-up’ explanation of the laws, as theoretical physicist Paul Davies wrote:

    Physical processes, however violent or complex, are thought to have absolutely no effect on the laws. There is thus a curious asymmetry: physical processes depend on laws but the laws do not depend on physical processes. Although this statement cannot be proved, it is widely accepted.

    If A does not depend on B, then A cannot be explained by B. Put another way: if the laws are explained bottom-up by fermions and bosons, then we would expect the laws to be prone to change — different circumstances different laws. But this is not what we find.

    Cosmologist Sean Carroll: There is a chain of explanations concerning things that happen in the universe, which ultimately reaches to the fundamental laws of nature and stops… at the end of the day the laws are what they are…

    Translation: We have no explanation for the laws. They are truly ‘fundamental’. We don’t know where they come from, we don’t know where they are, we don’t know how they cause things to happen.

    Cosmologist Joel Primack: What is it that makes the electrons continue to follow the laws?

    Indeed, what power compels physical objects to follow the laws of nature?

    Paul Davies: There has long been a tacit assumption that the laws of physics were somehow imprinted on the universe at the outset, and have remained immutable thereafter.

  208. 208
    gpuccio says:

    Nonlin.org:

    Your fifth point:

    e) You say:

    e) We’re assuming the designer is invisible and you can only determine design based on the output. Yes, something can be designed to look random, but I don’t think we’re concerned with that scenario. Fact is, we see a lot of patterns that are clearly nonrandom especially in biology. In fact I can’t think of anything in nature that can be attributed 100% to randomness. Even the atmospheric noise used as random generator has a deterministic component in its statistics and boundaries.

    Yes, I got your comment 112. You refer to chaos theory. That’s compatible with my “randomness is ONLY a theoretical concept”, so why do you disagree, and how would you prove it wrong? I don’t agree with “all phenomena that we describe as random are completely deterministic”. How would you know? And if quantum events are an exception, and of course all particles are subject to quantum events, then of course all systems are NOT deterministic.

    I think that the answers to these points have already been given.

    You say:

    “I don’t agree with “all phenomena that we describe as random are completely deterministic”. How would you know?”

    We observe all the time deterministic settings that produce outcomes that are well described by probability distributions, and that cannot be described in terms of necessity. Many deterministic variables that act independently generate probabilistic distributions.

    Look, I will give you a simple example from genetics.

    If we have a recessive trait, such as beta thalassemia, and we have two parents who want to have a child and who are both heterozygous for the trait, and you are giving genetic counseling, you cannot tell them if their future child will be healthy, heterozygous for the trait, or affected by the disease. Nobody can know that before conception; indeed, a prenatal diagnosis will be possible only some time after conception.

    But you can tell the parents about probabilities. Their children, if they had a lot of them, would more or less be distributed according to a very simple probability distribution: 25% healthy, 50% heterozygous, 25% homozygous (with the disease). Because this is a Mendelian trait.
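    A minimal simulation sketch of that Mendelian ratio (A for the normal allele, a for the thalassemia allele; just an illustration):

```python
import random

# Minimal sketch: two heterozygous (Aa) parents; each transmits one allele
# at random. Over many simulated children the 25% / 50% / 25% ratio emerges.

random.seed(1)
counts = {"AA": 0, "Aa": 0, "aa": 0}
for _ in range(100_000):
    child = "".join(sorted(random.choice("Aa") + random.choice("Aa")))
    counts[child] += 1

for genotype, n in counts.items():
    print(genotype, round(n / 100_000, 3))   # ~0.25 AA, ~0.50 Aa, ~0.25 aa
```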

    Now, let’s take a more complex trait: height. We know very well that there is a well verified relationship between the height of the parents and the height of the children. But this is not a Mendelian trait. It is a polygenic trait, one that is probably influenced by hundreds of different, independently transmitted genes.

    Moreover, non-genetic factors, like nutrition, or diseases, are also involved in the final outcome.

    All of these factors (the hundreds of independent genes, and the non-genetic ones) act deterministically to cause the height of each individual. But we cannot compute what the future height of an individual will be, because we don’t know all those variables.

    Still, the influence of the parents’ height can be factored in, and it gives some useful information.

    What happens when a variable like height is controlled by so many independent deterministic factors? It’s interesting. What happens is that the variable, in a population, assumes a normal distribution. That’s exactly what happens with height, and with many other similar biological variables.

    The normal distribution is just a mathematical object. And yet, it is the best tool that we have to describe, and analyze, this type of biological variable.
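    A minimal sketch of that point, with made-up numbers (200 small independent contributions around an arbitrary 170 cm baseline): the sum of many independent factors comes out approximately normally distributed.

```python
import random
import statistics

# Minimal sketch: a toy "height" built from many small independent
# contributions (hypothetical numbers). The population of such sums is
# approximately normal (central limit theorem).

random.seed(1)
population = [
    170 + sum(random.choice((-0.5, 0.5)) for _ in range(200))
    for _ in range(10_000)
]

print(round(statistics.mean(population), 1))   # ~170.0 cm
print(round(statistics.stdev(population), 1))  # ~7.1 cm (0.5 * sqrt(200))
```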

    These are just examples of how we know that deterministic systems can generate distributions of outcomes that are best described by probability distributions. Which is exactly my point.

    Your final argument is more interesting. You say:

    “And if quantum events are an exception, and of course all particles are subject to quantum events, then of course all systems are NOT deterministic.”

    Not exactly.

    First of all, quantum realities are essentially deterministic, because the wave function, which is the essential component of quantum theory, is completely deterministic.

    But, of course, there is also a probabilistic component, what is usually called the “collapse of the wave function”.

    These are all very controversial issues, as you probably know, at least in their interpretation.

    But my point is much simpler. It is true that everything that exists is, first of all, a quantum reality.

    But, when we describe macroscopic reality, quantum effects, in particular the probabilistic collapse of the wave function, can absolutely be ignored.

    Why? Because they are irrelevant, for all purposes.

    At the level of classical physics, regarding macroscopic objects, the effects of quantum events are not detectable. The probabilistic effects become necessity laws, and those laws work with remarkable precision and efficiency.

    At the level of particles, instead, quantum effects are extremely important.

    There are a few exceptions to this rule: there are macroscopic systems where quantum effects are important, and perfectly detectable.

    See, for example, this Wikipedia page:

    Macroscopic quantum phenomena

    https://en.wikipedia.org/wiki/Macroscopic_quantum_phenomena

    But these are exceptions. The rule is that almost always quantum effects have no importance at the macroscopic level. And the behaviour of macroscopic objects is deterministic, for all practical purposes.

    The interventions of consciousness on matter are a possible, interesting exception. If, as I (and many others) believe, the interface between consciousness and matter is at the quantum level, that would allow the action of consciousness to modify matter without apparently interfering with gross determinism. That would also explain how design takes place.

    But that is another story! 🙂

  209. 209
    Nonlin.org says:

    Origens@207

    You did not “explain” anything as you are terribly confused.

    Per your example, no one will convict on the basis of: “cannot exclude the possibility of random production”.

    The outcome only tells you something about a random process if you already assume the process is random. And that is circular logic.

    This repeat conversation is getting boring. If you don’t understand, so be it. I am done.

  210. 210
    Nonlin.org says:

    Origens@208

    What are you talking about? Newton’s laws of physics are wrong at the atomic level – an example of overturned “laws”. And “Central dogma of molecular biology” has also been proven wrong. All “laws” are formulated by humans based on their limited knowledge at the time.

    Who knows what else we will discover next that will overturn “the current laws”? You know next to nothing about gravity, so how would you know there even is such a thing as gravity?

    Your cosmologists might as well be astrologists. They’re no good for anything other than making up ridiculous nonsensical stories for the uninformed. Here’s an insider exposing their nonsense: https://backreaction.blogspot.com/

  211. 211
    Nonlin.org says:

    gpuccio@209

    Ok, so the Mendelian trait example is a classic. But height is not even a proper biologic measure because height changes all the time, not just during development, and because it is arbitrarily determined. Just as well you can sort by vertical reach or eye height (on or off tiptoes), etc. – these can be more important for survival than the standard measurement and will throw off your statistics. Also food/climate/parasites during development affect size at maturity. And when exactly is maturity?

    Again, you assume but do not prove (how could you?) that height is deterministic. Yes, it can be described statistically – I never claimed otherwise. So what? Sorry, I just don’t see the “determinism” claim being well supported. Why do you insist so much on determinism?

    How would you know “regarding macroscopic objects, the effects of quantum events are not detectable”? Say you have a double slit experiment and on the other side a number of scared rats that can see one photon (can they? Humans can) and take off in fear in different directions knocking down one domino set or another. That’s your quantum impact on macroscopic events.

    Yes, consciousness (don’t you mean Free Will?) would interfere with determinism for sure.

    Agree, we don’t have to solve any of these today 🙂

  212. 212
    Origenes says:

    nonlin: The outcome only tells you something about a random process if you already assume the process is random.

    Of course not. What’s wrong with you?
    There is a process A which can be random or not — we do not know. Now the outcome of process A can tell us two things:
    1. The outcome can be consistent with process A being random, in which case it tells us that process A could be random.
    2. The outcome can be inconsistent with process A being random, in which case it tells us that process A cannot be random.

    Either way, the outcome certainly tells us something about the randomness of process A — contrary to your false claim that it tells us nothing.
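    A minimal numerical sketch of point 2, with a deliberately simple made-up example (200 coin flips that all come up heads):

```python
# Minimal sketch: an outcome can be so improbable under a random model
# that the model is effectively ruled out.

p_fair_flip = 0.5
p_all_heads = p_fair_flip ** 200
print(f"{p_all_heads:.1e}")   # ~6.2e-61: observing 200 heads in a row tells us
                              # a great deal about the "fair random flips" model
```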

    —-

    nonlin: Newton’s laws of physics are wrong at the atomic level.

    As I explained to you already, Newton’s laws are, in fact, descriptions of laws. They are not the laws ‘an sich’.

    nonlin: … an example of overturned “laws”.

    Overturned descriptions of laws. Gravity itself was not overturned.
    As GPuccio wrote:

    Our laws are our way to describe those regularities. Let’s say that they are human approximations of the “real law” that acts in nature.

    The idea is that our human approximations can certainly change and be made more precise, but there is no reason to believe that “the real law” changes at all.

  213. 213
  214. 214
    gpuccio says:

    Nonlin.org:

    Look, you are free to think as you like. But it is really difficult to discuss if your arguments are of the kind:

    “height is not even a proper biologic measure because height changes all the time, not just during development and because it is arbitrarily determined”

    !!! What do you mean?

    Height is not a proper biologic measure???

    We measure height in all kinds of populations (OK, in neonates we measure length! 🙂 ). The values are gathered according to age, and means and percentiles and all kinds of statistical parameters can be derived from those values.

    If a child deviates strongly from the expected height curve, a disease can be suspected, and often demonstrated. Growth hormone deficiency is one of the most common cases. Isn’t that deterministic?

    That height has a strong genetic component, of the polygenic type, is well known and well demonstrated. This is determinism, with an outcome that is influenced by many different variables (including the non-genetic ones, which are as deterministic as the genetic ones), and therefore can best be described probabilistically. Which is my point. Or would you deny that serious nutritional deficiency can affect growth? Is that a quantum effect, in your opinion?

    I insist so much on determinism because, of course, all science studies deterministic effects. Either directly, or in probabilistic form. Deterministic they are, just the same.

    You seem obsessed by the strange idea that randomness is something different from determinism. That idea is completely wrong. Randomness is only a form of determinism, where we cannot analyze the variables in detail.

    Even if quantum probability were intrinsic, it is connected to determinism just the same: the probabilities of observed measures are dictated with extreme precision by the wave function, and the wave function is a completely deterministic reality.

    Your example of quantum effects on the macroscopic world is simply wrong. We all react to things that, in some way, derive from quantum effects that have “collapsed”. A table is a repository of quantum effects that have collapsed, and therefore we perceive it as a solid and stable reality.

    Quantum reality is different from traditional physics at the level of the wave function, before we observe it or measure it. We, like the rats, can certainly react to quantum wave functions that have become measurable things, with specific directions and positions and so on. IOWs, they can be described by traditional, fully deterministic physics.

    Superconductivity, instead, is an example of a macroscopic system where quantum behaviours can be demonstrated.

    Again, you can believe as you like, but your ideas are, very simply, a denial of science and of all that we know. I will not follow you there.

    The only thing we seem to agree about is that:

    “Yes, consciousness (don’t you mean Free Will?) would interfere with determinism for sure.”

    That’s absolutely true! 🙂

  215. 215
    gpuccio says:

    To all:

    I have just posted a comment on the Ubiquitin thread. It is pertinent to the discussion here, too (see the part about E3 ligases in the OP). So, I paste it here too:

    The fact that different E3 ligases can interact with the same substrate has been presented by our kind friends from the other side as evidence of their “promiscuity” and poor specificity.

    Of course, I have pointed to the simple fact, supported even by the authors of the paper they referred to, that different E3 ligases could bind the same substrate, but in different contexts. Therefore, that is a sign of extreme specificity, not of promiscuity.

    See comment #834 here. This is the relevant statement from the quoted paper:

    Significant degrees of redundancy and multiplicity. Any particular substrate may be targeted by multiple E3 ligases at different sites, and a single E3 ligase may target multiple substrates under different conditions or in different cellular compartments. This drives a huge diversity in spatial and temporal control of ubiquitylation (reviewed by ref. [61]). Cellular context is an important consideration, as substrate–ligase pairs identified by biochemical methods may not be expressed or interact in the same sub-cellular compartment.

    Well, here is a brand new paper that shows clearly how different E3 ligases target the same substrate at different steps of the cell cycle, and with different functional meaning. The “huge diversity in spatial and temporal control of ubiquitylation” is here clearly demonstrated.

    The HECT-type ubiquitin ligase Tom1 contributes to the turnover of Spo12, a component of the FEAR network, in G2/M phase.

    April 23, 2018

    https://www.ncbi.nlm.nih.gov/pubmed/29683484

    Abstract
    The ubiquitin-proteasome system plays a crucial role in cell cycle progression. A previous study suggested that Spo12, a component of the Cdc fourteen early anaphase release (FEAR) network, is targeted for degradation by the APC/CCdh1 complex in G1 phase. In the present study, we demonstrate that the Hect-type ubiquitin ligase Tom1 contributes to the turnover of Spo12 in G2/M phase. Co-immunoprecipitation analysis confirmed that Tom1 and Spo12 interact. Overexpression of Spo12 is cytotoxic in the absence of Tom1. Notably, Spo12 is degraded in S phase even in the absence of Tom1 and Cdh1, suggesting that an additional E3 ligase(s) also mediates Spo12 degradation. Together, we propose that several distinct degradation pathways control the level of Spo12 during the cell cycle.

    So, we have:

    a) One target: Spo12

    b) Three different functional moments:

    – G1 phase: control implemented by the APC/Ccdh1 E3 ligase

    – G2/M phase: control implemented by the Tom1 E3 ligase

    – S phase: control probably implemented by additional E3 ligase(s)

    One substrate, three different functional contexts, three different E3 ligases: this is specificity at its best! 🙂

  216. 216
    gpuccio says:

    OLV at #214:

    Interesting. 🙂

    DNA and chromatin states are certainly a major component of transcription regulation, and probably still the least understood.

  217. 217
    uncommon_avles says:

    At what level of magnification does ID’s 500 bit complexity test start and stop? By ID standards, everything is complex because at the atomic level obviously the electron can’t exist in the probability cloud. It should have fallen into the nucleus, right?

  218. 218
    Nonlin.org says:

    Origens@213

    There is a process A which can be random or not — we do not know. Now the outcome of process A can tell us two things:
    1. The outcome can be consistent with process A being random, in which case it tells us that process A could be random.
    2. The outcome can be inconsistent with process A being random, in which case it tells us that process A cannot be random.

    1. It could be or it may not be as shown. Or it could be a combination as in “only 1 to 6 outcomes w. uniform distribution – see dice”. Therefore it doesn’t tell you “it is”.

    For “could be” to have any value, you must attach a probability. And you can’t, because any random sequence can also be non-random generated! There is no such thing as: “given this outcome, there’s an X % probability the process is random”. Check your statistics book! Get it?

    2. It could still be random with almost zero probability. We generally take that as “not random”.

    As I explained to you already, Newton’s laws are, in fact, descriptions of laws. They are not the laws ‘an sich’.

    nonlin: … an example of overturned “laws”.

    Overturned descriptions of laws. Gravity itself was not overturned.

    Not descriptions of “laws” but descriptions of ‘observations’. That’s a huge difference you keep missing. And yes, we call these descriptions of ‘observations’, “laws”. Get it?

    We will never know ‘gravity’ but we will have ‘observations’ consistent with ‘gravity’… and one day a black swan shows up and we call that “dark matter”. But maybe there is no “dark matter” and in fact “the law” needs to change.

  219. 219
    kairosfocus says:

    UA, ever took apart a fairly simple mechanical contrivance such as a fishing reel? Notice, how it is made of arranged, coupled parts that work together to achieve function? Where, parts use materials, and so forth? Now, apply to the body plan and associated structures. The same obtains, and a logical first answer is to parts, wholes and to the assembly-coupling process. That is a commonplace, not hard to see; and yes there is fuzziness around the edges of concepts, scales etc but not enough to twist the point into the meaninglessness you seem to want to get to. Now, go to the cell, considered as a body plan in its own right. We now have organelles, molecules, membranes and so forth. Molecular nanotech parts. Much of this turns on AA sequence chains, folding and assembly, most famously with the flagellum. Parts, assemblies, wholes. Next ponder D/RNA and info coding, here we see parts, assembly, wholes that use framing techniques. Nobel Prize level work identified codes and we have seen associated machinery that fits with the classic info system model, as say Yockey pointed out. All of this, despite fuzziness. KF

  220. 220
    OLV says:

    gpuccio (217):

    DNA and chromatin states are certainly a major component of transcription regulation, and probably still the least understood.

    Very interesting statement.
    Could those states be associated with ID theory too, even if they are not straightforwardly quantifiable (at least at this moment)?
    Thanks.

  221. 221
    kairosfocus says:

    Nonlin, actually, probability, plausibility, needle in haystack search challenge and linked themes take on importance long before we get to scales and values on probability models. For instance, we can readily show that 3-d functional organisation can be reduced to description languages, e.g. Autocad etc and in the end structured Y/N chains. We can then ponder a von Neumann replicator with a constructor that reads and effects the codes. From this, we can see that a coherent functional entity can be identified and we can play with the config space for components and for assembly. It is not hard to see that function comes in deeply isolated islands in the space of possibilities. A 500 – 1,000 bit string has 3.27*10^150 to 1.07*10^301 possibilities. It is easy to see that 10^57 atoms changing at 10^13 to 10^14 states/sec or 10^80 at similar rates (fast for organic type reactions) will only be able to sample very small fractions of such config spaces in 10^17 s, about the timeline from a big bang. To appeal to blind chance and/or mechanical necessity is then a futile strategy to explain FSCO/I — an appeal to a long chain of statistical miracles. And already, just on D/RNA and protein synthesis where we have six basic bits per three base codon and 4.32 bits per AA in a protein, we are utterly beyond the relevant threshold. There is just one empirically warranted, analytically plausible explanation for FSCO/I rich systems, design. And the rhetorical gymnastics exerted to duck that only inadvertently underscore the strength of that design inference to best explanation on tested, reliable sign. Where, no this is not appeal to incredulity, it is inference to best explanation anchored on massively evident empirical facts and linked analysis as outlined. The selective hyperskepticism and turnabout projection so many objectors resort to to dodge an inference supported by a trillion member observational base speak volumes. KF
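    As a minimal numerical sketch of the figures above (using the quoted atom counts and timeline, and taking the faster 10^14 states per second):

```python
import math

# Minimal sketch checking the figures quoted above.

print(f"{float(2**500):.2e}")    # ~3.27e150 configurations for a 500-bit string
print(f"{float(2**1000):.2e}")   # ~1.07e301 configurations for a 1,000-bit string

solar_samples    = 1e57 * 1e14 * 1e17   # atoms * states/sec * seconds
universe_samples = 1e80 * 1e14 * 1e17

print(f"{solar_samples / 2**500:.1e}")     # ~3.1e-63 of the 500-bit space
print(f"{universe_samples / 2**500:.1e}")  # ~3.1e-40 of the 500-bit space

print(round(math.log2(4 ** 3), 2))   # 6.0  bits per three-base codon
print(round(math.log2(20), 2))       # 4.32 bits per amino acid
```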

  222. 222
    gpuccio says:

    uncommon_avles:

    All my examples here are at the protein sequence level.

    If I understood your “point” about electrons, I would certainly answer.

  223. 223
    OLV says:

    kairosfocus (220,222):

    Very interesting comments.
    Thanks.

  224. 224
    gpuccio says:

    OLV:

    “Could those states be associated with ID theory too, even if they are not straightforwardly quantifiable (at least at this moment)?”

    Yes, of course. All transcription regulations, indeed all forms of regulation, are most probably complex, and can in principle be analyzed by ID theory. Of course, there are certainly problems in quantifying the functional information, and that’s why I stick to protein sequences.

  225. 225
    gpuccio says:

    KF:

    Yes, thank you for #220 and #222: very clear, as usual. 🙂

  226. 226
    Nonlin.org says:

    Origens@213

    Following my comment @219, what trips you up is the asymmetry between random and non-random. While non-random (design) can easily look random, it’s almost impossible for random to look non-random for anything larger than a few bits.

    Data communication systems do their best to output random-like data for protection and for communication efficiency. On the other hand, ‘infinite monkey’ experiments have always failed and always will: https://en.wikipedia.org/wiki/Infinite_monkey_theorem

    kairosfocus@222

    Did you mean to reply to someone else? While I might agree with your argument, I find it cumbersome, hence not persuasive. As I mentioned before, probabilities get extreme very fast so you don’t need to distill the ocean (or, in this case, the universe). See my comment 219 and the whole discussion with Origens. I am hopeful he’ll get it this time 🙂

  227. 227
    Nonlin.org says:

    gpuccio@215,

    You’re being exposed to ideas you’ve not seen before, so your negative reaction is totally understandable.

    This thread is not about height so, to be brief: a) Inert objects also have heights. b) To your examples, the information comes from statistical deviation, not from “height”. If your subject is a midget, pygmy, of different age, or a turkey, you won’t draw any conclusions from “height”.

    We’re not reaching any conclusions on determinism. How can you say “everything is deterministic” AND “Free Will interferes with determinism”? I am not obsessed with anything – just trying to understand your point and what makes you so sure. Last I checked, Randomness that comes from deterministic systems is called pseudo-random: https://www.random.org/. Also, “Wave function collapse” is only one interpretation in quantum mechanics.

  228. 228
    gpuccio says:

    Nonlin.org:

    I am rather accustomed to “ideas I’ve not seen before”. Believe me.

    But they must gain my interest for their merits.

    I find merit in some of the things you say. But not in many others.

    a) Inert objects also have heights.

    They certainly have dimensions. Height, in my context, was obviously used for humans. It’s not clear what your point is about objects having heights. I think there is no point at all.

    b) To your examples, the information comes from statistical deviation, not from “height”. If your subject is a midget, pygmy, of different age, or a turkey, you won’t draw any conclusions from “height”.

    Nonsense. The information comes from how much height deviates from a reference population. Of course the reference population must be appropriate. First of all, height is usually expressed for age groups. For children, you have very exact percentiles for age.

    If you are a “midget”, whatever you mean, you could be affected by a specific disease. If you are a turkey, or a pygmy, you should use reference charts for turkeys or pygmies.

    That’s how science is done. The pertinent field is called Auxology.

    How can you say “everything is deterministic” AND “Free Will interferes with determinism”?

    I have never said that everything is deterministic. What I have said is:

    “I insist so much on determinism because, of course, all science studies deterministic effects. Either directly, or in probabilistic form. Deterministic they are, just the same.”

    Of course free will is not deterministic. But the systems studied by science, either by strict necessity or by probability distributions, are deterministic.

    Last I checked, Randomness that comes from deterministic systems is called pseudo-random: https://www.random.org/.

    You don’t even understand the pages you quote. From that page:

    In reality, most random numbers used in computer programs are pseudo-random, which means they are generated in a predictable fashion using a mathematical formula.

    IOWs, most simple software programs that generate random numbers in a computer do that through rather simple algorithms, and the result is predictable, even if it has some properties of a probabilistic distribution. That’s why they are called “pseudo-random”, not because the system is deterministic, but because the system is a rather simple algorithm, and it cannot really imitate the huge number of variables in a true natural deterministic system that generates a probabilistic distribution.

    Again from the page:

    This is fine for many purposes, but it may not be random in the way you expect if you’re used to dice rolls and lottery drawings.

    RANDOM.ORG offers true random numbers to anyone on the Internet. The randomness comes from atmospheric noise, which for many purposes is better than the pseudo-random number algorithms typically used in computer programs.

    IOWs, they are using a more sophisticated way of generating random numbers, a way that is more similar to natural contexts.

    As clearly stated, dice rolls and lottery drawings are still the best models of random distributions. And, of course, they are fully deterministic systems.

    When we roll dice, the result is fully determined by the laws of mechanics and of classical physics. The same can be said for lottery drawings.

    The random effect is simply due to the fact that we cannot predict the result, or control it. Simply because there are too many variables. Like in human height.

    But those systems are deterministic. There is no effect of free will in them.
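    To make the “pseudo-random” point concrete, here is a minimal sketch using Python’s own seeded generator as the deterministic algorithm: the output looks like ordinary die rolls and is roughly uniform, yet the same seed always reproduces exactly the same sequence.

```python
import random

# Minimal sketch: a deterministic algorithm (a seeded pseudo-random
# generator) produces a random-looking, roughly uniform sequence of
# die faces; the same seed always reproduces the same sequence.

random.seed(2018)
run_1 = [random.randint(1, 6) for _ in range(10_000)]
random.seed(2018)
run_2 = [random.randint(1, 6) for _ in range(10_000)]

print(run_1[:10])                              # looks like ordinary die rolls
print(run_1 == run_2)                          # True: same seed, same sequence
print([run_1.count(f) for f in range(1, 7)])   # each face appears ~1,666 times
```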

    Also, “Wave function collapse” is only one interpretation in quantum mechanics.

    I know, and so? If you are a fan of hidden variables, that is even worse for your position.

    However, nobody really doubts that the wave function is fully deterministic, which was my point.

  229. 229
    Origenes says:

    Nonlin @219, 227

    For “could be” to have any value, you must attach a probability.

    Surely not. For science “could be” is important information on its own.

    It could still be random with almost zero probability. We generally take that as “not random”.

    Indeed. So, an outcome can tell us, with a probability bordering on certainty, that a process is not random. This renders your claim that the outcome tells us “nothing” about randomness bunk.

    Not descriptions of “laws” but descriptions of ‘observations’.

    Wrong again. Descriptions of observations do not amount to a description of laws. One has to ‘see’ lawful regularities ‘in’ multiple observations, in order to describe the laws.

    That’s a huge difference you keep missing. And yes, we call these descriptions of ‘observations’, “laws”. Get it?

    No, because nonsense is incomprehensible. Descriptions of observations do not amount to a description of laws.

    We will never know ‘gravity’ …

    What does that even mean? Do you mean that we will never come up with an accurate description of gravity? How would you know?

    … and one day a black swan shows up and we call that “dark matter”. But maybe there is no “dark matter” and in fact “the law” needs to change.

    There you go again … mixing up descriptions and/or observations with the thing itself. We need to change our description of the law; we cannot change the law itself.

    While non-random (design) can easily look random ….

    FYI non-random does not equal design — non-random can be design or law.
    – – – – –

    Nonlin (to GPuccio): You’re being exposed to ideas you’ve not seen before, so your negative reaction is totally understandable.

    ROFL

  230. 230
    OLV says:

    gpuccio (225):

    All transcription regulations, indeed all forms of regulation, are most probably complex, and can in principle be analyzed by ID theory.

    That’s clear. Thanks.

  231. 231
    uncommon_avles says:

    gpuccio @ 223

    All my examples here are at the protein sequence level.
    If I understood your “point” about electrons, I would certainly answer.

    My point is quite simple – ID’s 500 bit threshold to determine if something is made by an agency depends on the magnification you use to examine a process/object. At lower magnification (when “complex” cell mechanisms were not known by scientists), a process like combining of cells would not be above ID’s 500 bit threshold.
    At atomic level magnification, since everything has complex atomic structure and “impossible” rotation of electrons in different orbitals (probability cloud) around the nucleus, and “impossible” existence of leptons and gluons, obviously everything on earth will be above ID’s 500 bit threshold. So isn’t this 500 bit threshold just a farce?

  232. 232
    Nonlin.org says:

    gpuccio@229

    Hopefully, no one would draw any medical conclusions based on height alone. The point was that it’s a convention to measure the way we do. We may infer the same (or better) statistical deviation from say “extended arms height”, total volume, weight (which is being measured) etc. We’re looking for statistical deviation, not for height specifically.

    You said: “all phenomena that we describe as random are completely deterministic”. And your answers to “how would you know?” were inadequate. At a minimum you should be wary of such categorical claims.

    I don’t know that this is true either: “all science studies deterministic effects”, since we do study free will.

    None of your quotes negates my statement: “Last I checked, Randomness that comes from deterministic systems is called pseudo-random”.

    You keep insisting but have no way to prove: “When we roll dice, the result is fully determined by the laws of mechanics and of classical physics. The same can be said for lottery drawings” …and “But those systems are deterministic”.

    The wave function is just the mathematical probability function so cannot be deterministic more than say a circle is “deterministic”. But individual events are unpredictable.

  233. 233
    Nonlin.org says:

    Origenes@230

    It’s not “could be” but “could be random” – which is downright dumb… especially in statistics. Sure, anything “could be random”.

    If “a process is not random”, that tells you something about “randomness”?!? Wow!

    What the heck can “lawful regularities ‘in’ multiple observations” mean? Total nonsense.

    We will never know ‘gravity’ means that unless God tells us “this is gravity”, we can never be 100% sure. Not that you can understand…

    “mixing up descriptions and/or observations with the thing itself” – no, you’re mixing stuff you can confirm with stuff you just imagine you understand.

    “non-random can be design or law”. What the heck is “law” and how is it different than ‘design’? Presumably not something coming from the politicians.

  234. 234
    OLV says:

    Nonlin.org (233):

    I don’t know that this is true either: “all science studies deterministic effects”, since we do study free will.

    “we do study free will”

    1. “we”? Who?
    2. Does “we” = “science”?

    Thanks.

  235. 235
    OLV says:

    Nonlin.org,

    I took a quick look at your interesting website.

    In the title of this article:
    http://nonlin.org/cow-reptiles/
    shouldn’t it be “come” instead of “came”?

    Thanks.

  236. 236
  237. 237
    Origenes says:

    Nonlin @234

    …“could be random” – … is downright dumb… especially in statistics. Sure, anything “could be random”.

    Wrong again. Not anything can be random. Spaceships, jet airplanes, nuclear power plants, libraries full of science texts and novels and supercomputers running partial differential equation solving software do not come about by random processes. You do not understand the design inference. In fact, you are continually missing the whole point.

    If “a process is not random”, that tells you something about “randomness”?!? Wow!

    Sure, it tells us that the randomness of the process borders zero. Why are these simple matters so difficult for you to understand?

    What the heck can “lawful regularities ‘in’ multiple observations” mean? Total nonsense.

    It means that one grasps the regularities by comparing multiple observations. By doing so one can ‘see’ the effects of the law. This is pretty basic stuff Nonlin …

    We will never know ‘gravity’ means that unless God tells us “this is gravity”, we can never be 100% sure.

    How would you know? Reference please.

    “non-random can be design or law”. What the heck is “law” and how is it different than ‘design’? Presumably not something coming from the politicians.

    You are making less and less sense.
    Read #208 & this article by Paul Davies.

  238. 238
    gpuccio says:

    Nonlin.org:

    I don’t want to repeat my arguments. You can think as you like.

    You are accumulating a series of meaningless statements:

    “Hopefully, no one would draw any medical conclusions based on height alone.”

    Yes, and so?

    “The point was that it’s a convention to measure the way we do.”

    A convention? We measure human height according to its definition.

    “We may infer the same (or better) statistical deviation from say “extended arms height”, total volume, weight (which is being measured) etc.”

    Those are different variables. Weight is different from height. Deviations from expected weight have a different medical meaning. These are all issues well analyzed in Auxology, a scientific discipline that you seem to ignore.

    “I don’t know that this is true either: “all science studies deterministic effects”, since we do study free will.”

    There is no way to study free will scientifically. It is a merely philosophical issue, well beyond the boundaries of science. Because it is connected to the transcendental nature of the conscious “I”.

    Even human sciences, like psychology or sociology, cannot study free will. They can study human behaviour, but they really deal with those parts of human behaviour that are predictable, and therefore are not a model of free will.

    OK, I will make a last attempt at clarifying the issue of determinism and probability. I will give a very simple example, but please answer my questions precisely, because your position has remained vague and undefined up to now.

    Let’s go again to rolling dice.

    Just to avoid any distraction, let’s avoid any intervention of conscious agents.

    So, we have an automated system, which can toss a die repeatedly. The system has many uncontrolled variables: the die falls from above, in variable positions, and then a spring tosses it in the air, where its trajectory is practically unpredictable. It could even bounce on the walls of the system, always practically unpredictable.

    The system makes 10000 tosses, and the results are recorded. In the end, the six possible outcomes are distributed very much in accord with the expected uniform distribution, confirming that such a distribution, with a probability of about 0.1666… (1/6) for each independent outcome, describes the system very appropriately.

    Now, my point about determinism is very simple. The trajectory of each single toss is completely deterministic.

    Why do I say that? Because, of course, we know from Newtonian mechanics that the trajectories of physical objects can be accurately described by the laws of mechanics, with an extremely high degree of precision, considering the forces that act on the object, its initial position, the mass and shape of the object, the gravitational field, friction, and so on.

    If those deterministic laws were not so precise, we could not send our space vehicles anywhere.

    Those same laws that determine the trajectory of a space vehicle equally determine the different trajectories of our dice. Those trajectories are completely deterministic, each of them.

    And yes, 10000 different trajectories, determined by the same laws but with different values of the involved variables, generate exactly the expected random distribution of the outcomes: each outcome remains practically unpredictable, but the set of outcomes obeys precise probabilistic laws.
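
    To make the point concrete, here is a minimal, purely illustrative Python sketch (not a physical simulation: the toy toss() function stands in for Newtonian mechanics, and the spread of initial conditions stands in for the uncontrolled variables of the tossing machine):

    import random
    from collections import Counter

    def toss(omega, t):
        # Purely deterministic "physics": the total rotation angle decides which face lands up.
        # Tiny changes in omega or t change the outcome (sensitivity to initial conditions).
        total_angle = omega * t                  # radians turned during the flight
        return int(total_angle * 1000) % 6 + 1   # map the angle onto the six faces

    counts = Counter()
    for _ in range(10000):
        # Uncontrolled initial conditions: spin rate and flight time vary slightly at each toss.
        omega = random.uniform(50.0, 60.0)       # rad/s
        t = random.uniform(0.4, 0.6)             # s
        counts[toss(omega, t)] += 1

    print(counts)   # roughly 1667 tosses per face: a uniform distribution from deterministic rules

    Each individual toss is computed by a fixed deterministic rule; only the many small variations in the initial conditions produce the overall uniform distribution.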

    So, what are you denying in the above reasoning?

    Are you denying that the movement of a physical object, and its trajectory, are determined by the laws of classical mechanics? Are you invoking quantum effects? Are you saying that scientific laws are meaningless?

    Please, be clear and precise, as I have tried to be.

    Otherwise, we can stop our discussion here.

  239. 239
    ET says:

    uncommon alves:

    At atomic level magnification since everything has complex atomic structure and “impossible” rotation of electrons in different orbitals (probability cloud) around nucleus, “impossible” existence of leptons and gluons, obviously everything on earth will be above the ID’s 500 bit threshold.

    So everything is an artifact? All deaths are murders? Really?

  240. 240

    Truth Will Set You Free says:

    uncommon alves @ 232: Your point might defeat the 500-bit threshold argument but it wouldn’t defeat the general argument for ID. Correct?

  241. 241
    gpuccio says:

    uncommon_avles at #232:

    So isn’t this 500 bit threshold just a farce ?

    No. Your “argument” is a farce.

    Of course, there is absolutely no functional information in electronic clouds. They are simply determined by the laws of physics, in particular quantum mechanics.

    If you understood ID theory, you would know that all configurations that can be explained by necessity (known laws) are not valid specifications. That is quite clear in Dembski’s explanatory filter.

    See also my OP about functional information:

    Functional information defined

    https://uncommondescent.com/intelligent-design/functional-information-defined/

    The issue is debated in the discussion of that thread: #48, #57, #68, #135.

    Functional information is about the bits of specific information that are necessary to implement a function, and that are introduced into the object by setting “configurable switches”, IOWs configurations that are possible in the search space, but not constrained by known laws (see also Abel).

    Electronic clouds are not configurable switches: they are determined by laws, and they cannot implement any functional information.

  242. 242
    gpuccio says:

    Truth Will Set You Free:

    uncommon_avles has no point, and cannot defeat anything.

    See my comment #242.

  243. 243
    Nonlin.org says:

    OLV@235
    Look up “Stanford marshmallow experiment”

    OLV@236
    That’s the title of the OP from the Atlantic – see the link.

    Origenes@238
    You stopped making any sense a long time ago and I am tired of your nonsense. Maybe some other time.

  244. 244

    Upright BiPed says:

    UA,

    My point is quite simple

    Your point is on top of your head. A “bit” is a binary digit; a unit of storage in a medium of digital information. If you start there, you may figure out why your “argument” falls apart the moment you make it.

  245. 245
    gpuccio says:

    Nonlin.org:

    The Stanford marshmallow experiment, and similar, are just experiments that measure personality traits. They are not measuring free will, but rather those already existing personality traits that constrain our free choices.

    There is no way for science to investigate free will.

  246. 246
    gpuccio says:

    Nonlin.org (and OLV):

    Excuse my intrusion, but OLV is right.

    The title from The Atlantic is:

    “How a Quarter of Cow DNA Came From Reptiles”,

    which is correct.

    Your title is:

    “Did a Quarter of Cow DNA Came From Reptiles?”

    which is wrong.

    Therefore, OLV’s friendly suggestion:

    “shouldn’t it be “come” instead of “came”?”

    is perfectly correct.

  247. 247

    Truth Will Set You Free says:

    gpuccio @ 243: That was good. A living legend you are.

  248. 248
    ET says:

    Your point is on top of your head.

    Question-begging. 😉

  249. 249
    gpuccio says:

    Truth Will Set You Free:

    Thank you, you are too kind! 🙂

    I just try to keep my ideas clear about ID theory. All the merits belong to the theory itself! 🙂

  250. 250
    Nonlin.org says:

    gpuccio@239,

    You’re just not getting the point on “height”, and it’s an irrelevant side argument anyway, so I’ll stop here.

    We study will power – see “Stanford marshmallow experiment”. How is that predictable behavior? You already replied but I don’t agree. How do you separate free will from personality traits? And how can you prove personality traits constrain free will?

    On determinism:
    1. Newtonian mechanics is an approximation – it doesn’t give you certainty and it doesn’t take quantum effects into account. Yes, most of those quantum effects cancel each other out for large objects, but your statements are 100% categorical and that’s not right, as you just don’t know FOR SURE.
    2. We can send out space vehicles because they autocorrect their trajectory (negative feedback) and they require finite precision. Also look up positive feedback. For purely positive feedback systems the output will always be at one unpredictable extreme or another regardless of how precisely the input is controlled.
    3. Look up chaos theory: “The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to COMPLETE ACCURACY.”
    4. You dismissed my thought experiment without a proper explanation: “Say you have a double slit experiment and on the other side a number of scared rats that can see one photon (can they? Humans can) and take off in fear in different directions knocking down one domino set or another. That’s your quantum impact on macroscopic events.” You should review the double slit experiment. If you send a large number of photons, you will see the wave function on the screen. But if you send only a couple, you don’t know where they land on that wave function. And if you take a decision based on where these photons landed, you have your non-determinism.

    @236, @247 Right. I corrected the title. Thanks.

  251. 251
    gpuccio says:

    Nonlin.org:

    I have discussed my ideas about free will here at UD many times. I am not sure it is worth starting a long discussion about that here with you.

    Your position about science is really unacceptable.

    Of course empirical science is an approximation. Anyone who knows a little about the philosophy of science is well aware of that.

    Science is not about knowing for sure. Therefore, your criticism that we don’t know for sure is completely irrelevant. Of course we don’t know for sure. Indeed, all human cognition is not about knowing for sure, including your personal ideas.

    But for science that is particularly true.

    And so? Science is extremely useful and powerful, even if it “does not know for sure”. Knowing for sure is not a real requirement, unless we have personality problems. Science is about the best explanation, and best explanations are a really useful, precious thing. I stick to them, and I have never had reasons to complain.

    Complete accuracy is, of course, irrelevant. It is a myth, it cannot exist in the real world. Science is based on measurements, because only measurements allow us to make quantitative theories. But, of course, no measurement is ever completely accurate. Error is implicit in measurement.

    But that is not a problem, because error can be measured too, and that makes measurements reliable in their appropriate context, if error is small enough.

    Chaos theory is of no help to your position. From Wikipedia:

    “Chaos theory is a branch of mathematics focusing on the behavior of dynamical systems that are highly sensitive to initial conditions.”

    Chaotic systems are, of course, completely deterministic. It’s their special features, and the math that describes them, that make them “sensitive” to initial conditions: it’s because they are deterministic that small errors in the measurement of initial conditions make the outcome very different. They are practically unpredictable, but only because small errors in the initial measurements grow quickly, even though we know perfectly well how the system evolves and why the prediction diverges. They are the demonstration of determinism. The appropriate math perfectly understands and describes them.

    And you still misunderstand the double slit experiment, and all quantum mechanics. When a photon is absorbed, be it at the screen or on the retina of your rats, it is absorbed as an individual particle, not as a wave. IOWs, it has position and other properties of an observed particle. It is no longer a quantum wave function. Therefore, what’s the problem with your scenario? Rats of course react to the photon if it is there, on their retina, and don’t react if it is not there. This is classical physics. We are dealing with collapsed wave functions. There is no longer any probability, of any kind. The response of the rats is fully deterministic.

    However, if you have no new arguments, we can stop it here. As said, I hate to repeat the same things many times.

    Please, let me know if you are interested in discussing my model of free will.

  252. 252
    gpuccio says:

    Nonlin.org:

    And you have not answered about the dice model.

    I have described a system that is deterministic, and generates a probability distribution.

    My point is that the probabilistic aspect of the system derives obviously from the deterministic rules that govern its behaviour, and from the great number of independent variables.

    Are you denying that? Do you really believe that the probabilistic aspect of the system derives from hidden quantum effects?

    Please, answer that. It’s a simple question.

  253. 253

    Upright BiPed says:

    Nonlin, allow me to offer you some advice. Here is how you do this:

    “GP, thanks for the conversation. You’ve given me some things to think about. Take care”

  254. 254
    uncommon_avles says:

    GP @242
    Interesting! I thought ID considers the atom as something supra-natural?
    quoted from this OP:

    The stability—indeed, the very existence—of the atom suggests something supra-natural. But since the materialistic worldview does not allow for that, its adherents were challenged to discover a mechanism by which atomic stability could be maintained. However, instead of making a discovery, they settled for coming up with a term, “quantum confinement,” which is a scientific label describing, rather than explaining, the phenomenon.

    If you have no problem in assuming QM explains atomic orbits, why would you assume biological processes are far more ‘complex’ at all? Electron orbit has CSI because it has to be placed in precise energy levels in order to avoid falling into the nucleus. The protons have to be of specific numbers in order to form an element. The protons also have to be bound by precise strong nuclear forces to ensure protons don’t repel and disintegrate the nucleus. Let us not even go into quarks.
    ID is based on an individual’s assumption of CSI in a process/ structure as is clear from the atom’s example. If you think some structure has CSI, you concoct some bits higher than 500, if not, you show bits below 500.

    UB @ 245

    Your point is on top of your head. A “bit” is a binary digit; a unit of storage in a medium of digital information. If you start there, you may figure out why your “argument” falls apart the moment you make it.

    It doesn’t matter what you call it. It is just the ratio of presumed probability of an event to 10^150. The presumption is what is under dispute.

  255. 255

    Upright BiPed says:

    Mathematics won’t excuse you of the category error planted in the middle of your argument.

  256. 256
    OLV says:

    Nonlin.org (244):

    OLV@235
    Look up “Stanford marshmallow experiment”

    gpuccio (246):

    The Stanford marshmallow experiment, and similar, are just experiments that measure personality traits. They are not measuring free will, but rather those already existing personality traits that constrain our free choices.
    There is no way for science to investigate free will.

    Nonlin.org (244):

    OLV@236
    That’s the title of the OP from the Atlantic – see the link.

    gpuccio (247):

    Excuse my intrusion, but OLV is right.
    The title from The Atlantic is:
    “How a Quarter of Cow DNA Came From Reptiles”,
    which is correct.
    Your title is:
    “Did a Quarter of Cow DNA Came From Reptiles?”
    which is wrong.
    Therefore, OLV’s friendly suggestion:
    “shouldn’t it be “come” instead of “came”?”
    is perfectly correct.

  257. 257
    OLV says:

    Upright BiPed (254):

    Nonlin, allow me to offer you some advice. Here is how you do this:
    “GP, thanks for the conversation. You’ve given me some things to think about. Take care”

    That seems humble and prudent.

  258. 258
    OLV says:

    gpuccio,

    Thanks.

  259. 259
    gpuccio says:

    uncommon_avles at #255:

    Of course I have no problems at all in assuming, indeed in firmly believing, that QM explains atomic orbits. It explains them perfectly well, and with an extreme level of precision. I hope we agree at least on that.

    That said, you ask:

    why would you assume biological processes are far more ‘complex’ at all?

    The point, as even you should have understood at this point, is not being generically “complex”, but exhibiting complex functional information. Is that so difficult to realize?

    Protein sequences are functionally complex, because they can implement a specific function by their specific sequence.

    The sequence of a protein is not dictated by any biochemical law in the biological world: it is dictated by the sequence of nucleotides in the protein coding gene. The sequence of nucleotides, again, is not dictated by any biochemical law: the four nucleotides can exist in any order in DNA.

    IOWs, sequences (both of nucleotides in a protein coding gene and of AAs in a protein) are fully contingent. Each AA position or nucleotide position is a configurable switch. IOWs, each position can assume any of the 20 (for AAs) or 4 (for nucleotides) values that are available in the biological context. There is no biochemical law that can dictate the sequence. The sequence is merely informational.

    Therefore, if the sequence we observe is functional (as it is), we can compute a target space and a search space, and compute the specific functional information for that function.

    Can you see the difference with atomic orbits? Atomic orbits can only be those that the laws of QM dictate. They are as they are, they are quantum wave functions. Math describes them perfectly well.

    Protein sequences are contingent, and their functionality points to a design inference, exactly like the meaning of my words in this post, or the functionality of bits in a software code.

    Then you say this strange thing:

    ID is based on an individual’s assumption of CSI in a process/ structure as is clear from the atom’s example.

    Nonsense! ID is based on the objective computation of the functional complexity of an objectively observed function. There are no assumptions there. And there is no functional complexity at all in the atom example, as explained. Even you should be able to understand that.

    If you think some structure has CSI, you concoct some bits higher than 500, if not, you show bits below 500.

    This is simply a lie. And a very silly one.

    Then you say (to UB) this even stranger thing:

    It doesn’t matter what you call it. It is just the ratio of presumed probability of an event to 10^150. The presumption is what is under dispute.

    ????? What do you mean? Do you even understand what you are saying?

    “the ratio of presumed probability of an event to 10^150”?

    The only ratio in ID is the ratio of the target space to the search space. That ratio is the probability of finding a sequence in the target space. There is no presumption.

    And 500 bits (corresponding to a probability of about 1 in 10^150) is simply an appropriate threshold of complexity for the -log2 of that ratio. It is not part of the ratio itself.

    But I suppose that, at this point, you have lost any reasonable credibility.

  260. 260
    uncommon_avles says:

    gp @ 260

    The point, as even you should have understood at this point, is not being generically “complex”, but exhibiting complex functional information. Is that so difficult to realize?
    Protein sequences are functionally complex, because they can implement a specific function by their specific sequence.

    It seems ID is restricted only to protein sequences and not to any other events showing CSI:-)

    Nonsense! ID is based on the objective computation of the functional complexity of an objectively observed function.

    Which is precisely what I am challenging. You cannot objectively compute the CSI because the probability of an event/ process (say protein folding) cannot be described as a ratio at all. You need the probability density function with shape, scale and location parameters. Eg if the pdf of an event is presumed to be “Generalized Gamma”, you need to know not just that it is general gamma but also the shape parameters (k, alpha), the scale parameter (beta) and the location parameter ( gamma).

    ????? What do you mean? Do you even understand what you are saying?

    1 in 10^150 is the universal probability bound; that is where you get the 500 bit threshold from. -Log2[10^-150] = 498.29. More precisely, it is -Log2[3.05×10^-151] = 500. Unless you are restricting the CSI to protein sequences and related events alone, probability has to be presumed because processes and events in the cell are stochastic.
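
    For reference, a quick numerical check of these figures (Python, purely illustrative):

    import math

    print(f"{2 ** 500:.4e}")         # 3.2734e+150
    print(-math.log2(10 ** -150))    # 498.289...
    print(-math.log2(3.05e-151))     # 500.00...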

  261. 261
    ET says:

    uncommon alves:

    It seems ID is restricted only to protein sequences and not to any other events showing CSI

    Then you are ignorant of ID. That means you need to go and educate yourself and then come back to discuss it.

    And there isn’t any evidence that all processes and events in cells are stochastic. If there were we wouldn’t be talking about ID.

  262. 262
    kairosfocus says:

    UA,

    the D/RNA based info-comms system and linked protein synthesis sit at the root of cell based life, and are at the heart of the system of the cell, as proteins are its workhorse molecules and key technology.

    This system uses alphanumeric, framed codes — already, this is language antecedent to and a key causal factor in cell based life — with start/stop, regulation, interwoven codes, splicing systems and more. In addition, codes for proteins come in deeply isolated clusters in AA sequence space (much less the wider space of C-chemistry!), leading directly to the needle in haystack, islands of function phenomenon. Thus, deep search challenge.

    For, in many cases, we can readily show that the complexity involved exceeds 500 – 1,000 bits.

    That threshold is key as at the two ends, we can readily show that the other known source of highly contingent outcomes apart from intelligent action, chance, is impotent to search enough of a configuration space of that scope to be more than an appeal to statistical miracle in the teeth of a readily demonstrated alternative: intelligent, purposefully directed configuration. As we see from the text of your and my comments in this thread.

    As has already been outlined in-thread, fast organic rxn rates make 10^12 – 14/s a maximum plausible observation rate. 10^17 s is of the order of time since singularity on the usual timeline. Sol sys is ~ 10^57 atoms, mostly H but we can ignore that point of generosity. Likewise, observed cosmos is ~ 10^80 atoms, mostly H then He. Give that many sol system atoms each a tray of 500 coins, flipped every 10^-14 s and use that as a search model. Likewise for cosmos, use 1000 coins each. Or, if you want something more “scientific,” try that many atoms of a paramagnetic substance in a weak B field with parallel and antiparallel states.

    This is a simple model giving state spaces of 3.27*10^150 to 1.07*10^301 possibilities. Add the indices and you see: [a] 57 + 14 + 17 => 10^88 possible observations, a factor of about 10^-62 of the space for 500 bits; [b] for the observed cosmos, 80 + 14 + 17 => 10^111 possible observations, a factor of about 10^-190 of the space for 1,000 bits.
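
    For those who want to check the orders of magnitude, a small, purely illustrative Python sketch (the atom counts, rate and duration are the round figures used above):

    atoms_solar  = 1e57    # atoms in the solar system (order of magnitude)
    atoms_cosmos = 1e80    # atoms in the observed cosmos (order of magnitude)
    rate         = 1e14    # fastest plausible observation rate, per second
    duration     = 1e17    # seconds on the usual timeline since the singularity

    space_500  = 2.0 ** 500     # ~3.27e150 configurations of 500 coins
    space_1000 = 2.0 ** 1000    # ~1.07e301 configurations of 1,000 coins

    obs_solar  = atoms_solar * rate * duration     # ~1e88 possible observations
    obs_cosmos = atoms_cosmos * rate * duration    # ~1e111 possible observations

    print(obs_solar / space_500)     # ~3.1e-63: a vanishingly small fraction of the 500-bit space
    print(obs_cosmos / space_1000)   # ~9.3e-191: likewise for the 1,000-bit space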

    Islands of function are thus patently empirically unobservable on blind chance processes, as search possibility rounds down to effectively no search in both cases.

    You may suggest that there are laws that write in cell based life in terrestrial planets in habitable zones. Fine, you just added a huge quantum of fine tuning to the already formidable cosmological design inference.

    Or, you may wish to posit a quasi-infinite multiverse.

    Fine, that then runs into Leslie’s deeply isolated fly swatted by a bullet — LOCAL fine tuning is just as wondrous as global, pointing to a sharpshooter with a tack-driver of a fine tuned rifle. Matters not that some zones on the wall are positively carpeted with flies, what we observe on the logical structure and quantity of physics is not plausible on a blind multiverse hyp. (We SHOULD be seeing a Boltzmann brain world or the like.)

    The design inference on C Chem, aqueous medium, code using cell based life in a fine tuning world is quite robust, thank you.

    Regardless of ideologically loaded dismissive rhetoric.

    And, it puts design in both the world of life from the root up and in the cosmos from the root of reality up.

    That is what advocates of self-referential, self-falsifying evolutionary materialistic scientism and fellow travellers (panpsychism being the latest to pop up here at UD) face.

    With those sorts of alternatives on the table, the design inference is a no-brainer, no sweat choice.

    KF

  263. 263
    gpuccio says:

    uncommon_avles at #261:

    OK, let’s spend our time this way. I hope that someone can benefit from the clarifications. Not you, probably, given your attitude.

    It seems ID is restricted only to protein sequences and not to any other events showing CSI:-)

    Not at all. I discuss protein sequences, because functional information is easier to measure in them. If you want to compute functional information for other types of structures, be my guest.

    ID can be applied to any object exhibiting functional information, but of course the difficulties in measuring it are different according to the type of object and context.

    Which is precisely what I am challenging. You cannot objectively compute the CSI because the probability of an event/ process (say protein folding) cannot be described as a ratio at all. You need the probability density function with shape, scale and location parameters. Eg if the pdf of an event is presumed to be “Generalized Gamma”, you need to know not just that it is general gamma but also the shape parameters (k, alpha), the scale parameter (beta) and the location parameter ( gamma).

    Again, nonsense. Protein folding has nothing to do with the reasoning here. Nor are we computing the probability of the function.

    What we are computing is the ratio between the target space and the search space, IOWs the probability of finding a sequence that can implement the function by a random walk in the search space of possible sequences. It is something else entirely.

    The target space is the set of the sequences of a certain length that can implement the function as defined. The search space is the total number of sequences of that length that can potentially be reached in the system.

    The function is only used to generate a binary partition in the search space: sequences that can implement it, and sequences that cannot implement it.

    The search is simply a random walk in the search space of sequences. It has nothing to do with the function, because it is a blind random walk. All unrelated sequences have essentially the same probability of being reached, and therefore a uniform distribution can be assumed. Even if the distribution is not perfectly uniform, the important point is that the distribution has nothing to do with the function, because it is the distribution of the results of a random walk in a sequence space. IOWs, there exists no distribution that can favor some specifically functional sequence, because the search space has absolutely no information about the function.

    This is already obvious for the protein sequence space, but it becomes absolutely obvious, beyond any possible doubt, if you consider that the real space where the random walk takes place is the space of nucleotide sequences, that of course can never have any information about protein functionality, because it is only symbolically related to protein sequences, and of course the random walk of random mutations at the level of DNA has absolutely no information about that.

    So, your rambling about probability distributions has no meaning at all.

    1 in 10^150 is the universal probability bound; that is where you get the 500 bit threshold from. -Log2[10^-150] = 498.29. More precisely, it is -Log2[3.05×10^-151] = 500.

    I know very well what the UPB is. But you had said, literally:

    “It doesn’t matter what you call it. It is just the ratio of presumed probability of an event to 10^150.”

    which has no meaning at all.

    The ratio of the target space to the search space is the probability, which becomes the FI if expressed as -log2.

    500 bits (corresponding to a probability of about 1 in 10^150) is the threshold that we can use to categorize FI as a binary variable (complex: yes/no).

    Using a threshold to categorize a numerical variable is not a ratio. Your statement was simply wrong and meaningless, and I have corrected it.
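
    To make the computation explicit, a minimal Python sketch (the target and search space sizes here are toy numbers chosen only for illustration, not an estimate for any real protein):

    import math

    def functional_information(target_space, search_space):
        # FI = -log2(|target| / |search|), expressed in bits.
        return -math.log2(target_space / search_space)

    search = 20 ** 150    # all sequences of length 150 over the 20 AAs
    target = 20 ** 20     # hypothetical number of sequences implementing the defined function
    fi = functional_information(target, search)
    print(f"FI = {fi:.1f} bits, complex: {fi > 500}")   # FI = 561.9 bits, complex: True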

    Unless you are restricting the CSI to protein sequence and related events alone, probability has to be presumed because processes and events in cell are stochastic.

    This is really beyond any understanding. What do you mean?

    I suspect that you really understand nothing of ID theory. Indeed, your arrogance has the distinct flavour of ignorance.

    I apply functional information to protein sequences, as explained. It can potentially be applied to any object, biological or not.

    Probability must always be assessed, not presumed. It is usually impossible to measure probability exactly, but it can often be estimated indirectly, by approximation. That is the case for protein sequences, where conservation through long evolutionary times allows us to estimate functional constraints.

    The probability distribution in the sequence space can also be estimated realistically (see the considerations above).

    The only thing that cannot be understood realistically seems to be your statement.

  264. 264
    kairosfocus says:

    GP, pardon but my check is 2^500 = 3.27*10^150. KF

  265. 265
    gpuccio says:

    KF:

    You are right, of course! 🙂

  266. 266
    uncommon_avles says:

    KF@265 AND GP @ 266
    Heh. I will go through your other replies; meanwhile, please make up your mind – is the bits threshold +500 or -500?!!

    -Log2[3.27*10^150]= -499.99. If you try to make it positive with -Log2[3.27*10^-150]= -Log2[3.27/10^150] = 496.58.
    The correct answer is what I posted above @261
    – Log2[3.05×10^-151]=500.

  267. 267
    kairosfocus says:

    UA, surely you recognise rounding issues? I have given rounded values. Calling up my HP50 again, the direct exponent and log calc give, to 4 places, 2^500 = 3.2734*10^150; 15 places are available in principle if you want, but the point should be clear. I add, the log of a number above 1 will be positive, and I reported the actual rounded scope of a config space for 500 bits. Also, for 1,000 bits, 1.07*10^301 possibilities is rounded. KF

  268. 268
    gpuccio says:

    uncommon_avles at #267:

    I think you are right on this point. Good. 🙂

  269. 269

    Upright BiPed says:

    Against my better judgement I’ll make another comment on this thread.

    Electron orbit has CSI because it has to be placed in precise energy levels in order to avoid falling into the nucleus. The protons have to be of specific numbers in order to form an element. The protons also have to be bound by precise strong nuclear forces to ensure protons don’t repel and disintegrate the nucleus.

    You think an atom “has CSI” because it has to have precise energies and constituents in order to be what it is? That would surely help to explain your confusion on the subject.

    You might want to keep in mind that when you make statements about a “500 bit threshold” you are talking about the measurement of a description (a specification) encoded in a medium of information. The ‘energies and constituents’ of an atom is not a description of the energies and constituents of an atom. You are making an (anthropocentric) category error; conflating form with information.

    It seems clear to me that you intend to dig in your heels and fight to keep the error. Nothing can be done about that.

  270. 270
    Nonlin.org says:

    gpuccio@252 / 253

    No doubt science is not about knowing for sure. But your “completely deterministic” claim is extreme and not adequately supported in my opinion. Here is wisdom from a guy that uttered stupidities most of the time: “Extraordinary claims require extraordinary evidence”. We don’t have to agree on this.

    The double slit experiment shows determinism to fail. Think about it: you set up a perfectly deterministic configuration and do the experiment once with particle A ending up at Position A. Then you repeat the experiment with particle B ending up at Position B. Nothing changed in your 100% deterministic setup yet every time you repeat the experiment you don’t know (except statistically) where your particle will end up even if you calibrate your setup to the n-th degree.

    Double slit is totally different than your normal distribution of outputs in a manufacturing plant (your dice model) where tightening the inputs / set-up results in a tighter output distribution with the theoretical conclusion that perfect inputs / set-up will result in perfect outputs (determinism). The probabilistic aspect of these systems should theoretically come from hidden quantum effects, but for real life setups they come from inputs / set-up variability (chaos theory). Not quite what I was looking for: https://www.schneier.com/blog/archives/2009/08/non-randomness.html and https://www.insidescience.org/news/dice-rolls-are-not-completely-random . And I am pretty sure that’s why we end up with normal distribution all the time: because you have big contributors and small contributors to variability.

  271. 271
    Nonlin.org says:

    OLV@257 and Upright BiPed@254

    I hope we’re all here to learn from each other.

    Atlantic OP was already acknowledged.

    gpuccio@252 / 253

    The fact that the double slit waveform is not the normal distribution should tell you this is different than your deterministic system.

  272. 272
    kairosfocus says:

    UB [attn UA], electron orbitals are matters of natural law — and at cosmological level, fine tuning may be a relevant issue. Such is of course exactly what functionally specific complex organisation and/or associated information is about. Chance-driven stochastic processes may be strictly deterministic but sensitive to a host of uncontrolled factors giving rise to random patterns; I think here of tossing a die that tumbles and settles. Such may also be random in principle as seems to happen with various quantum-linked phenomena like zener noise or sky noise. Chance processes are distributed in config spaces, which statistical thermodynamics tells us will be dominated by relative statistical weights of clusters of microstates. Under such conditions, though high contingency is involved, for complex systems on the scales discussed, the practical observability of FSCO/I on blind chance and/or mechanical necessity is effectively nil. This is due to relative statistical weights of clusters and the predominant group phenomenon behind for instance equilibrium and the statistical form of the second law of thermodynamics, etc. As comments in this thread show, FSCO/I as coded textual information (here a linguistic phenomenon) is readily produced by intelligently directed configuration. The observation base is beyond a trillion. That is, there is a highly reliable inference from FSCO/I to design as key causal factor. KF

  273. 273
    OLV says:

    This seems old and the author unknown, but it’s interesting how it argues against ID:

    Biological Complexity

  274. 274
    gpuccio says:

    Nonlin.org:

    Let’s see if we can find some common ground. I will do my best to clarify my position better:

    a) We seem to agree that science “is not about knowing for sure”. That’s good. But I would like to be sure that you agree with me that science is valid and important and useful, even if it is not about knowing for sure, indeed all the more so because of that.

    b) I have never said that reality is “completely deterministic”. You misunderstand me.

    I agree that quantum reality has a probabilistic component, and I believe that such a component is intrinsically probabilistic.

    Moreover, I believe that consciousness and free will are independent components of reality, and that they interact with physical reality, probably by consciously harnessing the probability component of a specific quantum interface. That certainly happens in humans, and it’s my favourite model to explain design in biological reality.

    So, my model of reality is not “completely deterministic”. It allows a well defined space for intrinsic probability (quantum events) and for conscious and free interventions, which are neither deterministic nor probabilistic.

    c) That said, I think that almost all the events that science studies at the macroscopic level, those that are well described by classical physics, are deterministic, in the sense that they can be very well described in terms of necessity (the laws of classical physics), or in other cases in terms of non intrinsic probability (that kind of probabilistic description that is only a way to describe deterministic systems with many independent and unknown variables, like in the case of dice or of polygenic traits).

    d) In all those cases, the scientific approach has no need to consider quantum effects, because they are irrelevant. The biochemical level is, in most cases, a deterministic scenario that has nothing to gain from considering quantum effects. But there are probably important exceptions. Photosynthesis is one of them, and there could be other biochemical scenarios where quantum effects are important. Certainly, they could be important at the level of neurons and synapses, where the interface with consciousness reasonably is to be found.

    e) In general, events at the level of subatomic particles are dictated by quantum mechanics, and macroscopic events are best described in terms of classical physics. As already said, there are exceptions, but exceptions do not change the simple fact that in most cases the scientific approach must be appropriate for the scenario we are describing. Again, science is precious because it is not about knowing for sure. But it is about knowing, definitely.

    f) That said, my arguments for design remain absolutely valid. Design is not determinism. It is the demonstration of the action of consciousness upon reality, and consciousness is neither deterministic nor probabilistic. Design is free will acting at the cognitive level, infusing matter with meaningful and intentional configurations.

    My main objection to your arguments is that I don’t accept your confusion about design and determinism. They are two completely different things.

    g) Inferring design is done by recognizing complex functional information in objects. That allows us to detect design, but only if and when it is detectable.

    Functional information is a key concept, and it relies on detecting target spaces that are contingent, functional and highly unlikely in a system where only deterministic and/or probabilistic processes (either non intrinsic or intrinsic) are acting.

    Contingent configurations (IOWs configurations that cannot be explained by deterministic laws) that are functional and complex (IOWs completely unlikely as a result of random effects due to many independent hidden variables, or even to the intrinsic probability of quantum mechanics) are safe markers of design: the meaningful, intentional intervention of consciousness on matter.

    OK, that’s a summary of my position, as clear as I can make it. Whatever your comments, please make your position equally clear.

  275. 275
    kairosfocus says:

    OLV,

    Let’s look at the opening para of that NCSE propaganda piece — and yes, this is a known advocacy group fully meriting that description:

    The origin of biological complexity is not yet fully explained, but several plausible naturalistic scenarios have been advanced to account for this complexity. “Intelligent design” (ID) advocates, however, contend that only the actions of an “intelligent agent” can generate the information content and complexity observed in biological systems.

    Let’s take it in stages, pointwise:

    >>The origin of biological complexity is not yet fully explained,>>

    1: confession that they do not have an actual, viable, empirically, observationally justified account of how FSCO/I in living systems came about by demonstrated result of blind chance and/or mechanical necessity.

    2: Had they had such, they would trumpet it, and there would be no biological ID case or movement. As, the design inference explanatory filter would be broken.

    3: So, the way this begins gives away the game, they intend to impose methodological naturalism and ideologically lock out the only known, empirically grounded and search challenge plausible causal origin of FSCO/I.

    4: Namely, design, or intelligently directed configuration; for which there is a trillion member observational base.

    5: Note, this includes alphanumeric string based codes and associated communication and cybernetic systems, which are at the heart of cell based life.

    >> but several plausible naturalistic scenarios>>

    6: Which are just-so stories without empirical warrant, or they would have been triumphantly announced as demonstrated fact rather than “plausible naturalISTIC — a clue on ideological imposition — scenarios.”

    7: Plausible, once the actually empirically founded source of FSCO/I has been locked out.

    >> have been advanced to account for this complexity.>>

    8: Scenarios imposed in the teeth of empirical evidence.

    >>“Intelligent design” (ID) advocates, however, contend that only the actions of an “intelligent agent” can generate the information content and complexity>>

    9: notice, scare quotes and dismissal as advocates rather than qualified scientists and scholars in their own right who are backed by empirical evidence on the origin of FSCO/I and analysis on the needle in haystack search challenge.

    10: Notice, the loaded “can,” where the trillion member observation base shows that the only, frequently, observed cause of FSCO/I is intelligently directed configuration, the act of intelligent agents. As NCSE exemplifies in its text.

    >> observed in biological systems.>>

    11: So, from D/RNA and associated cellular execution machinery on up, FSCO/I is observed in living cell based life forms.

    12: No actually empirically warranted case of blind chance and necessity creating it is on the table; while, a trillion member base and linked analysis shows that intelligent design can and does create FSCO/I.

    13: In the case of D/RNA and linked execution systems, these had to be in place BEFORE you could have protein synthesising, self-replicating cells.

    14: this is the province of physical and information sciences, including especially statistical thermodynamics, physics and chemistry.

    15: These clearly point to one empirically warranted, needle in haystack plausible cause, intelligently directed configuration acting at the origin of cell based life as we know it.

    16: But of course, that is ideologically locked out.

    KF

  276. 276
    OLV says:

    KF,

    Excellent review. Thanks.

  277. 277
    bill cole says:

    gpuccio

    Here is a comment from Rumraket to Mung at TSZ. Would love to hear your thoughts.

    If I give you the DNA sequence for a protein coding gene that is known to be functional in some organism, will you do us a favor and calculate how much functional information that gene constitutes? Then we can proceed to analyze whether mutations that affect that function has an effect on the amount of information in the gene. Deal?

  278. 278
    OLV says:

    bill cole:

    Perhaps the port-transcriptional modifications that lead to the mature mRNA make any reference to single genes seem a little vague or imprecise.

    Maybe I’m wrong, but I think gpuccio deals with BLASTing actual protein sequences. However he can clarify this better.

    You may kindly ask your interlocutor to tell you what they think of this:

    What is a gene?

    What’s the current status of the neo_darwinian theory?

    Please, note that the sources of the above links are not ID-friendly.

  279. 279
    OLV says:

    OLV (279):

    Error correction:

    “Perhaps the post-transcriptional modifications…”

  280. 280
  281. 281
    uncommon_avles says:

    Upright BiPed @ 270

    You might want to keep in mind that when you make statements about a “500 bit threshold” you are talking about the measurement of a description (a specification) encoded in a medium of information. The ‘energies and constituents’ of an atom is not a description of the energies and constituents of an atom. You are making an (anthropocentric) category error; conflating form with information.

    The flagella’s structure is used by ID to show complexity. Isn’t that conflating form with information ?
    You seem to think that information (bits) is something different than probability of structure/pattern/ event/process. It is not. It is simply the -Log2 of probability. Log is used for convenience, as probability is multiplicative while Log is additive. Negative Log is used to assign more ‘information’ to less probability. At the end of it all, ‘information’ is just the -Log2 of probability.

  282. 282
    OLV says:

    uncommon_avles,

    I think the ID folks associate Functional Specified Complex Organization with Functional Specified Information, but they don’t always quantify it.
    However, gpuccio can explain this better.

  283. 283

    Upright BiPed says:

    ua,

    You seem to think that information (bits) is something different than probability of structure/pattern/ event/process.

    What an utterly useless conceptualization. I said upfront that commenting to you again was against my better judgment. I should have listened.

    You entered this thread arguing that a “500 bit” informational threshold was a “farce” because at the atomic level (i.e. the specificity of energies and constituents within an atom) everything on earth is above the threshold. I pointed out that you are conflating the measurement of an encoded medium with something that isn’t a medium to begin with, and thus, doesn’t encode any information. You didn’t address that contradiction.

  284. 284
    gpuccio says:

    bill cole at #278:

    I don’t know what Rumraket’s point is, if he really has a point.

    However, the answer is easy enough.

    That’s what I would do.

    I would translate the nucleotide sequence and get the AA sequence of the protein.

    Then I would BLAST it, and reconstruct its evolutionary history. If the protein has a human form, I could use my procedure to evaluate human conserved information, but of course conserved information can be evaluated along any appropriate evolutionary line of descent.

    If the protein exhibits conserved information through some long enough evolutionary separation, let’s say 400 million years, I would simply consider the level of conserved information as given by the BLAST bitscore as a reliable measure of its functional information. If the bitscore is above 500 bits, I would definitely infer design for that functional information.
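
    Just to spell the rule out, a minimal sketch (illustrative only; it assumes Biopython is available, and the bitscore argument is a placeholder for the value one would obtain by BLASTing the translated protein against a distantly separated ortholog):

    from Bio.Seq import Seq

    def infer_design(coding_dna, conserved_bitscore, threshold=500.0):
        # Translate the nucleotide sequence into the AA sequence of the protein.
        protein = Seq(coding_dna).translate(to_stop=True)
        print(f"Protein ({len(protein)} AAs): {protein}")
        # The conserved BLAST bitscore is taken as the estimate of functional information.
        return conserved_bitscore >= threshold

    # Hypothetical example: a made-up coding sequence and a made-up conserved bitscore.
    print(infer_design("ATGGCTGCTAAA" * 10 + "TAA", conserved_bitscore=612.0))   # True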

    Of course, there is always the direct approach. The protein can be studied in the lab, and mutational studies can be implemented, and someone could dedicate his own life to research the relationship between sequence and function for that protein. With all the obvious limitations of that direct approach. Maybe Rumraket could finance the research.

    An important point: if there is not enough information from the evolutionary history of the protein about its function and functional conservation for long evolutionary times, I would simply not make any design inference for it.

    It’s simple enough.

  285. 285
    uncommon_avles says:

    Upright BiPed @ 284
    This is what you said @ 270
    You might want to keep in mind that when you make statements about a “500 bit threshold” you are talking about the measurement of a description (a specification) encoded in a medium of information. The ‘energies and constituents’ of an atom is not a description of the energies and constituents of an atom. You are making an (anthropocentric) category error; conflating form with information.
    and I have answered thus:

    The flagella’s structure is used by ID to show complexity. Isn’t that conflating form with information ?

    You are the one who is choosing not to address that contradiction. If you can be clear about what the relation is between probability, bits and the measurement of an ‘encoded medium’ (whatever that is), I can attempt to answer it.

  286. 286
    gpuccio says:

    uncommon_avles at #282:

    The flagella’s structure is used by ID to show an example of irreducible complexity.

    Of course, each component of the bacterial flagellum is probably functionally complex.

    The point of IC is that, if there is an irreducible core for a function, then the functional complexity must be computed for that core, because the function is implemented only if all the individual components of the core are present. Therefore, in that particular case, we have to multiply the probabilities (and therefore to sum the functional complexity) of the components.

    I will give you a very simple example of IC: the alpha and beta chains of ATP synthase.

    Those two chains form a structure (the F1 hexamer) that is irreducibly complex. No single chain can implement the function, both are necessary.

    Therefore, their functional complexity (which is, as estimated by the e. coli – humans conservation, 561 bits for the alpha chain and 663 bits for the beta chain) must be summed, and the functional complexity of the F1 hexamer becomes 1224 bits, because of the IC of the structure.
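
    In other words (a trivial sketch using the figures above):

    import math

    fi_alpha, fi_beta = 561, 663    # bits, from the E. coli - human conservation quoted above

    # Both chains are required (irreducible core), so the probabilities multiply,
    # which means the bits add: -log2(p_a * p_b) = -log2(p_a) + (-log2(p_b)).
    fi_core = fi_alpha + fi_beta
    print(fi_core, fi_core > 500)                    # 1224 True

    # Sanity check of the identity with small numbers:
    assert -math.log2(2 ** -10 * 2 ** -12) == 10 + 12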

  287. 287
    gpuccio says:

    uncommon_avles at #286:

    Nobody is conflating form with information. It’s not form that counts, but function. And, in particular, functional information, the specific contingent bits necessary to implement that function.

    See my comment #287.

    The same reasoning that I have shown for the hexameric component of the F1 part of ATP synthase can be applied to the flagellum.

    It’s not the form, but what the form can do because of its specific, contingent configuration.

  288. 288
    uncommon_avles says:

    gpuccio @ 287,288
    Ok. The concept is clearer to me now. Thanks.

  289. 289
    gpuccio says:

    uncommon_avles:

    Thanks to you! 🙂

  290. 290
    kairosfocus says:

    UA,

    When form is functionally specific, grounded in particular arrangements and coupling of parts to make a working whole, that “form” is inFORMational in the sense of functionally specific, complex organisation and associated information. That is, the form is informational.

    This is readily seen from something I have routinely pointed to. Namely, how some reasonable and effective description language may specify the requisite form as a structured sequence of answers to Y/N Q’s, AutoCAD being a capital example.

    Nor is this insight a dubious notion of “those IDiots” or the like. In the specific context of functional forms found in the world of life, it was put on the table across the 1970’s by Orgel and Wicken. In fact, that was documented as part of the formative influence behind the original ID works, e.g. Thaxton et al in TMLO, c 1984.

    On irreducibly complex cores of functionally specific structures, the just outlined obviously strongly applies. Indeed, IC is one particular manifestation of FSCO/I. It is fairly common as anyone familiar with the need to have the right car part, properly installed, can testify to.

    When it comes to typical attempts to dismiss the significance of IC, it should first be noted that knockout gene studies commonly used to identify function work by disabling functional wholes by blocking a relevant, targeted part. So, the rhetoric of dismissal distracts from a highly material fact: IC is known to be common in biology to the point of being exploited experimentally to draw conclusions on gene function. Nor is this a novelty in this context: in the notoriously badly ruled Dover trial, Scott Minnich reported as an expert witness on how such studies were applied to the flagellum. The significance of that was of course suppressed in the ruling and in the reporting. That news reporting is demonstrably a case of agit-prop media trumpeting to push an ideologically loaded narrative regardless of credible countering facts. (For a current case in point on such media bankruptcy on a story, kindly see here — note the date on the report.)

    Going beyond, the commonly encountered exaptation argument fails, exploiting failure to connect dots on why IC exists.

    Menuge’s five criteria apply:

    IC is a barrier to the usual suggested counter-argument, co-option or exaptation based on a conveniently available cluster of existing or duplicated parts. For instance, Angus Menuge has noted that:

    For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:

    C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.

    C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.

    C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.

    C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.

    C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.

    ( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)

    In short, the co-ordinated and functional organisation of a complex system is itself a factor that needs credible explanation.

    However, as Luskin notes for the iconic flagellum, “Those who purport to explain flagellar evolution almost always only address C1 and ignore C2-C5.” [ENV.]

    KF

  291. 291
    kairosfocus says:

    PS: Orgel, 1973:

    living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . .

    [HT, Mung, fr. p. 190 & 196:]

    These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure.

    [–> this is of course equivalent to the string of yes/no questions required to specify the relevant J S Wicken “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here — (with here on self-moved agents as designing causes).]

    One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions.  [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes [–> Orgel had high hopes for what Chem evo and body-plan evo could do by way of info generation beyond the FSCO/I threshold, 500 – 1,000 bits.] [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196.]
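
    A loose illustration of that “minimum number of instructions” idea, using compressed length as a crude proxy (this captures the repeating-vs-random contrast Orgel describes, though not functional specification itself):

    import random
    import string
    import zlib

    repetitive = "AB" * 500
    scrambled = "".join(random.choice(string.ascii_uppercase) for _ in range(1000))

    print(len(zlib.compress(repetitive.encode())))   # tiny: "write AB and repeat" is a short description
    print(len(zlib.compress(scrambled.encode())))    # hundreds of bytes: no short description exists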

  292. 292
    gpuccio says:

    OLV at #283:

    “I think the ID folks associate Functional Specified Complex Organization with Functional Specified Information, but they don’t always quantify it.”

    They are two names for the same concept.

    Of course, we only quantify functional information when it is empirically possible to do it.

  293. 293
    OLV says:

    gpuccio,

    That makes sense. Thanks.

  294. 294
    ET says:

    uncommon alves:

    The flagella’s structure is used by ID to show complexity.

    That is false. The flagella’s structure is used by ID to show specified and irreducible complexity. Huge difference.

    Isn’t that conflating form with information ?

    It takes information to get the correct sequences for the proteins. And it takes information to assemble those proteins into the proper configuration.

    Francis Crick said that:

    Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein.

  295. 295
    OLV says:

    OLV (281):

    Does this relate to semiosis too?

    Deciphering the reading of the genetic code by near-cognate tRNA

    PDF

    Please, would somebody comment on this too?

    Rooted tRNAomes and evolution of the genetic code

    BTW, note other tRNA-related papers referenced in the same webpage.

    Especially interested in reading Upright BiPed’s comments, because I like his interesting website “Biosemiosis”, but I would also like to read comments from gpuccio and kairosfocus. Obviously, other commenters are also welcome.

    Thanks.

  296. 296
    ET says:

    Bill Cole @ 278-

    Ask Rumraket how it determined that the sequence provided evolved by means of blind and mindless processes. That’s the point.

    It is also about blind and mindless processes adding information and not just mutations. For all we know the mutations are part of the design of the organisms.

    They talk about gene duplication adding information, but two copies of the same book do not contain more information than one.
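    (A quick illustration of the “two copies of the same book” point, using ordinary compression as a crude stand-in for descriptive information; the random “gene” below is invented for the demo, not real sequence data:)

```python
import random
import zlib

random.seed(0)
gene = ''.join(random.choice('ACGT') for _ in range(10_000)).encode()

single = len(zlib.compress(gene, 9))          # compressed size of one copy
double = len(zlib.compress(gene + gene, 9))   # compressed size of two identical copies
print(single, double)   # the doubled sequence is nowhere near twice the size:
                        # the second copy is essentially "repeat the first"
```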

    Also, what they need to do, and cannot, is demonstrate that gene duplication followed by changes that make the copy code for a different protein was accomplished by blind and mindless processes. And the newly duplicated gene needs a new binding site before it can be expressed. That means their position has even more to explain, and yet it cannot.

  297. 297
    gpuccio says:

    OLV at # 296:

    The paper about near-cognate tRNAs is interesting and very complex. It is essentially about the control, flexibility and stability of the translation process, and the role of modifications in the tRNA and the ribosome. I need some time to read it in detail, but at present it is probably not very relevant to our discussion.

    The paper about the evolution of tRNAs is much less interesting, IMO. While the evolution of tRNAs can be interesting, its possible connections with hypothetical models of evolution of the genetic code seem to be pure imagination.

  298. 298
    Mung says:

    uncommon_avles @ 286:

    So you admit to committing the tu quoque fallacy.

  299. 299
    OLV says:

    gpuccio (298):

    Your comment sufficiently satisfies my curiosity about those two papers at this point.

    Thanks.

  300. 300
    bill cole says:

    gpuccio
    Would you mind doing a work-up of how you calculate the information content of ATP synthase and the Prp8 gene? A request came from TSZ.

  301. 301
    Nonlin.org says:

    gpuccio@275

    a. Agree, but you probably mean ‘the scientific method’ because ‘science’ already means ‘knowledge’ from Latin.
    b. Agree
    c. Yes, we try to understand the deterministic component, but why is restricting research so important to you? In a first phase we try to just clarify a phenomenon without even worrying about causes. Example: “is there such a thing as dark matter, and if so, what properties does it have?”
    d. Probably on quantum effects. And even if we tried, studying quantum effects might not be feasible.
    e. Micro and macro are not separate worlds. The micro impacts the macro for sure. I don’t think the scientific method should be restricted to classical physics. No one can enforce such a restriction anyway.
    f. Your definition for design is cumbersome, full of unclear concepts and untestable – how can you measure meaning and intentionality? How is design related to determinism? See definition of determinism: “the doctrine that all events, including human action, are ultimately determined by causes external to the will.” Are you a proponent of “Predestination”?!? Did I link design and determinism? Don’t think so. All I said was that design is indistinguishable from “law”.
    g. Not at all clear. My objection here was that function depends on an agent which we don’t see. And complexity also seems dependent on function, and hence on the agent. And of course, the same object can have different functions for different agents (example: the family computer that even the cat can play with).

  302. 302
    bill cole says:

    ET

    All this is true. Gpuccio is claiming a limit of 500 bits for blind and unguided processes. I think he has a very good argument and I would like to run with it as hard as possible.

    Joe F had an argument for natural selection adding 500 bits of information to the genome which was deeply flawed and I think he knows it at this point. Gpuccio did an excellent job of pointing out the weaknesses in Joe’s argument.

  303. 303
    gpuccio says:

    bill cole:

    What do they want to know? I have explained it myriads of times.

    As said, I use conservation through long evolutionary times to measure functional constraint.

    The alpha and beta chains of ATP synthase have been highly conserved for maybe billions of years. That is a very long evolutionary time.

    The bitscore between the human form and the bacterial form is a very good measure of how much the sequence is conserved, and therefore of its functional constraint. It expresses the probability of finding that level of homology by chance. (Indeed, the probability is the E-value, which is directly related to the bitscore. Unfortunately, the E-value is set to 0 when it becomes lower than some threshold, and therefore cannot be used as a measure for the high levels of functional information we are discussing here.)

    Therefore, the bitscore is an indirect measure of the functional information in the sequence.
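    (A minimal sketch of the bitscore/E-value relationship referred to here, assuming the standard Karlin-Altschul formula E = m·n·2^(-S); the query and database lengths below are illustrative placeholders, not the parameters of any actual BLAST search:)

```python
import math

def evalue_from_bitscore(bit_score: float, query_len: int, db_len: int) -> float:
    """Karlin-Altschul expectation for a hit of the given bit score: E = m*n*2**(-S)."""
    return query_len * db_len * 2.0 ** (-bit_score)

def bitscore_from_evalue(evalue: float, query_len: int, db_len: int) -> float:
    """Invert the same relation to recover a bit score from a reported E-value."""
    return math.log2(query_len * db_len / evalue)

# A ~500-residue chain against a hypothetical database of 10**9 letters:
print(evalue_from_bitscore(1000, 500, 10**9))    # ~4.7e-290; BLAST would simply report 0.0
print(bitscore_from_evalue(1e-180, 500, 10**9))  # ~637 bits
```

    This is why very strong hits bottom out at a reported E-value of 0, while the bit score keeps growing and remains usable as a measure.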

    As said, the two chains in ATP synthase have more than 1000 bits of functional information, as evaluated by the bitscore between bacteria and humans.

    As explained many times, these two sequences have been exposed to at least 1-2 billion years of neutral variation since the split of the bacterial lineage from the lineage leading to humans. That is more than enough to change everything that could change: 400 million years are more than enough for that, as debated many times, and 1-2 billion years are far more than enough.

    This is just the essence of the reasoning. Again, I don’t know what they really are asking for.

  304. 304
    bill cole says:

    gpuccio
    Thank you. I have a question but I will wait for a reply from TSZ.

  305. 305
    ET says:

    Bill Cole- The TSZ ilk love to equivocate. They flat out refuse to understand that Intelligent Design is NOT anti-evolution even though it has been explained to them. They are a truly pathetic lot and a total waste of time.

  306. 306
    gpuccio says:

    Nonlin.org:

    c) I am not restricting anything. My point is that in a scientific approach, we apply the methodology that is appropriate for what we are studying, according to the facts we know. My statement was:

    “That said, I think that almost all the events that science studies at the macroscopic level, those that are well described by classical physics, are deterministic, in the sense that they can be very well described in terms of necessity (the laws of classical physics), or in other cases in terms of non intrinsic probability (that kind of probabilistic description that is only a way to describe deterministic systems with many independent and unknown variables, like in the case of dice or of polygenic traits).”

    IOWs, we use classical physics and classical probability for these scenarios, because we know very well that they explain these scenarios satisfactorily.

    Quantum mechanics was developed to explain facts that could not be explained by classical physics. It explains those facts very well, and it must be used to explain those facts, not those that are well explained by classical physics.

    Dark matter and dark energy are examples of facts that are not well explained by what we know. Therefore, it is absolutely correct to look for other explanations.

    But there is no reason to look for new explanations for the trajectory of a macroscopic object like a die, when subject to the known forces of classical mechanics. We already know how to describe that scenario satisfactorily.

    d) I am confident that we will be able to study quantum effects wherever they are relevant, including biological scenarios. Quantum mechanics is another form of regularity, even if different from the regularities of classical physics.

    e) I agree that micro and macro are not completely separate, and indeed the separation between classical scenarios and quantum scenarios is still not really understood. Certainly, it is not only a question of big and small.

    But big and small do count. It is a fact that most macro-events can be well described by classical physics, while subatomic events require quantum theory as a default. So, whenever we are studying scenarios for which we know well what to use, we can confidently use what works.

    I have never said that the “scientific method should be restricted to classical physics”. You are really misunderstanding what I think. Both classical physics and quantum mechanics are very good applications of the scientific method, and both work perfectly if applied in the right contexts. Both are theories about mathematical regularities that can explain facts. They are, of course, different theories.

    f) You say:

    “Your definition for design is cumbersome, full of unclear concepts and untestable – how can you measure meaning and intentionality?”

    I have never tried to measure meaning and intentionality. I just recognize that they exist, that they are observable subjective experiences.

    We know that we experience and use the personal intuition of meaning when we design something, and we also experience and use the personal experience of desire and intention.

    What I measure are the results of the conscious process of experiencing meaning and purpose when they originate a design process: complex functional information, an objective property of objects that is empirically known to derive only from conscious and intentional intelligent design.

    My definition of design is clear, and in no way cumbersome and untestable. You can find it in my first OP here:

    Defining Design

    https://uncommondescent.com/intelligent-design/defining-design/

    Design is any process where a conscious agent outputs his conscious representation to some material object.

    It’s very simple and clear. If the form in the object derives, directly or indirectly, from subjective representations that exist before the design process takes place, that is design. Nothing else is design.

    Design can be simple or complex. When it is complex, it generates a specific property in the object, what we call complex functional information. As only a design process can generate that property, as far as we can observe in the whole universe, we can use that property to infer a design origin for an object when we don’t know its origin directly.

    This is not cumbersome at all. It’s essentially Paley’s argument, in a more detailed and quantitative form.

    You ask:

    “How is design related to determinism?”

    It is not related to determinism, except for the fact that the subjective representations precede, and are in a sense a cause of, the final configuration. But it is not really a classical deterministic relationship.

    Of course, the intelligent agent who designs uses his understanding of meaning, as said, to find how to implement the functions he conceives in his conscious experiences (his desire and purpose). Understanding laws is of course part of that process.

    So, we design complex machines using our scientific understanding of scientific laws, which are of course deterministic. But a watch is the result of our understanding of laws, not of the laws themselves. The key point is always the conscious subjective experience.

    As said many times, I am not a determinist: I believe in free agents, and in free will. Therefore, the definition of determinism that you quote is not true for me.

    You must not confuse believing that many things in reality are deterministic (which is what I believe, and what science correctly assumes) with believing that all reality is merely deterministic (which is a philosophical worldview that I completely reject).

    I believe in a deterministic approach to understanding the aspects of reality that are deterministic, but in no way do I believe that all reality is deterministic.

    As said many times, design, which is of course a major part of reality in my worldview, is not deterministic, because it is strictly connected to free will.

    You ask:

    “Are you a proponent of “Predestination”?”

    Not at all. But I believe that everything that exists in the physical plane, including us, is subject to many deterministic influences, even if we, as free agents, are never completely determined by those influences.

    You say:

    “Did I link design and determinism? Don’t think so. All I said was that design is indistinguishable from “law”.”

    But that is exactly the point with which I strongly disagree. Design is absolutely different, and distinguishable, from law.

    First of all, design requires conscious representations, by definition (see above), while laws operate without any conscious intervention, as far as we can observe.

    Second, design can generate complex functional information in objects, and laws cannot do that.

    Remember, complex functional information is the harnessing of specific contingent configurations towards a desired function. No law can do that.

    Therefore, design and law are two different things, and they can be perfectly distinguished.

    g) You say:

    “Not at all clear. My objection here was that function depends on an agent which we don’t see. And complexity also seems dependent on function, and hence on the agent.”

    No. Function is what the object can do. We don’t need the agent to assess function.

    ATP synthase can build ATP from a proton gradient in the cell. That is a fact. We need no agent to assess that.

    Complexity is objective too. We just ask ourselves: how many specific bits (in terms of necessary AA positions) are needed for ATP synthase to work as it works?

    Again, no reference to an agent is necessary to ask and answer that question.
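    (As a rough sketch of this kind of estimate: a strictly necessary AA position contributes about log2(20) ≈ 4.3 bits, and less if several residues are tolerated. The position counts below are invented for illustration, not actual alignment data:)

```python
import math

BITS_PER_FIXED_AA = math.log2(20)   # ~4.32 bits for a position where only one residue works

def functional_bits(fully_constrained: int, partially_constrained: int = 0,
                    tolerated_residues: int = 4) -> float:
    """Sum per-position contributions: log2(20 / number of tolerated residues)."""
    return (fully_constrained * BITS_PER_FIXED_AA
            + partially_constrained * math.log2(20 / tolerated_residues))

print(functional_bits(240))        # ~1037 bits from 240 strictly conserved positions
print(500 / BITS_PER_FIXED_AA)     # ~116 fully fixed positions already reach the 500-bit threshold
```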

    The point is that we know empirically that, if we observe complex functional information, we can safely infer a design origin, and therefore a conscious agent. But that is an inference from what we objectively observe.

    You say:

    “And of course, same object can have different functions for different agents (example: the family computer that even the cat can play with).”

    Of course. But that’s not important.

    I have made many times the example that a notebook computer can certainly be used as a paperweight. Why not?

    But the point is that the paperweight function is simple, while the computer function is very complex.

    Our object can implement both functions, and probably many more: it could be used, for example, as a weapon.

    But we will not infer design for the paperweight function, or for the weapon function, because for those functions the complexity needed is very low: any solid body with a few generic restrictions will do.

    Instead, the complexity linked to the computer function (a function that our object can certainly implement) is very high: we can certainly infer design for that function that we are observing in the object.
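    (A toy numerical version of this contrast, with invented numbers: FSI is computed per explicitly defined function as -log2 of the fraction of configurations that implement it, so a function that almost anything can perform yields few bits, while a function only a tiny fraction of configurations can perform yields many:)

```python
import math

def fsi_bits(functional_configs: float, total_configs: float) -> float:
    """FSI for one explicitly defined function: -log2(functional fraction)."""
    return -math.log2(functional_configs / total_configs)

# Invented numbers: almost any solid object works as a paperweight,
# while only a vanishing fraction of configurations "runs programs".
print(fsi_bits(10**58, 10**60))   # ~6.6 bits: simple function, no design inference
print(fsi_bits(1, 2**500))        # 500 bits: complex function, design inferred
```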

    Now, please go back to my comment #199, and read it again. For your convenience, I post here the relevant part:

    1) Yes, my definition of FSI does use “a particular intelligent agent and a very specific function”. But it does not depend on them.

    Why? Because any observer can define any function, and FSI for that function can be measured objectively, once the function is objectively and explicitly defined. IOWs, I can measure FSI for any explicitly defined function that the object can implement.

    So, is there an objective FSI for the object? Of course not. But there is an objective FSI for each explicitly defined function that the object can implement.

    Now, please, consider the following point with great attention, because it is extremely important, and not so intuitive:

    If we can show even one single explicitly defined function that the object can implement as defined, and which is complex (see next point for that), we can safely infer that the object is designed.

    Excuse me if I insist: stop a moment and consider seriously the previous statement: it is, indeed, a very strong statement.

    And absolutely true.

    One single complex function implemented by an object is enough to infer design for it.

    Another way to say it is that non designed objects cannot implement complex functions. Never.

    When I say: “Stop a moment”, I really mean it! 🙂

  307. 307
    ET says:

    Weird, the link in comment 306 didn’t work.

    Intelligent Design is NOT Anti-Evolution

    There, much better

  308. 308
    LocalMinimum says:

    Eh, lets make a graphics engine, and have procedural geometry generation that takes parameters and produces geometry data (vertices, textures, etc) and sticks it in buffers to be fed to shader programs running in the GPU.

    Now, the parameters we feed to the procedural geometry generation algorithms would be our DNA; the algorithms themselves would be the DNA translation/structure emergence process; the geometry/texture data would be the physical configuration of the biological system; the shader programs would be physical laws as they relate to biology; and the geometry displayed would be the functionality of the biological system.

    What is generally being spoken of regarding information requirements is which of the procedural geometry generation algorithms/DNA can give rise to certain rendered geometries/biological functions (right?).

    UA’s argument about atom configuration/physical structure conflates the geometry data/physical configuration with the procedural algorithms/DNA. Thinking about it, though, this is a pretty common error. Both can be approached as information, so confusion is readily available.

  309. 309
    kairosfocus says:

    H’mm, it seems the definition of design is up again as an issue. The simplest summary I can give is: intelligently directed configuration, or, if someone does not get the force of “directed,” we may amplify slightly: intelligently, intentionally directed configuration. This phenomenon is a commonplace, including the case of comments or utterances by objectors; that is, the attempted denial or dismissal instantly manifests the phenomenon.

    Going further, we cannot properly restrict the set of possible intelligences to ourselves or our planet or even the observed cosmos, starting with the common factor in these cases: evident or even manifest contingency of being. Bring to bear that a necessary-being world-root is required to answer to why a contingent world is given, since circular cause and a world from utter non-being (which hath not causal power) are both credibly absurd, and we would be well advised to ponder the possibility of an intelligent, intentional, designing necessary-being world-root, given the fine tuning issue.

    The many observable and empirically well-founded signs of design manifest in the world of life (starting with alphanumeric complex coded messages in D/RNA and in associated execution machinery in the cell), joined to the fine tuning of a cosmos that supports such C-Chemistry, aqueous medium, cell based life, suggest a unity of purpose in the evident design of cosmos and biological life. Taken together, these considerations ground a scientific project and movement that investigates, evaluates and publishes findings regarding such signs of design.

    Blend in the issues of design detection and unravelling in cryptography, patterns of design in computing, strategic analysis, forensics and TRIZ, the theory of inventive problem solving (thus also of technological evolution), and we have a wide-ranging zone of relevance. KF

  310. 310
    gpuccio says:

    LocalMinimum:

    Good thoughts.

    In the end, the concept of contingent configurations linked to the implementation of a function is simple enough.

    Contingent configurations are those configurations that are possible according to operating laws.

    Choosing a specific contingent configuration that can implement a desired function is an act of design.

    If we can only observe the object, and not the design process, only the functional complexity, IOWs the utter improbability of the observed functional configuration, can allow a design inference.

    Simple contingent configurations can implement simple functions. But only highly specific contingent configurations can implement complex functions.

    Highly specific contingent and functional configurations are always designed. There is no counter example in the whole known universe.

  311. 311
  312. 312
    gpuccio says:

    KF:

    Thank you. Very good work! 🙂

  313. 313
    ET says:

    Rumrat is over on TSZ not only equivocating but asking us to prove a negative: that we need to prove that evolution cannot produce 500 bits of CSI.

    It isn’t about evolution: see comment 308, and follow and read the essay linked there. And evos are saying that evolution by means of natural selection and drift (blind and mindless processes) produced the diversity of life. That means the onus is on them to demonstrate such a thing. However, they are too pathetic to understand that.

  314. 314
    kairosfocus says:

    ET, the search challenge delivers as close to a disproof as an empirically based, inductive case gets. Searching 1 in 10^60 or worse of a config space (on generous terms) and hoping to find not one but a large number of deeply isolated needles is not going to work. In short, he demands that we infer a statistical miracle in the teeth of the same general sort of statistical challenge that grounds the statistical form of the second law of thermodynamics. KF
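    (A back-of-envelope sketch of the arithmetic behind that search-challenge figure; the resource numbers are assumed, deliberately generous round values for a solar-system-scale search, not exact quantities from the comment above:)

```python
# Assumed, generous search resources: ~10**57 atoms, each testing ~10**13
# configurations per second, for ~10**17 seconds.
space_500_bits = 2 ** 500                 # ~3.3e150 configurations in a 500-bit space
samples = 10**57 * 10**13 * 10**17        # ~1e87 configurations ever examined
print(f"{samples / space_500_bits:.1e}")  # ~3.1e-64 of the space, i.e. "1 in 10^60 or worse"
```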

  315. 315
    LocalMinimum says:

    gpuccio @ 311:

    Thank you. We could extend the illustration by having selectable functionality be analogized by closed volumes/unions of convex polytopes (which could also be selectable by an artist).

    In this case, more complex configurations could be stored in more ways in the geometry buffer, i.e. the more there is to draw, the more ways there are to draw it (in order, if nothing else). However, each additional vertex/draw-order index can be configured to produce far more degenerate geometries (inconsistent winding orders, open shapes/shapes with unenclosed volume).

    Thus, the ratio of configurations that produce clean, properly closed volumes to those that produce half-invisible junk is well below unity for each additional component; hence the relative growth of the configuration space and shrinking of the functional fraction as terms are added.

    We could also knock this back a level of emergence, change the domain/codomain/mapping function from the geometry data(physical config)/rendered volume(function)/shaders(physical law w/r to biological ops) to procedural geometry generation parameters(DNA)/geometry data(physical config)/procedural geometry generation algorithm(chemistry/physics w/r to emergence of DNA encoded processes) and see the same, i.e. that the number of ways to encode a structure may grow, but the functional/non-functional ratio being below unity results in shrinking targets.

    I expect it’s pretty easy to see this shrinkage to be transitive given mapping by both of these functions or their properly ordered composite as a relation. Thus it’s also true, and amplified, when mapping DNA directly to biological function.
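    (A toy numerical companion to this point; the per-component “clean” probability below is invented, not derived from any actual renderer, but it shows how the functional fraction collapses as p**n even while the raw configuration space grows:)

```python
def functional_fraction(p_per_component: float, n_components: int) -> float:
    """Fraction of assemblies that stay functional if each added component
    is 'clean' (closed, consistently wound, etc.) with probability p."""
    return p_per_component ** n_components

for n in (10, 50, 100, 200):
    print(n, functional_fraction(0.5, n))
# 0.5**100 is already ~7.9e-31 and 0.5**200 ~6.2e-61: the target shrinks far
# faster than any feasible search of the growing space can compensate.
```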

  316. 316
    Nonlin.org says:

    gpuccio@307

    Wow! How can I argue with you when you’re burying me under so many big words? 🙂

    Let me try to answer just a few of your points:
    1. Your definition of design might be simple, but we also need to identify design when we cannot observe the agent and his/her “conscious representation”.
    2. Sorry, I did not read Paley’s original argument so can’t comment directly on yours versus his. This is just a summary: “Paley tells of how if one were to find a watch in nature, one would infer a designer because of the structure, order, purpose, and design found in the watch.” I say “structure (=order) is enough” while you seem to say “purpose”.
    3. Determinism has a certain definition everyone knows. Maybe you should use a different word if you mean something else.
    4. And the main disagreement is… your claim: “Design is absolutely different, and distinguishable, from law.”

    4 a. You say: “laws operate without any conscious intervention, as far as we can observe”. What if I design and send into orbit a gizmo with a light that turns On whenever the sun is in sight (powered from solar energy)? Can you see this is a law that operates without any conscious intervention 100 and 1000 years from now?
    b. “Complex functional information” – three words poorly defined and combined to mean something to you, but nothing to me. How can I reply? And “harnessing of specific contingent configurations” doesn’t help.
    c. You: “We don’t need the agent to assess function. ATP synthase can build ATP from a proton gradient in the cell”. Yes, but that seems a mechanism, not the function. In my example above, how do I know when to turn on the light? By detecting the sun’s rays via some mechanism. But the function of the gizmo is likely different and only the designer knows it. And what about my older example of a nonfunctional sculpture of a watch? That’s just esthetic and certainly cannot measure time, but it’s still designed.
    d. I don’t understand what you mean: “how many specific bits (in terms of necessary AA positions) are needed for ATP synthase to work as it works?”
    e. You: “Instead, the complexity linked to the computer function (a function that our object can certainly implement) is very high”. But say you discover this computer circa 1800, so you know nothing about computers. How do you do your analysis? At that time the computer looks like a paperweight at best.
    f. You: “If we can show even one single explicitly defined function that the object can implement as defined, and which is complex (see next point for that), we can safely infer that the object is designed.” Perhaps. But let me guess: you still don’t get any buy-in from the Darwinistas 🙂 They still say “no, what looks like a function to you is just a law of nature”, right?

    Ok. Looks like you have your method and I have mine which is much simpler… and simplicity matters as the “selfish gene” and “natural selection” soundbites show… and I account for designed art while you don’t… and you might believe the laws of nature are never changing under any circumstances, but who the heck am I to tell God: “don’t walk on water because of gravity”?

  317. 317
    ET says:

    1. Your definition of design might be simple, but we also need to identify design when we cannot observe the agent and his/her “conscious representation”.

    We do it all of the time. Did you have a point?

    “Complex functional information” – three words poorly defined and combined to mean something to you, but nothing to me.

    They are only “poorly defined” to the willfully ignorant

  318. 318
    kairosfocus says:

    NonLin: as was discussed repeatedly above and over the years, it is a fairly common challenge to have to identify something as designed without direct access to the designing agent. This is routinely done by applying a type of inductive reasoning often seen in the sciences, inference to the best empirically based explanation. Here, that means establishing reliable signs of design; when such signs are observed, we are warranted to infer design as cause inductively. In this case, various forms of functionally specific, complex organisation and associated information are such signs, backed by a trillion-member observation base and the associated blind-search challenge in configuration spaces. Kindly see the onward thread here: https://uncommondescent.com/intelligent-design/what-is-design-and-why-is-it-relevant/

    To overturn such an inference, one would need an observed counter-example of FSCO/I beyond the relevant thresholds originating by blind chance and/or mechanical necessity. On the trillion-member observation base, that has not been done. All of this accords with Newton’s vera causa principle that explanations should be based on causes seen to be adequate to the effects. Yes, actually observed. The so-called methodological naturalism principle unjustifiably sets this aside and ends up begging the question. KF

    PS: The common objection that cell-based life reproduces does not apply to the root of the tree of life: the origin of the von Neumann, coded-information-using kinematic self-replicator is antecedent to reproduction, and is itself a case of FSCO/I.

    PPS: As a concrete example, notice how functional text is based on particular components arranged in a specific, meaningful order. Likewise, how parts are arranged in any number of systems, including biological as well as technological ones. Disordering that arrangement beyond a narrow tolerance often disrupts function. This is the island of function phenomenon. Such is anything but meaningless.

  319. 319
    Origenes says:

    Nonlin @

    But let me guess: you still don’t get any buy-in from the Darwinistas 🙂 They still say “no, what looks like a function to you is just a law of nature”, right?

    Uh, no. At this forum we have seen a lot of crazy and confused arguments from “Darwinistas”, but never this one — probably because biological functions do not resemble laws of nature at all.

  320. 320
    LocalMinimum says:

    I need to add @ 316 that the procedural geometry algorithms would be primitive and primitive strip building and welding and such. You could of course make procedural generation that only produced convex polytopes; but then you’d be implying that every polymer or mass of tissue that could be encoded by DNA was selectively positive.

  321. 321
    gpuccio says:

    Nonlin.org at #317:

    Wow! How can I argue with you when you’re burying me under so many big words?

    Maybe it’s you who inspire me to write so much! 🙂

    Look, I really like your creativity of thought and you are a very honest discussant. And I agree with many things that you say, but I also strongly disagree with others.

    So, it’s not that I like to contradict you, but when your creative thoughts begin to deny the essence of ID theory, which I deeply believe to be true, I feel that I have to provide my counter-arguments. In the end, I am happy that you keep your ideas, and I will keep mine.

    So, just to clarify what could still be not completely clear:

    1. Of course. But one thing is the definition of design, another thing is the inference of design from an observed object.

    I use consciousness to define design. And I infer design from objective properties of the observed object. Again, you seem to conflate two different concepts.

    2. I say that both order and function can be valid specifications. However, in the case of order we must be extremely careful that order is not simply the result of law (like in the case of an unfair coin which gives a series of heads). Function, instead, when implemented by a specific contingent configuration, has no such limitations.

    Moreover, in biology it’s definitely function that we use to infer design, and not order.

    In the case of the watch, as explained, order of the parts and the function of measuring time can bot