Uncommon Descent Serving The Intelligent Design Community

Defending Intelligent Design theory: Why targets are real targets, probabilities real probabilities, and the Texas Sharp Shooter fallacy does not apply at all.


The aim of this OP is to discuss, in an orderly and reasonably complete way, a few related objections to ID theory, all of them connected to the argument that goes under the name of the Texas Sharp Shooter fallacy, sometimes used as a criticism of ID.

The argument that the TSS fallacy is a valid objection against ID has been presented many times by DNA_Jock, a very good discussant from the other side. So, I will refer in some detail to his arguments, as I understand and remember them. Of course, if DNA_Jock thinks that I am misrepresenting his ideas, I am ready to acknowledge any correction. He can post here, if he wishes, or at TSZ, where he is a contributor.

However, I think that the issues discussed in this OP are of general interest, and that they touch some fundamental aspects of the debate.

As a help to readers, I will sum up the general structure of this OP, which will probably be rather long. I will discuss three different, somewhat related arguments. They are:

a) The application of the Texas Sharp Shooter fallacy to ID, and why that application is completely wrong.

b) The objection of the different possible levels of function definition.

c) The objection of the possible alternative solutions, and of the incomplete exploration of the search space.

Of course, the issue debated here is, as usual, the design inference, and in particular its application to biological objects.

So, let’s go.

a) The Texas Sharp Shooter fallacy and its wrong application to ID.

 

What’s the Texas Sharp Shooter fallacy (TSS)?

It is a logical fallacy. I quote here a brief description of the basic metaphor, from RationalWiki:

The fallacy’s name comes from a parable in which a Texan fires his gun at the side of a barn, paints a bullseye around the bullet hole, and claims to be a sharpshooter. Though the shot may have been totally random, he makes it appear as though he has performed a highly non-random act. In normal target practice, the bullseye defines a region of significance, and there’s a low probability of hitting it by firing in a random direction. However, when the region of significance is determined after the event has occurred, any outcome at all can be made to appear spectacularly improbable.

For our purposes, we will use a scenario where specific targets are apparently shot by a shooter. This is the scenario that best resembles what we see in biological objects, where we can observe a great number of functional structures, in particular proteins, and we try to understand the causes of their origin.

In ID, as is well known, we use functional information as a measure of the improbability of an outcome. The general idea is similar to Paley’s argument for the watch: a very high level of specific functional information in an object is a very reliable marker of design.

But to evaluate functional information in any object, we must first define a function, because the measure of functional information depends on the function defined. The observer must be free to define any possible function, and then measure the linked functional information. Respecting these premises, the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information) for an explicitly defined function (whatever it is), we can safely infer design.

Now, the objection that we are discussing here is that, according to some people (for example DNA_Jock), by defining the function after we have observed the object, as we do in ID theory, we are committing the TSS fallacy. I will show why that is not the case using an example, because examples are clearer than abstract words.

So, in our example, we have a shooter, a wall which is the target of the shooting, and the shooting itself. And we are the observers.

We know nothing of the shooter. But we know that a shooting takes place.

Our problem is:

  1. Is the shooting a random shooting? This is the null hypothesis

or:

  2. Is the shooter aiming at something? This is the “aiming” hypothesis

So, here I will use “aiming” instead of design, because my neo-darwinist readers will probably stay more relaxed. But, of course, aiming is a form of design (a conscious representation outputted to a material system).

Now I will describe three different scenarios, and I will deal in detail with the third.

  1. First scenario: no fallacy.

In this case, we can look at the wall before the shooting. We see that there are 100 targets painted in different parts of the wall, rather randomly, with their beautiful colors (let’s say red and white). By the way, the wall is very big, so the targets are really a small part of the whole wall, even if taken together.

Then, we witness the shooting: 100 shots.

We go again to the wall, and we find that all 100 shots have hit the targets, one per target, and just at the center.

Without any worries, we infer aiming.

I will not compute the probabilities here, because we are not really interested in this scenario.

This is a good example of pre-definition of the function (the targets to be hit). I believe that neither DNA_Jock nor any other discussant will have problems here. This is not a TSS fallacy.

  2. Second scenario: the fallacy.

The same setting as above. However, we cannot look at the wall before the shooting. No pre-specification.

After the shooting, we go to the wall and paint a target around each of the different shots, for a total of 100. Then we infer aiming.

Of course, this is exactly the TSS fallacy.

There is a post-hoc definition of the function. Moreover, the function is obviously built (painted) to correspond to the information in the shots (their location). More on this later.

Again, I will not deal in detail with this scenario because I suppose that we all agree: this is an example of TSS fallacy, and the aiming inference is wrong.

  3. Third scenario: no fallacy.

The same setting as above. Again, we cannot look at the wall before the shooting. No pre-specification.

After the shooting, we go to the wall. This time, however, we don’t paint anything.

But we observe that the wall is made of bricks, small bricks. Almost all the bricks are brown. But there are a few that are green. Just a few. And they are randomly distributed in the wall.

 

 

We also observe that all the 100 shots have hit green bricks. No brown brick has been hit.

Then we infer aiming.

Of course, the inference is correct. No TSS fallacy here.

And yet, we are using a post-hoc definition of function: shooting the green bricks.

What’s the difference with the second scenario?

The difference is that the existence of the green bricks is not something we “paint”: it is an objective property of the wall. And, even if we do use something that we observe post-hoc (the fact that only the green bricks have been shot) to recognize the function post-hoc, we are not using in any way the information about the specific location of each shot to define the function. The function is defined objectively and independently from the contingent information about the shots.

IOWs, we are not saying: well, the shooter was probably aiming at point x1 (coordinates of the first shot) and point x2 (coordinates of the second shot), and so on. We just recognize that the shooter was aiming at the green bricks. An objective property of the wall.

IOWs (I use many IOWs, because I know that this simple concept will meet great resistance in the minds of our neo-darwinist friends), we are not “painting” the function, we are simply “recognizing” it, and using that recognition to define it.

Well, this third scenario is a good model of the design inference in ID. It corresponds very well to what we do in ID when we make a design inference for functional proteins. Therefore, the procedure we use in ID is no TSS fallacy. Not at all.

Given the importance of this model for our discussion, I will try to make it more quantitative.

Let’s say that the wall is made of 10,000 bricks in total.

Let’s say that there are only 100 green bricks, randomly distributed in the wall.

Let’s say that all the green bricks have been hit, and no brown brick.

What are the probabilities of that result if the null hypothesis is true (IOWs, if the shooter was not aiming at anything) ?

The probability of one successful hit (where success means hitting a green brick) is of course 0.01 (100/10000).

The probability of having 100 successes in 100 shots can be computed using the binomial distribution. It is:

10^-200

IOWs, the system exhibits 664 bits of functional information. More or less like the TRIM62 protein, an E3 ligase discussed in my previous OP about the Ubiquitin system, which exhibits an increase of 681 bits of human-conserved functional information at the transition to vertebrates.
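Just to make the arithmetic explicit, here is a minimal sketch in Python (mine, not part of the original argument). The numbers of bricks, green bricks and shots are the ones assumed in the scenario above.

```python
from math import log2

p_green = 100 / 10000   # probability that a random shot hits a green brick (100 green bricks in 10,000)
n_shots = 100

# Binomial probability of 100 successes in 100 random shots: simply p^100
p_all_green = p_green ** n_shots
print(p_all_green)           # 1e-200
print(-log2(p_all_green))    # ~664.4 bits of functional information
```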

Now, let’s stop for a moment for a very important step. I am asking all neo-darwinists who are reading this OP a very simple question:

In the above situation, do you infer aiming?

It’s very important, so I will ask it a second time, a little louder:

In the above situation, do you infer aiming? 

Because if your answer is no, if you still think that the above scenario is a case of TSS fallacy, if you still believe that the observed result is not unlikely, that it is perfectly reasonable under the assumption of a random shooting, then you can stop here: you can stop reading this OP, you can stop discussing ID, at least with me. I will go on with the discussion with the reasonable people who are left.

So, at the end of this section, let’s state once more the truth about post-hoc definitions:

  1. A post-hoc definition that “paints” the function using the information from the specific details of what is observed is never correct. Such definitions are clear examples of the TSS fallacy.
  2. On the contrary, any post-hoc definition that simply recognizes a function which is related to an objectively existing property of the system, and makes no special use of the specific details of what is observed to “paint” the function, is perfectly correct. It is not a case of TSS fallacy.

 

b) The objection of the different possible levels of function definition.

DNA_Jock summed up this specific objection in the course of a long discussion in the thread about the English language:

Well, I have yet to see an IDist come up with a post-specification that wasn’t a fallacy. Let’s just say that you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.

OK, I have just discussed why post-specifications are not in themselves a fallacy. Let’s say that DNA_Jock apparently admits it, because he just says that we have to be very cautious in applying them. I agree with that, and I have explained what the caution should be about.

Of course, I don’t agree that ID’s post-hoc specifications are a fallacy. They are not, not at all.

And I absolutely don’t agree with his argument that one of the reasons why ID’s post-hoc specifications would be a fallacy is that “You can make the probability arbitrarily small by making the specification arbitrarily precise.”

Let’s try to understand why.

So, let’s go back to our example 3), the wall with the green bricks and the aiming inference.

Let’s make our shooter a little less precise: let’s say that, out of 100 shots, only 50 hits are green bricks.

Now, the math becomes:

The probability of one successful hit (where success means hitting a green brick) is still 0.01 (100/10000).

The probability of having 50 successes or more in 100 shots can be computed using the binomial distribution. It is:

6.165016e-72

Now, the system exhibits “only” 236 bits of functional information. Much less than in the previous example, but still more than enough, IMO, to infer aiming.

Consider that five sigma, which is often used as a standard in physics to reject the null hypothesis, corresponds to a probability of about 3×10^-7, less than 22 bits.

Now, DNA_Jock’s objection would be that our post-hoc specification is not valid because “we can make the probability arbitrarily small by making the specification arbitrarily precise”.

But is that true? Of course not.

Let’s say that, in this case, we try to “make the specification arbitrarily more precise”, defining the function of sharp aiming as “hitting only green bricks with all 100 shots”.

Well, we are definitely “making the probability arbitrarily small by making the specification arbitrarily precise”. Indeed, we are making the specification more precise by about 128 orders of magnitude! How smart we are, aren’t we?

But if we do that, what happens?

A very simple thing: the facts that we are observing do not meet the specification anymore!

Because, of  course, the shooter hit only 50 green bricks out of 100. He is smart, but not that smart.

Neither are we smart if we do such a foolish thing, defining a function that is not met by observed facts!

The simple truth is: in our post-hoc specification we cannot at all “make the probability arbitrarily small by making the specification arbitrarily precise”, as DNA_Jock argues, because otherwise our facts would no longer meet our specification, and the specification would be completely useless and irrelevant.

What we can and must do is exactly what is always done in all cases where hypothesis testing is applied in science (and believe me, that happens very often).

We compute the probabilities of observing the effect that we are indeed observing, or a higher one, if we assume the null hypothesis.

That’s why I have said that the probability of “having 50 successes or more in 100 shots” is 6.165016e-72.

This is called a tail probability, in particular the probability of the upper tail. And it’s exactly what is done in science, in most scenarios.
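For completeness, a minimal Python sketch of that upper-tail computation, under the same assumed scenario as above (100 shots, 100 green bricks out of 10,000 bricks):

```python
from math import comb, log2

p = 100 / 10000    # probability that a random shot hits a green brick
n = 100            # number of shots
k = 50             # observed number of green-brick hits

# Upper-tail probability under the null hypothesis of random shooting: P(X >= 50)
tail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
print(tail)             # ~6.2e-72
print(-log2(tail))      # ~236 bits

# For comparison, the one-tailed five-sigma threshold used in physics
five_sigma = 2.87e-7
print(-log2(five_sigma))  # ~21.7 bits
```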

Therefore, DNA_Jock’s argument is completely wrong.

c) The objection of the possible alternative solutions, and of the incomplete exploration of the search space.

c1) The premise

This is certainly the most complex point, because it depends critically on our understanding of protein functional space, which is far from complete.

For the discussion to be in some way complete, I have to present first a very general premise. Neo-darwinists, or at least the best of them, when they understand that they have nothing better to say, usually resort in desperation to a set of arguments related to the functional space of proteins. The reason is simple enough: since the nature and structure of that space are still not well known or understood, it’s easier to equivocate with fallacious reasoning.

Their purpose, in the end, is always to suggest that functional sequences can be much more frequent than we believe. Or at least, that they are much more frequent than IDists believe. Because, if functional sequences are frequent, it’s certainly easier for RV to find them.

The arguments for this imaginary frequency of biological function are essentially of five kinds:

  1. The definition of biological function.
  2. The idea that there are a lot of functional islands.
  3. The idea that functional islands are big.
  4. The idea that functional islands are connected. The extreme form of this argument is that functional islands simply don’t exist.
  5. The idea that the proteins we are observing are only optimized forms that derive from simpler implementations through some naturally selectable ladder of simple steps.

Of course, different mixtures of the above arguments are also frequently used.

OK, let’s get rid of the first, which is rather easy. Of course, if we define extremely simple biological functions, they will be relatively frequent.

For example, the famous Szostak experiment shows that a weak affinity for ATP is relatively common in a random library: about 1 in 10^11 sequences 80 AAs long.

A weak affinity for ATP is certainly a valid definition of a biological function. But it is a function which is at the same time irrelevant and not naturally selectable. Only naturally selectable functions have any interest for the neo-darwinian theory.

Moreover, most biological functions that we observe in proteins are extremely complex. A lot of them have a functional complexity beyond 500 bits.

So, we are only interested in functions in the protein space which are naturally selectable, and we are especially interested in functions that are complex, because those are the ones about which we make a design inference.

The other three points are subtler.

  2. The idea that there are a lot of functional islands.

Of course, we don’t know exactly how many functional islands exist in the protein space, even restricting the concept of function to what was said above. Neo-darwinists hope that there are a lot of them. I think there are many, but not so many.

But the problem, again, is drastically scaled down if we consider that not all functional islands will do. Going back to point 1, we need naturally selectable islands. And what can be naturally selected is much less than what can potentially be functional. A naturally selectable island of function must be able to give a reproductive advantage. In a system that already has high complexity, like any living cell, the number of functions that can be immediately integrated into what already exists is certainly strongly constrained.

This point is also strictly connected to the other two points, so I will go on with them and then attempt a synthesis.

  3. The idea that functional islands are big.

Of course, functional islands can be of very different sizes. That depends on how many sequences, related at sequence level (IOWs, that are part of the same island), can implement the function.

Measuring functional information in a sequence by conservation, as in the Durston method or in the procedure I have described many times, is an indirect way of measuring the size of a functional island. The greater the functional complexity of an island, the smaller its size in the search space.

Now, we must remember a few things. Let’s take as an example an extremely conserved but not too long sequence, our friend ubiquitin. It’s 76 AAs long. Therefore, the associated search space is 20^76: 328 bits.

Of course, even the ubiquitin sequence can tolerate some variation, but it is still one of the most conserved sequences in evolutionary history. Let’s say, for simplicity, that at least 70 AAs are strictly conserved, and that 6 can vary freely (of course, that’s not exact, just an approximation for the sake of our discussion).

Therefore, using the absolute information potential of 4.3 bits per amino acid, we have:

Functional information in the sequence = 303 bits

Size of the functional island = 328 – 303 = 25 bits

Now, a functional island of 25 bits is not exactly small: it corresponds to about 33.5 million sequences.

But it is infinitely tiny if compared to the search space of 328 bits:  7.5 x 10^98 sequences!

If the sequence is longer, the relationship between island space and search space (the ocean where the island is placed) becomes much worse.

The beta chain of ATP synthase (529 AAs), another old friend, exhibits 334 identities between E. coli and humans. Again for the sake of simplicity, let’s consider that about 300 AAs are strictly conserved, and let’s ignore the functional constraint on all the other AA sites. That gives us:

Search space = 20^529 = 2286 bits

Functional information in the sequence = 1297 bits

Size of the functional island =  2286 – 1297 = 989 bits

So, with this computation, there could be about 10^297 sequences that can implement the function of the beta chain of ATP synthase. That seems a huge number (indeed, it’s definitely an overestimate, but I always try to be generous, especially when discussing a very general principle). However, now the functional island is 10^390 times smaller than the ocean, while in the case of ubiquitin it was “just” 10^91 times smaller.

IOWs, the search space (the ocean) grows exponentially much more quickly than the target space (the functional island) as the length of the functional sequence increases, provided of course that the sequence retains high functional information.

The important point is not the absolute size of the island, but its ratio to the vastness of the ocean.

So, the beta chain of ATP synthase is really a tiny, tiny island, much smaller than ubiquitin.
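The arithmetic above can be written out explicitly. This is only an illustrative Python sketch of the rough approximation used here (conserved positions treated as fully fixed, all other positions as free, log2(20) ≈ 4.32 bits per position); it is not the full conservation-based method.

```python
from math import log2, log10

BITS_PER_AA = log2(20)   # ~4.32 bits of potential information per amino acid position

def island_numbers(length, conserved):
    """Rough model: 'conserved' positions are fully fixed, the rest completely free."""
    search_space = length * BITS_PER_AA          # the "ocean", in bits
    functional_info = conserved * BITS_PER_AA    # FI of the sequence, in bits
    island_size = search_space - functional_info # size of the island, in bits
    ratio_magnitude = functional_info * log10(2) # island is ~10^-this fraction of the ocean
    return search_space, functional_info, island_size, ratio_magnitude

# Ubiquitin: ~70 of 76 positions conserved
print(island_numbers(76, 70))    # ~328.5, ~302.5, ~25.9 bits; island ~10^91 times smaller
# Beta chain of ATP synthase: ~300 of 529 positions conserved
print(island_numbers(529, 300))  # ~2286, ~1297, ~990 bits; island ~10^390 times smaller
```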

Now, what would be a big island? It’s simple: a functional island which can implement the same function at the same level, but with low functional information. The lower the functional information, the bigger the island.

Are there big islands? For simple functions, certainly yes. Behe cites the antifreeze protein as an example. It has rather low FI.

But are there big islands for complex functions, like that of the ATP synthase beta chain? It’s absolutely reasonable to believe that there are none. Because the function here is very complex, and it cannot be implemented by a simple sequence, exactly as a functional spreadsheet application cannot be written with a few bits of source code. Neo-darwinists will say that we don’t know that for certain. It’s true, we don’t know it for certain. But we know it almost for certain.

The simple fact remains: the only example of the beta chain of the F1 complex of ATP synthase that we know of is extremely complex.

Let’s go, for the moment, to the 4th argument.

  4. The idea that functional islands are connected. The extreme form of this argument is that functional islands simply don’t exist.

This is easier. We have a lot of evidence that functional islands are not connected, and that they are indeed islands, widely isolated in the search space of possible sequences. I will mention the two best evidences:

4a) All the functional proteins that we know of, those that exist in all the proteomes we have examined, are grouped in about 2000 superfamilies. By definition, a protein superfamily is a cluster of sequences that have:

  • no sequence similarity
  • no structure similarity
  • no function similarity

with all the other groups.

IOWs, islands in the sequence space.

4b) The best (and probably the only) good paper reporting an experiment where Natural Selection is really tested by an appropriate simulation is the rugged landscape paper:

Experimental Rugged Fitness Landscape in Protein Sequence Space

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0000096

Here, NS is correctly simulated in a phage system, because what is measured is infectivity, which in phages is of course strictly related to fitness.

The function studied is the retrieval of infectivity that had been partially damaged by replacing part of a protein linked to infectivity with a random sequence.

In brief, the results show a rugged landscape of protein function, where random variation and NS can rather easily find some low-level peaks of function, while the original wild-type, optimal peak of function cannot realistically be found, not only in the lab simulation, but in any realistic natural setting. I quote from the conclusions:

The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness.

I would recommend having a look at Fig. 5 in the paper to get an idea of what a rugged landscape is.

However, I will happily accept a suggestion from DNA_Jock, made in one of his recent comments at TSZ about my Ubiquitin thread, and with which I fully agree. I quote him:

To understand exploration one, we have to rely on in vitro evolution experiments such as Hayashi et al 2006 and Keefe & Szostak, 2001. The former also demonstrates that explorations one and two are quite different. Gpuccio is aware of this: in fact it was he who provided me with the link to Hayashi – see here.
You may have heard of hill-climbing algorithms. Personally, I prefer my landscapes inverted, for the simple reason that, absent a barrier, a population will inexorably roll downhill to greater fitness. So when you ask:

How did it get into this optimized condition which shows a highly specified AA sequence?

I reply
It fell there. And now it is stuck in a crevice that tells you nothing about the surface whence it came. Your design inference is unsupported.

Of course, I don’t agree with the last phrase. But I fully agree that we should think of local optima as “holes”, and not as “peaks”. That is the correct way.

So, the protein landscape is more like a ball-and-holes game, but without a guiding labyrinth: as long as the ball is on the flat plane (non-functional sequences), it can go in any direction, freely. However, when it falls into a hole, it will quickly go to the bottom, and most likely it will remain there.

 

 

But:

  • The holes are rare, and they are of different sizes
  • They are distant from one another
  • The same function can be implemented by different, distant holes, of different sizes

What does the rugged landscape paper tell us?

  • That the wildtype function that we observe in nature is an extremely small hole. To find it by RV and NS, according to the authors, we should start with a library of 10^70 sequences.
  • That there are other bigger holes which can partially implement some function retrieval, and that are in the range of reasonable RV + NS
  • That those simpler solutions are not bridges to the optimal solution observed in the wildtype. IOWs, they are different, and there is no “ladder” that NS can use to reach the optimal solution.

Indeed, falling into a bigger hole (a much bigger hole, indeed) is a severe obstacle to finding the tiny hole of the wildtype. Finding it is already almost impossible because it is so tiny, and it becomes even less likely if the ball falls into a big hole, because it will be trapped there by NS.

Therefore, to sum up, both the existence of 2000 isolated protein superfamilies and the evidence from the rugged landscape paper demonstrate that functional islands exist, and that they are isolated in the sequence space.

Let’s go now to the 5th argument:

  5. The idea that the proteins we are observing are only optimized forms that derive from simpler implementations by a naturally selectable ladder.

This is derived from the previous argument. If bigger functional holes do exist for a function (IOWs, simpler implementations), and they are definitely easier to find than the optimal solution we observe, why not believe that the simpler solutions were found first, and then opened the way to the optimal solution by a process of gradual optimization and natural selection of the steps? IOWs, a naturally selectable ladder?

And the answer is: because that is impossible, and all the evidence we have is against that idea.

First of all, even if we know that simpler implementations do exist in some cases (see the rugged landscape paper), it is not at all obvious that they exist as a general rule.

Indeed, the rugged landscape experiment is a very special case, because it is about the retrieval of a function that had been only partially impaired, by replacing part of an already existing, functional protein with a random sequence.

The reason for that is that, if they had completely knocked out the protein, infectivity, and therefore survival itself, would have been lost, and NS could not have acted at all.

In function-retrieval cases, where the function is retained, even if at a reduced level, the task of NS is much easier: the function is already there, and can be optimized with a few naturally selectable steps.

And that is what happens in the case of the Hayashi paper. But the function is retrieved only very partially, and, as the authors say, there is no reasonable way to find the wildtype sequence, the optimal sequence, in that way. Because the optimal sequence would require, according to the authors, 35 AA substitutions, and a starting library of 10^70 random sequences.

What is equally important is that the holes found in the experiment are not connected to the optimal solution (the wildtype). They are different from it at sequence level.

IOWs, these bigger holes do not lead to the optimal solution. Not at all.

So, we have a strange situation: 2000 protein superfamilies, and thousands and thousands of proteins in them, that appear to be, in most cases, extremely functional, probably absolutely optimal. But we have absolutely no evidence that they have been “optimized”. They are optimal, but not necessarily optimized.

Now, I am not excluding that some optimization can take place in non-design systems: we have good examples of that in the few known microevolutionary cases. But that optimization is always extremely short, just a few AA substitutions once the starting functional island has been found, and the function must already be there.

So, let’s say that if the extremely tiny functional island where our optimal solution lies, for example the wildtype island in the rugged landscape experiment, can be found in some way, then some small optimization inside that functional island could certainly take place.

But first, we have to find that island: and for that we need 35 specific AA substitutions (about 150 bits), and 10^70 starting sequences, if we go by RV + NS. Practically impossible.

But there is more. Do those simpler solutions always exist? I will argue that it is not so in the general case.

For example, in the case of the alpha and beta chains of the F1 subunit of ATP synthase, there is no evidence at all that simpler solutions exist. More on that later.

So, to sum it up:

The ocean of the search space, according to the reasoning of neo-darwinists, should be overflowing with potential naturally selectable functions. This is not true, but let’s assume for a moment, for the sake of discussion, that it is.

But, as we have seen, simpler functions or solutions, when they exist, are much bigger functional islands than the extremely tiny functional islands corresponding to solutions with high functional complexity.

And yet, we have seen that there is absolutely no evidence that simpler solutions, when they exist, are bridges, or ladders, to highly complex solutions. Indeed, there is good evidence of the contrary.

Given those premises, what would you expect if the neo-darwinian scenario were true? It’s rather simple: a universal proteome overflowing with simple functional solutions.

Instead, what do we observe? It’s rather simple: a universal proteome overflowing with highly functional, probably optimal, solutions.

IOWs, we find in the existing proteome almost exclusively highly complex solutions, and not simple solutions.

The obvious conclusion? The neo-darwinist scenario is false. The highly functional, optimal solutions that we observe can only be the result of intentional and intelligent design.

c2) DNA_Jock’s arguments

Now I will consider in more detail DNA_Jock’s two arguments about alternative solutions and the partial exploration of the protein space, and I will explain why they are only variants of what I have already discussed, and therefore not valid.

The first argument, that we can call “the existence of alternative solutions”, can be traced to this statement by DNA_Jock:

Every time an IDist comes along and claims that THIS protein, with THIS degree of constraint, is the ONLY way to achieve [function of interest], subsequent events prove them wrong. OMagain enjoys laughing about “the” bacterial flagellum; John Walker and Praveen Nina laugh about “the” ATPase; Anthony Keefe and Jack Szostak laugh about ATP-binding; now Corneel and I are laughing about ubiquitin ligase: multiple ligases can ubiquinate a given target, therefore the IDist assumption is false. The different ligases that share targets ARE “other peaks”.
This is Texas Sharp Shooter.

We will debate the laugh later. For the moment, let’s see what the argument states.

It says: the solution we are observing is not the only one. There can be others, and in some cases we know there are others. Therefore, your computation of probabilities, and therefore of functional information, is wrong.

Another way to put it is to ask the question: “how many needles are there in the haystack?”

Alan Fox seems to prefer this metaphor:

This is what is wrong with “Islands-of-function” arguments. We don’t know how many needles are in the haystack. G Puccio doesn’t know how many needles are in the haystack. Evolution doesn’t need to search exhaustively, just stumble on a useful needle.

They both seem to agree about the “stumbling”. DNA_Jock says:

So when you ask:

How did it get into this optimized condition which shows a highly specified AA sequence?

I reply
It fell there. And now it is stuck in a crevice that tells you nothing about the surface whence it came.

OK, I think the idea is clear enough. It is essentially the same idea as in point 2 of my general premise. There are many functional islands. In particular, in this form, many functional islands for the same function.

I will answer it in two parts:

  • Is it true that the existence of alternative solutions, if they exist, makes the computation of functional complexity wrong?
  • Have we really evidence that alternative solutions exist, and of how frequent they can really be?

I will discuss the first part here, and say something about the second part later in the OP.

Let’s read again the essence of the argument, as summed up by me above:

” The solution we are observing is not the only one. There can be others, in some cases we know there are others. Therefore, your computation of probabilities, and therefore of functional information, is wrong.”

As it happens with smart arguments (and DNA_Jock is usually smart), it contains some truth, but is essentially wrong.

The truth could be stated as follows:

” The solution we are observing is not the only one. There can be others, in some cases we know there are others. Therefore, our computation of probabilities, and therefore of functional information, is not completely precise, but it is essentially correct”.

To see why that is the case, let’s use again a very good metaphor: Paley’s old watch. That will help to clarify my argument, and then I will discuss how it relates to proteins in particular.

So, we have a watch. Whose function is to measure time. And, in general, let’s assume that we infer design for the watch, because its functional information is high enough to exclude that it could appear in any non design system spontaneously. I am confident that all reasonable people will agree with that. Anyway, we are assuming it for the present discussion.

 

 

Now, after having made a design inference (a perfectly correct inference, I would say) for this object, we have a sudden doubt. We ask ourselves: what if DNA_Jock is right?

So, we wonder: are there other solutions to measure time? Are there other functional islands in the search space of material objects?

Of course there are.

I will just mention four clear examples: a sundial, an hourglass, a digital clock, and an atomic clock.

The sundial uses the position of the sun. The hourglass uses a trickle of sand. The digital clock uses an electronic oscillator that is regulated by a quartz crystal to keep time. An atomic clock uses an electron transition frequency in the microwave, optical, or ultraviolet region.

None of them uses gears or springs.

Now, two important points:

  • Even if the functional complexity of the five above-mentioned solutions is probably rather different (the sundial and the hourglass are probably much simpler, and the atomic clock is probably the most complex), they are all rather complex. None of them would be easily explained without a design inference. IOWs, they are small functional islands, each of them. Some are bigger, some are really tiny, but none of them is big enough to allow a random origin in a non-design system.
  • None of the four additional solutions mentioned would be, in any way, a starting point to get to the traditional watch by small functional modifications. Why? Because they are completely different solutions, based on different ideas and plans.

If someone believes differently, he can try to explain in some detail how we can get to a traditional watch starting from an hourglass.

 

 

Now, an important question:

Does the existence of the four mentioned alternative solutions, or maybe of other possible similar solutions, make the design inference for the traditional watch less correct?

The answer, of course, is no.

But why?

It’s simple. Let’s say, just for the sake of discussion, that the traditional watch has a functional complexity of 600 bits. There are at least 4 additional solutions. Let’s say that each of them has, again, a comparable functional complexity, of about 600 bits.

How much does that change the probability of getting the watch?

The answer is: about 2 bits (because we now have a handful of solutions instead of one). So, the functional complexity is now about 598 bits.

But, of course, there could be many more solutions. Let’s say 1000. Now the functional complexity would be about 590 bits. Let’s say one million different complex solutions (this is becoming generous, I would say): 580 bits. One billion? 570 bits.

Shall I go on?

When the search space is really huge, the number of really complex solutions is empirically irrelevant to the design inference. One observed complex solution is more than enough to infer design. Correctly.
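A minimal sketch of this arithmetic (Python, illustrative only; the 600 bits and the counts of alternative solutions are the assumptions used above):

```python
from math import log2

watch_bits = 600   # assumed functional complexity of the traditional watch (as above)

# If there were N alternative solutions of comparable complexity, the chance of
# hitting *some* solution grows roughly by a factor of N, so the functional
# complexity of the observed result drops only by log2(N) bits.
for n_solutions in (1, 5, 1000, 10**6, 10**9):
    print(n_solutions, round(watch_bits - log2(n_solutions)))
    # 1 -> 600, 5 -> 598, 1000 -> 590, 10**6 -> 580, 10**9 -> 570
```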

We could call this argument: “How many needles do you need to transform a haystack into a needlestack?” And the answer is: really a lot of them.

Our poor 4 alternative solutions will not do the trick.

But what if there are a number of functional islands that are much bigger, much more likely? Let’s say 50-bit functional islands. Much simpler solutions. Let’s say 4 of them. That would make the scenario more credible. Not so much, probably, but certainly it would work better than the 4 complex solutions.

OK, I have already discussed that above, but let’s say it again. Let’s say that you have 4 (or more) 50-bit solutions, and one (or more) 500-bit solution. But what you observe as a fact is the 500-bit solution, and none of the 50-bit solutions. Is that credible?

No, it isn’t. Do you know how much smaller a 500-bit solution is compared to a 50-bit solution? It’s 2^450 times smaller: about 10^135 times smaller. We are dealing with exponential values here.

So, if much simpler solutions existed, we would expect to observe one of them, and not certainly a solution that is 10^135 times more unlikely. The design inference for the highly complex solution is not disturbed in any way by the existence of much simpler solutions.
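Again, just to make the exponential scale explicit, a short sketch under the same assumptions (a handful of 50-bit islands and one 500-bit island):

```python
from math import log2

p_simple  = 2.0 ** -50    # chance of hitting one 50-bit island at random
p_complex = 2.0 ** -500   # chance of hitting the 500-bit island at random

print(p_simple / p_complex)        # ~2.9e135: each simple island is that much easier to hit
print(log2(p_simple / p_complex))  # 450.0 bits of difference

# With four simple islands and one complex island, the probability that a random
# "success" lands on the complex island rather than on one of the simple ones:
print(p_complex / (4 * p_simple + p_complex))   # ~8.6e-137, essentially zero
```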

OK, I think that the idea is clear enough.

c3) The laughs

As already mentioned, the issue of alternative solutions and uncounted needles seems to be a special source of hilarity for DNA_Jock.  Good for him (a laugh is always a good thing for physical and mental health). But are the laughs justified?

I quote here again his comment about the laughs, that I will use to analyze the issues.

Every time an IDist comes along and claims that THIS protein, with THIS degree of constraint, is the ONLY way to achieve [function of interest], subsequent events prove them wrong. OMagain enjoys laughing about “the” bacterial flagellum; John Walker and Praveen Nina laugh about “the” ATPase; Anthony Keefe and Jack Szostak laugh about ATP-binding; now Corneel and I are laughing about ubiquitin ligase: multiple ligases can ubiquinate a given target, therefore the IDist assumption is false. The different ligases that share targets ARE “other peaks”.

I will not consider the bacterial flagellum, that has no direct relevance to the discussion here. I will analyze, instead, the other three laughable issues:

  • Szostak and Keefe’s ATP binding protein
  • ATP synthase (rather than ATPase)
  • E3 ligases

Szostak and Keefe should not laugh at all, if they ever did. I have already discussed their paper many times. It’s a paper about directed evolution which generates a strongly ATP-binding protein from a weakly ATP-binding protein present in a random library. It is directed evolution by mutation and artificial selection. The important point is that neither the original weakly binding protein nor the final strongly binding protein is naturally selectable.

Indeed, a protein that just binds ATP is of course of no utility in a cellular context. Evidence of this obvious fact can be found here:

A Man-Made ATP-Binding Protein Evolved Independent of Nature Causes Abnormal Growth in Bacterial Cells

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0007385

There is nothing to laugh about here: the protein is a designed protein, and anyway it is no functional peak/hole at all in the sequence space, because it cannot be naturally selected.

Let’s go to ATP synthase.

DNA_Jock had already remarked:

They make a second error (as Entropy noted) when they fail to consider non-traditional ATPases (Nina et al).

And he gives the following link:

Highly Divergent Mitochondrial ATP Synthase Complexes in Tetrahymena thermophila

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2903591/

And, of course, he laughs with Nina (supposedly).

OK. I have already discussed that the existence of one or more highly functional, but different, solutions for ATP synthesis would not change the ID inference at all. But is it really true that there are these other solutions?

Yes and no.

As far as my personal argument is concerned, the answer is definitely no (or at least, there is no evidence of them). Why?

Because my argument, repeated for years, has always been based (everyone can check) on the alpha and beta chains of ATP synthase, the main constituents of the F1 subunit, where the true catalytic function is implemented.

To be clear, ATP synthase is a very complex molecule, made of many different chains and of two main multiprotein subunits. I have always discussed only the alpha and beta chains, because those are the chains that are really highly conserved, from prokaryotes to humans.

The other chains are rather conserved too, but much less. So, I have never used them for my argument. I have never presented blast values regarding the other chains, or made any inference about them. This can be checked by everyone.

Now, the Nina paper is about a different solution for ATP synthase that can be found in some single-celled eukaryotes.

I quote here the first part of the abstract:

The F-type ATP synthase complex is a rotary nano-motor driven by proton motive force to synthesize ATP. Its F1 sector catalyzes ATP synthesis, whereas the Fo sector conducts the protons and provides a stator for the rotary action of the complex. Components of both F1 and Fo sectors are highly conserved across prokaryotes and eukaryotes. Therefore, it was a surprise that genes encoding the a and b subunits as well as other components of the Fo sector were undetectable in the sequenced genomes of a variety of apicomplexan parasites. While the parasitic existence of these organisms could explain the apparent incomplete nature of ATP synthase in Apicomplexa, genes for these essential components were absent even in Tetrahymena thermophila, a free-living ciliate belonging to a sister clade of Apicomplexa, which demonstrates robust oxidative phosphorylation. This observation raises the possibility that the entire clade of Alveolata may have invented novel means to operate ATP synthase complexes.

Emphasis mine.

As everyone can see, it is absolutely true that these protists have a different, alternative form of ATP synthase: it is based on a similar, but certainly divergent, architecture, and it uses some completely different chains. Which is certainly very interesting.

But this difference does not involve the sequence of the alpha and beta chains in the F1 subunit.

Beware: the a and b subunits mentioned above in the paper are not the alpha and beta chains.

From the paper:

The results revealed that Spot 1, and to a lesser extent, spot 3 contained conventional ATP synthase subunits including α, β, γ, OSCP, and c (ATP9)

IOWs, the “different” ATP synthase uses the same “conventional” forms of alpha and beta chain.

To be sure of that, I have, as usual, blasted them against the human forms. Here are the results:

ATP synthase subunit alpha, Tetrahymena thermophila, (546 AAs) Uniprot Q24HY8, vs  ATP synthase subunit alpha, Homo sapiens, 553 AAs (P25705)

Bitscore: 558 bits     Identities: 285    Positives: 371

ATP synthase subunit beta, Tetrahymena thermophila, (497 AAs) Uniprot I7LZV1, vs  ATP synthase subunit beta, Homo sapiens, 529 AAs (P06576)

Bitscore: 729 bits     Identities: 357     Positives: 408

These are the same, old, conventional sequences that we find in all organisms, the only sequences that I have ever used for my argument.

Therefore, for these two fundamental sequences, we have no evidence at all of any alternative peaks/holes. Which, if they existed, would however be irrelevant, as already discussed.

Not much to laugh about.

Finally, E3 ligases. DNA_Jock is ready to laugh about them because of this very good paper:

Systematic approaches to identify E3 ligase substrates

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5103871/

His idea, shared with other TSZ guys, is that the paper demonstrates that E3 ligases are not specific proteins, because the same substrate can bind to more than one E3 ligase.

The paper says:

Significant degrees of redundancy and multiplicity. Any particular substrate may be targeted by multiple E3 ligases at different sites, and a single E3 ligase may target multiple substrates under different conditions or in different cellular compartments. This drives a huge diversity in spatial and temporal control of ubiquitylation (reviewed by ref. [61]). Cellular context is an important consideration, as substrate–ligase pairs identified by biochemical methods may not be expressed or interact in the same sub-cellular compartment.

I have already commented elsewhere (in the Ubiquitin thread) that the fact that a substrate can be targeted by multiple E3 ligases at different sites, or in different sub-cellular compartments, is clear evidence of complex specificity. IOWs, it’s not that two or more E3 ligases bind the same target just to do the same thing: they bind the same target in different ways and in different contexts to do different things. The paper, even if very interesting, is only about detecting affinities, not function.

That should be enough to stop the laughs. However, I will add another simple concept. If E3 ligases were really redundant in the sense suggested by DNA_Jock and friends, their loss of function should not be a serious problem for us. OK, I will just quote a few papers (not many, because this OP is already long enough):

The multifaceted role of the E3 ubiquitin ligase HOIL-1: beyond linear ubiquitination.

https://www.ncbi.nlm.nih.gov/pubmed/26085217

HOIL-1 has been linked with antiviral signaling, iron and xenobiotic metabolism, cell death, and cancer. HOIL-1 deficiency in humans leads to myopathy, amylopectinosis, auto-inflammation, and immunodeficiency associated with an increased frequency of bacterial infections.

WWP1: a versatile ubiquitin E3 ligase in signaling and diseases.

https://www.ncbi.nlm.nih.gov/pubmed/22051607

WWP1 has been implicated in several diseases, such as cancers, infectious diseases, neurological diseases, and aging.

RING domain E3 ubiquitin ligases.

https://www.ncbi.nlm.nih.gov/pubmed/19489725

RING-based E3s are specified by over 600 human genes, surpassing the 518 protein kinase genes. Accordingly, RING E3s have been linked to the control of many cellular processes and to multiple human diseases. Despite their critical importance, our knowledge of the physiological partners, biological functions, substrates, and mechanism of action for most RING E3s remains at a rudimentary stage.

HECT-type E3 ubiquitin ligases in nerve cell development and synapse physiology.

https://www.ncbi.nlm.nih.gov/pubmed/25979171

The development of neurons is precisely controlled. Nerve cells are born from progenitor cells, migrate to their future target sites, extend dendrites and an axon to form synapses, and thus establish neural networks. All these processes are governed by multiple intracellular signaling cascades, among which ubiquitylation has emerged as a potent regulatory principle that determines protein function and turnover. Dysfunctions of E3 ubiquitin ligases or aberrant ubiquitin signaling contribute to a variety of brain disorders like X-linked mental retardation, schizophrenia, autism or Parkinson’s disease. In this review, we summarize recent findings about molecular pathways that involve E3 ligases of the Homologous to E6-AP C-terminus (HECT) family and that control neuritogenesis, neuronal polarity formation, and synaptic transmission.

Finally I would highly recommend the following recent paper to all who want to approach seriously the problem of specificity in the ubiquitin system:

Specificity and disease in the ubiquitin system

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5264512/

Abstract

Post-translational modification (PTM) of proteins by ubiquitination is an essential cellular regulatory process. Such regulation drives the cell cycle and cell division, signalling and secretory pathways, DNA replication and repair processes and protein quality control and degradation pathways. A huge range of ubiquitin signals can be generated depending on the specificity and catalytic activity of the enzymes required for attachment of ubiquitin to a given target. As a consequence of its importance to eukaryotic life, dysfunction in the ubiquitin system leads to many disease states, including cancers and neurodegeneration. This review takes a retrospective look at our progress in understanding the molecular mechanisms that govern the specificity of ubiquitin conjugation.

Concluding remarks

Our studies show that achieving specificity within a given pathway can be established by specific interactions between the enzymatic components of the conjugation machinery, as seen in the exclusive FANCL–Ube2T interaction. By contrast, where a broad spectrum of modifications is required, this can be achieved through association of the conjugation machinery with the common denominator, ubiquitin, as seen in the case of Parkin. There are many outstanding questions to understanding the mechanisms governing substrate selection and lysine targeting. Importantly, we do not yet understand what makes a particular lysine and/or a particular substrate a good target for ubiquitination. Subunits and co-activators of the APC/C multi-subunit E3 ligase complex recognize short, conserved motifs (D [221] and KEN [222] boxes) on substrates leading to their ubiquitination [223–225]. Interactions between the RING and E2 subunits reduce the available radius for substrate lysines in the case of a disordered substrate [226]. Rbx1, a RING protein integral to cullin-RING ligases, supports neddylation of Cullin-1 via a substrate-driven optimization of the catalytic machinery [227], whereas in the case of HECT E3 ligases, conformational changes within the E3 itself determine lysine selection [97]. However, when it comes to specific targets such as FANCI and FANCD2, how the essential lysine is targeted is unclear. Does this specificity rely on interactions between FA proteins? Are there inhibitory interactions that prevent modification of nearby lysines? One notable absence in our understanding of ubiquitin signalling is a ‘consensus’ ubiquitination motif. Large-scale proteomic analyses of ubiquitination sites have revealed the extent of this challenge, with seemingly no lysine discrimination at the primary sequence level in the case of the CRLs [228]. Furthermore, the apparent promiscuity of Parkin suggests the possibility that ubiquitinated proteins are the primary target of Parkin activity. It is likely that multiple structures of specific and promiscuous ligases in action will be required to understand substrate specificity in full.

To conclude, a few words about the issue of the sequence space not being entirely traversed.

We have 2000 protein superfamilies that are completely unrelated at the sequence level. That is evidence that functional protein sequences are not bound to any particular region of the sequence space.

Moreover, neutral variation in non-coding and non-functional sequences can go in any direction, without any specific functional constraint. I suppose that neo-darwinists would recognize that part of the genome is non-functional, wouldn’t they? And we have already seen elsewhere (in the ubiquitin thread discussion) that many new genes arise from non-coding sequences.

So, there is no reason to believe that the functional space has not been traversed. But, of course, neutral variation can traverse it only at very low resolution.

IOWs, there is no reason that any specific part of the sequence space is hidden from RV. But of course, the low probabilistic resources of RV can traverse different parts of the sequence space only occasionally.

It’s like having a few balls that can move freely on a plane, and occasionally fall into a hole. If the balls are really few and the plane is extremely big, the balls can potentially traverse all the regions of the plane, but they will pass through only a very limited number of possible trajectories. That’s why finding a very small hole will be almost impossible, wherever it is. And there is no reason to believe that small functional holes are not scattered throughout the sequence space, as protein superfamilies clearly show.

So, it’s not true that highly functional proteins are hidden in some unexplored treasure trove of the sequence space. They are there to be found, in different and distant parts of the sequence space, but it is almost impossible to find them through a random walk, because they are so small.
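To put rough numbers on this intuition, here is a small illustrative sketch (Python; the total number of attempts is an assumed, deliberately generous figure of mine, not a value from this OP):

```python
# Suppose a random search makes T independent attempts, and a functional island
# occupies a fraction 2^-b of the sequence space (b bits of functional information).
# The chance of ever landing on that island is then about min(1, T * 2^-b).

def hit_chance(bits, attempts):
    return min(1.0, attempts * 2.0 ** -bits)

T = 10 ** 43   # assumed, generous number of total attempts (illustrative only)
print(hit_chance(50, T))    # 1.0      -> a 50-bit hole is found essentially for certain
print(hit_chance(500, T))   # ~3e-108  -> a 500-bit hole is effectively out of reach
```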

And yet, 2000 highly functional superfamilies are there.

Moreover, the rate of appearance of new superfamilies is highest at the beginning of natural history (for example in LUCA), when a smaller part of the sequence space is likely to have been traversed, and it decreases constantly, becoming extremely low in the last hundreds of millions of years. That’s not what you would expect if the problem of finding new functional islands were due to how much sequence space has been traversed, and if the sequence space were really so overflowing with potential naturally selectable functions, as neo-darwinists like to believe.

OK, that’s enough. As expected, this OP is very long. However, I think it was important to discuss all these partially related issues in the same context.

 

Comments
gpuccio, May the best argument win! And may the one that's wrong admit it! Nonlin.org
Nonlin.org: "So, let’s debate reality and facts!" Been there, done that. "Btw, you are much closer to Darwin than I am" I have no problems in being "close" to Darwin, or anyone else, in the things about which they are right. "Don’t take my feedback as an attack" I don't. "but as a chance to improve your arguments" I always do that with all my interlocutors. Once. Maybe twice. "And I DO want you to succeed! " I just want to find what is true. "Heck, I don’t mind our Theistic Evolutionists friends succeeding if it turns out they are right and I am wrong." Should I bet? You are both wrong! :) Frankly, I appreciate your goodwill and honesty, but I don't believe we can have an interesting intellectual confrontation. We have tried. There is nothing to add. gpuccio
gpuccio, So, let's debate reality and facts! Btw, you are much closer to Darwin than I am since you accept Darwin's "natural selection" as true, despite the overwhelming evidence against: http://nonlin.org/natural-selection/ Don't take my feedback as an attack but as a chance to improve your arguments (feel free to return the favor). And I DO want you to succeed! Heck, I don't mind our Theistic Evolutionists friends succeeding if it turns out they are right and I am wrong. So bill cole on UD is colewd on TMZ! Good to know, but why the different name? Nonlin.org
gpuccio
That argument is not in any way countered by Ewert’s ideas. Indeed, it is perfectly compatible with Ewert’s ideas, but it requires the addition of another important component: common descent. But it is perfectly possible to limit the idea of common descent to the common descent of modules.
Exactly. The idea is not common descent vs. common design, all or nothing; it is a better way to look at the data using design principles. Your ideas here are great and delivered so promptly :-) bill cole
gpuccio "He certainly shares with them a strong, obstinate and almost obsessive dislike for reality and facts." He has got the TSZ guys running in circles. He pulled out a quote from UC Berkeley that UCD was a working assumption :-) bill cole
bill cole: I think that our friend Nonlin.org would certainly be a perfect neo-darwinist, had his personal idiosyncrasies crystallized differently. He certainly shares with them a strong, obstinate and almost obsessive dislike for reality and facts. gpuccio
bill cole @480 I am proposing a better argument because I find Dembski's and gpuccio's arguments lacking as shown. Also, not all Darwinist claims are equally absurd. In this case, "absence of evidence is not evidence of absence". When gpuccio states: "We have a lot of evidence that functional islands are not connected, and that they are indeed islands", the "linked by yet unknown, uncreated, eternal and universal scientific laws" counterargument is actually quite reasonable. Nonlin.org
bill cole: I have read the initial part of Ewert's paper, enough to understand clearly his ideas, even if I still have to go into the detail of the testing of the hypothesis. So, I can tell you what I think of the general idea expressed in the paper. My first comment is: I like the idea very much. I like the idea of the reuse of modules, and as you know it is definitely part of my general model: the engineering of biological organisms is certainly and obviously modular, definitely configuring OOP. Of course, I agree that common design is a major component of what we observe. Both in the case of one biological designer, or of many of them, common design is certainly there. And of course I also agree that functional constraints have an important role. Please, look at this recent comment of mine, written before I read Ewert's paper, and you will see that some of those ideas are certainly part of my personal model: https://uncommondescent.com/intelligent-design/breaking-a-junk-dna-jumping-gene-is-critical-for-embryo-cell-development/#comment-662145 I also like the idea that a dependency graph can explain what we observe even better than a hierarchical tree. That is definitely possible, and maybe likely. But... there is a but. The but is that Ewert uses these arguments to explain what we observe without using the idea of common descent. But, in reality, he is only countering the argument from nested hierarchy. And, as you probably know, I have never really thought that the argument from nested hierarchy is the really important argument for common descent. The important argument for common descent is another one: it is the signature of neutral variation in DNA and proteins. That argument is not in any way countered by Ewert's ideas. Indeed, it is perfectly compatible with Ewert's ideas, but it requires the addition of another important component: common descent. But it is perfectly possible to limit the idea of common descent to the common descent of modules. IOWs, the signature of neutral variation in biological molecules is very strong evidence for their physical continuity, IOWs for their physical descent. But it is perfectly possible that the descent is limited to re-used modules. In that way, a dependency graph would still be better than a tree. To be more clear: Ewert's model does not work if he assumes that modules are re-used only as programming schemes, without any physical derivation from what already exists. While that would explain the hierarchy, it still would not explain the conservation of accruing neutral variation. Which is the strongest argument for common descent. But if we assume that modules are re-used physically, then it is perfectly possible that they are derived from multiple sources, according to a dependency graph, and not to a tree. The important point is that each module, when re-used, would still carry the signature of neutral variation according to the time it has existed, and has been exposed to it. That would explain both the hierarchy and the neutral variation. The only way to counter the argument from neutral variation without assuming some form of physical descent would be to demonstrate that all variation is functionally constrained. But I think that, at present, facts are not pointing in that direction. Not at all. gpuccio
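The point about re-used modules carrying a neutral-variation signature can be made concrete with a toy simulation (this is only a sketch with made-up lengths and rates, not Ewert's model or any published analysis): a module that is physically copied into two lineages keeps drifting in each copy, so the divergence between the copies tracks how long ago the reuse happened, whether the overall reuse pattern is a tree or a dependency graph.

import random

random.seed(0)
AAS = "ACDEFGHIKLMNPQRSTVWY"

def drift(seq, generations, rate=1e-4):
    # Neutral substitutions accumulate in proportion to elapsed time.
    seq = list(seq)
    for _ in range(generations):
        for i in range(len(seq)):
            if random.random() < rate:
                seq[i] = random.choice(AAS)
    return "".join(seq)

def identity(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

# A hypothetical module, physically re-used (copied) into two lineages,
# either long ago or recently; each copy then drifts independently.
module = "".join(random.choice(AAS) for _ in range(200))

ancient_1, ancient_2 = drift(module, 5000), drift(module, 5000)  # ancient reuse
recent_1, recent_2 = drift(module, 500), drift(module, 500)      # recent reuse

print("identity after ancient reuse:", round(identity(ancient_1, ancient_2), 2))
print("identity after recent reuse: ", round(identity(recent_1, recent_2), 2))
# The signature depends only on how long the copies have drifted, so it is
# compatible with reuse along a dependency graph, provided the reuse is
# physical descent rather than an independent re-implementation.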
gpuccio
Not yet. It seems interesting, and of course it’s a very hot topic. I will read it carefully as soon as possible.
Great. I think it is very complementary to the work you are doing :-). Took me about 3 passes to get my arms somewhat around it. bill cole
nonlin
Of course. All Darwinist arguments are “terrible…and… with no grounding in empirical science”.
So let's not change our arguments to avoid "just so" stories, because they can do that no matter what our claims are. bill cole
gpuccio, Of course you disagree ...as anticipated. At least you are aware of the competing viewpoint. Nonlin.org
bill cole, Of course. All Darwinist arguments are "terrible...and... with no grounding in empirical science". Nonlin.org
bill cole: Not yet. It seems interesting, and of course it's a very hot topic. I will read it carefully as soon as possible. gpuccio
gpuccio Have you looked at Ewert's paper? bill cole
NonLin.
. Furthermore, ID opponents can easily counter the functional information argument with the claim that the ‘functional islands’ are linked by yet unknown, uncreated, eternal and universal scientific laws so that “evolution” jumps from island to island effectively reducing the search space from a ‘vast ocean’ to a manageable size.
This is a terrible counter argument with no grounding in empirical science. bill cole
Nonlin.org: I have read it. And I completely disagree. For reasons that we have already discussed. My position is simple enough: you are wrong, but there is no hope to have a discussion with you that is useful. That's all. gpuccio
gpuccio, You won't like this one bit, but should read it anyway as it is an alternative (I say 'better') to your analysis http://nonlin.org/intelligent-design/ Nonlin.org
gpuccio: I see your point. Thank you very much. OLV
OLV: Yes, I had looked at that paper. Frankly, I find it rather confounding. The concept of molecular complexity is not very clear, and it is not clear how it relates to the problem of origin. The computation of informational complexity is not based on the function, and is therefore rather irrelevant from our perspective. I don't understand what type of practical application can be derived from the measures described in this paper. It just says that some structures are more complex than others, but it says so in terms of absolute complexity. Absolute complexity is interesting, but it is not a proper metric that we can relate to explanatory theories. gpuccio
gpuccio: Please, would you comment on this paper at your convenience? Maybe you already did it somewhere else, but I missed it. Thanks! https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5794832/#!po=0.704225 OLV
gpuccio, That's fine. No need to apologize. Thanks for helping me refine my argument. I honestly wished I could say your arguments make more sense :) Nonlin.org
Nonlin.org: I am not frustrated, but I will not follow you along these lines of reasoning. I apologize in advance, but I really believe that what you say makes no sense, and I have no further time to waste. Nothing personal, believe me. You have all my goodwill and, if you want, friendship. But enough is enough. gpuccio
gpuccio@463 and 464 You skipped AGAIN over: “If you disagree, explain what is your “phenotype” and “environment”, and what in this infinite combination of infinities keeps you alive. And while at it calculate your “fitness” function too.” I will take your silence as an acceptance of my point about the unknowable infinite ‘phenotype’ and ‘environment’ and the meaninglessness of the “fitness” concept! I don’t see why a DNA change cannot be an adaptation even by your definition of ‘adaptation’ or by this other definition: “a change or the process of change by which an organism or species becomes better suited to its environment”. Also, many mutations come in groups and many are caused by mutagens – this clearly indicates their not-entirely-random nature (and again, DNA looks nothing like a fair coin/die to expect mutations to be random). I can see an adaptation algorithm accounting for ab-resistance, sickle cells, etc. as long as the goal is to try to save the species (not the individual). Of course this algorithm is not written in DNA because “DNA is not essence of life”, remember? You also say: “the resistance has been clearly shown to depend on one or more random mutations, which happen exactly with the frequency that we expect from random mutations”. Just because random generators are often incorporated in designed search algorithms, it doesn’t mean that what happens is necessarily random. We’re not just talking semantics here; the words and concepts behind them matter. My point was that you will never know who is selected other than by looking at survival, to which you reply with “It is differential survival”. But, since we’re all different, how can survival be anything other than “differential”? Also remember the example of the three organisms, all surviving or not regardless of their phenotype. What you call “selection” is nothing more than a retrospective story your brain builds - you do know the brain is very good at story telling, right? I know how pos/neg selection are supposed to happen. But we have seen that all mutations are tradeoffs, including ab-resistance and sickle cell. How so? Because they don’t spread in the population but disappear instead when the stimulus is removed. I also see them as adaptations and not “selection” per above. “Negative selection” is also problematic because: 1. It presupposes a standard but there is no human standard (pygmy, sumo wrestlers, bulimic, ADHD, epileptic, white, black, BRCA, Huntington, hairy, blue eyes, etc.) and if so, then no standard for any other organism either. So who deserves the “negative selection” treatment then? 2. You claim “many genetic diseases become gradually more rare”, but where’s the evidence? Is this because humans take an active role in disease management? 3. Yes, many with “defective traits” die before reproduction …unless of course some farmer prefers them like that, or some scientist keeps them alive for study purposes :) …while killing their uninteresting ordinary cousins. “But this wouldn’t happen in nature”, right? Except the peacock, deer buck, etc. So how do we know they are “defective”? Because they die. And why do they die? Because they are “defective”. See? Circular logic, not “selection”. 4. This doesn’t follow and is confusing: “that’s why some proteins retain their sequences through long evolutionary times”. Again, this is the “just so” story telling brain at work. To summarize, your position is that “evolution” is true but partially guided?
If so, I dispute the separation between guided and unguided and I dispute the unguided story. Sorry, I see your frustration, but my goal is to clarify the arguments and help all sides better understand what’s going on. Nonlin.org
Mung: I am happy you like the metaphor! :) I don't use paperweights, but I do like them. :) gpuccio
gpuccio:
The WT is the finely crafted paperweight. The solutions found in the Hayashi experiment are just stones.
:) Mung
Nonlin.org at #460: "One more thing, what passes for “natural selection” is simply survival (disagree with proof if any)." I don't disagree. It is differential survival. "So what do you mean: “there is a lot of negative selection going on”?" Let's go back to the basics: a) Positive selection: it's what happens when a new trait, arisen by RV, has a reproductive advantage over the old trait in the population, so that gradually the new trait expands in the population and takes the place of the old trait, which is gradually eliminated, as a result of differential reproduction. This process exists, even if it is not so common as darwinists believe. Antibiotic resistance of the simple type is a good example of positive selection. b) Negative selection: it's what happens when a functional trait is affected by some random variation that significantly reduces the functionality, and the reproductive rate. In this case, the new trait is subject to negative selection, and disappears, more or less gradually, while the old trait is conserved in the population. That's what happens all the time, each time that a deleterious mutation happens. That's how many genetic diseases become gradually more rare, unless they are supported by other factors (like a high mutation rate at the specific site, or environmental factors like malaria for the sickle cell trait). That's why many babies with serious genetic diseases die when they are still very young. More generally, that's why some proteins retain their sequences through long evolutionary times, while others change a lot. The proteins that are conserved have higher functional specificity in their sequences, and therefore most of the variation that happens is deleterious, and is eliminated. That's what is meant by the term negative (purifying) selection. I hope that is clear. gpuccio
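The two directions described above can be put into numbers with the standard deterministic haploid recursion for allele frequency under selection (a textbook formula; the fitness values below are invented purely for illustration, and the model says nothing about how often such variants arise in the first place).

# Deterministic haploid selection sketch: a variant allele at frequency p,
# with relative fitness w against the resident allele's fitness of 1.
def trajectory(p0, w, generations):
    p = p0
    for _ in range(generations):
        p = p * w / (p * w + (1 - p))   # frequency after one round of selection
    return p

beneficial = trajectory(p0=0.01, w=1.05, generations=300)   # positive selection
deleterious = trajectory(p0=0.01, w=0.95, generations=300)  # negative (purifying) selection

print(f"beneficial variant after 300 generations:  {beneficial:.3f}")   # climbs toward 1
print(f"deleterious variant after 300 generations: {deleterious:.2e}")  # driven toward 0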
Nonlin.org at #459: I don't want to go on infinitely with you. I will try to address a few more points, but please, let's stop at some level! :) "Retrospective “explanations” are worthless since we cannot rerun natural experiments." I disagree. We can learn a lot from retrospective analyses, even if we cannot rerun experiments. We disagree on that. Let's stop it here. "You must explain this: “Disagree: the form of penicillin resistance that I mentioned is not an adaptation.”" An adaptation is a scenario where some pre-existing algorithm works upon information from outside, and uses it by some internal computation which generates a specific result. I think that plasmids in bacteria are adaptational tools, for example. So, when the penicillinase gene is passed from one kind of bacteria to another kind, because penicillin is in the environment, that's an adaptation. But in the case of simple penicillin resistance, the resistance has been clearly shown to depend on one or more random mutations, which happen exactly with the frequency that we expect from random mutations, and which have the power to modify an existing protein making the bacteria more resistant to the antibiotic. Nothing in that scenario has any feature of an adaptation. What happens is not, in any way, dependent on any defensive algorithm in bacteria. It depends only on RV and the resulting random reproductive advantage in that specific environment. By the way, this is an "experiment of nature" which can be easily rerun in the lab. "Now you disagree even when you agree (on DNA)" I disagree that your statement is any criticism of something that I have said. "‘Selection’ is the wrong word. No one selects phenotypes except humans, and even that is different than the Darwinian story. All other simply seek to survive including the predators that just need to eat. Why would ‘seeking food’ and ‘defending oneself’ be called “selection”? It makes no sense whatsoever." I can agree that the word is not perfect, but that's how the process is usually called. I have no problems with words, and I accept the current use of words, provided that definitions are clear. I have said very clearly what I mean by "NS", and it's the same thing that darwinists mean. Of course, we deeply disagree about what that "thing" can really do! :) You don't like that it is called "selection". I have no problems with that. Let's say that, with you, we will call it "Anna", from now on. With others, I think I will go on with NS. My only interest is to distinguish between NS (which happens in a system without any design interventions) and Intelligent selection, or Artificial selection (they are the same thing for me), which only happens as a result of a conscious intelligent design intervention. Then you go on speaking of Intelligent selection, and how the evolution of species can only be the result of design. I agree. So, what's the problem? "There is no natural “selection” either. " I disagree on this and on everything that follows. Positive natural selection exists, and that can be easily demonstrated. Again, why is the sickle cell trait so much more common in malaria regions? I agree that positive selection is limited, that it is not so generally important as darwinists think, and that it usually acts on simple traits (like sickle cell disease). But I disagree that it does not exist. We definitely disagree on that. Let's stop it here. gpuccio
Mung: Let's go to Corneel. He seems offended that I give more credit to Joe Felsenstein than I give to him. But, unfortunately, that's the law of the free market! :) It's not my fault if I find his argument generally boring and wrong. I do not have the time to read everything that is written at TSZ. Lately, I have only looked at Joe Felsenstein's posts, because he had started a rather well-argued thread about my arguments. I admit that I have only given a superficial look at all the other posts, catching something randomly, or only when some friend quoted them here. Again, it's a free market. But, as I have been led by you (Mung) to this post of his (Corneel), I will consider the Weasel statement there, and the two questions that he provides at the end of it about other issues. He says: Anyway, I am perfectly aware that the target string is being evaluated to find the number of matches, but I don’t see why that would prevent us from calculating the functional information of the resulting strings, which is what the whole exercise is about. Because the whole exercise is stupid. And I am being very generous here. If the string is already in the system, the simplest way to get it in a random string is to substitute each letter in the initial random string with the right letter from the target. That requires only as many substitutions as the string is long. Or, if you have a printer at hand, you can simply print the target by a single click. Does Corneel really believe that if I have a file with a Shakespeare sonnet, and I print 10 copies of it, I am generating new functional information, indeed 10 times the original functional information? If he really believes that, he is completely out of reach, and cannot be saved in any way. Let's go to the two unrelated questions.
I am very interested whether cnidarians have a functional copy of TRIM62 for example, and how they pull that off without all the conserved information that humans have.
First question: "I am very interested whether cnidarians have a functional copy of TRIM62 for example" Answer: The best hit in Cnidaria is 84.7 bits, and the lowest among the first 100 hits is 54.3 bits. The E-values are 2e-16 for the best hit, 8e-07 for the lowest. Not much, but enough to detect homology, due mainly to the RING finger domain. The proteins involved are labeled as: "tripartite motif-containing protein 3-like [Stylophora pistillata]" or: "E3 ubiquitin-protein ligase TRIM71-like [Stylophora pistillata]" or: "RING finger protein nhl-1 [Exaiptasia pallida]" and so on. So, to answer Corneel's question, TRIM 62 as we observe it in humans, a 475 AAs long protein with about 1000 bits of total potential functional information in BLAST comparisons, is not present in cnidaria, even if a low homology can be detected with other proteins having a similar RING finger domain. As already said, TRIM 62 appears rather suddenly in cartilaginous fish: E3 ubiquitin-protein ligase TRIM62 [Rhincodon typus] with 823 bits of homology and 80% identities to the human protein. That's a jump indeed! Second question. "how they [cnidaria] pull that off without all the conserved information that humans have" Maybe I am missing the brilliant, subtle wisdom in this question. My simple answer is: because that protein is not needed in cnidaria. What's the problem? Here is a paper about TRIM 62 (aka DEAR1): DEAR1 is a Chromosome 1p35 Tumor Suppressor and Master Regulator of TGFβ-Driven Epithelial-Mesenchymal Transition https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4107927/ And here is the OMIM page about the protein: https://www.omim.org/entry/616755 Now, I would suggest an idea to Corneel, hoping that it is not too devastating for his worldview: Is it possible, maybe, that cnidaria are functionally different from vertebrates? Let's see what he thinks. gpuccio
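For anyone who wants to repeat this kind of comparison, the numbers quoted above (bit scores, E-values, identities) are the standard output of a protein BLAST restricted to Cnidaria. A hedged sketch using Biopython's NCBI wrapper is below; the query accession is a placeholder to be replaced with the actual human TRIM62 protein accession, the web service is rate-limited, and the exact hits will depend on the current state of the nr database.

# Sketch of a BLASTP search restricted to Cnidaria, using Biopython.
# The accession below is a placeholder, not a real identifier.
from Bio.Blast import NCBIWWW, NCBIXML

QUERY = "HUMAN_TRIM62_ACCESSION_HERE"   # replace with the real protein accession

result_handle = NCBIWWW.qblast(
    "blastp", "nr", QUERY,
    entrez_query="Cnidaria[Organism]",  # restrict hits to Cnidaria
    hitlist_size=100,
)

record = NCBIXML.read(result_handle)
for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    pct_id = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  bits={hsp.bits:.1f}  "
          f"E={hsp.expect:.1e}  id={pct_id:.0f}%")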
Mung: It was not "a stated goal of the experiment to reproduce the wild-type or even to reach one with a comparable level of infectivity.” But the aim of the study was certainly to verify if they could achieve "improvement in phage infectivity" through their model of RV + NS. They also admit that the wildtype is more or less at the peak of the functional space. Certainly, they wanted to verify if that peak could be reached. Otherwise, why the discussion about the practical impossibility of reaching it? Rumraket then goes on:
Of course it matters where you start, as that has significant implications for what level of function is in the immediate sequence neighborhood. The probability of finding mutations that sit on a slope that leads to levels of infectivity comparable to the wild-type is directly related to where that random initial location is. If you start with a completely dissimilar sequence that has zero similarity to the wild type, and if that sequence happens to sit close to the base of an entirely different hill from the one occupied by the wild-type sequence, then there’s no way selection is going to be able to climb to that other WT-hill, as it will obviously just crawl up the local hill rather than go down and start randomly walking through a neutral or deleterious space. In fact given that Gpuccio accepts DNA_Jocks “holes” rather than hills metaphor it should be obvious that the starting position is critical for what hole the protein will fall into. And if that local hill isn’t as tall as the WT hill, then it’s just stuck there.
Again, he does not understand the problem. If you start with any random sequence that is unrelated at sequence level to the WT, you are as far from the wildtype hole as you can be. But the reason why you will never reach the WT is not that you could be near some other hole, and therefore you will necessarily fall into that hole. Rumraket is ignoring the simple fact that random mutations can quickly carry you away from any nearby "hole". In the Hayashi model, indeed, the effects of NS are always contrasted by neutral drift. If you have already fallen into a hole, so that the effect of negative NS is very strong, it will probably overwhelm the effect of drift in most cases (but not in all). But in all other cases, the sequence is completely free to diverge in any possible direction, especially when it is in near neutral territory. The true reason why you will never find the WT island is because it is too small. Being a highly specific functional island, it is much more unlikely and much smaller than the big, generic functional islands that were found in the experiment. It requires more specific AA positions (35, according to an estimate by the authors). It has higher FI. That's why you need 10^70 starting sequences to have a good probability of finding it. The holes, or hills, as you like, that were found in the experiment are gross, imperfect solutions, based on some generic resemblance to the WT solution. That's why they can be easily found. They are like the weak ATP binding in Szostak's paper. Weak bindings, imperfect solutions, are much more common in the functional space. It's only highly specific solutions that are extremely rare and extremely small islands, because they have much higher FI. So, it's easy to find a stone that we can use as a paperweight, but a refined, elegant and finely crafted paperweight is certainly designed. The WT is the finely crafted paperweight. The solutions found in the Hayashi experiment are just stones. And there is absolutely no evidence that the WT is a pinnacle with a large functional base that gradually leads to it: no evidence at all, except in the arbitrary drawings at TSZ. If that were the case, it would be easy to find it: we should only find the large base, and NS would do the rest. But in that case, it should be found with the same ease as the other gross solutions are found. There should be no necessity of 10^70 starting sequences to find it! Therefore, it's a pinnacle. Period. He also says:
But as I have stated now like twenty times, the fact that random proteins seem to invariably sit near hills that selection can climb (or “holes” for the protein to “fall” into) implies that the infectivity function is ubiquitous in sequence space.
It is not true. Random proteins do not "invariably" sit near hills that selection can climb. In the Szostak experiment, only a couple out of 10^11 were sitting in such a position. Moreover, please note how he is, intentionally, using the word "selection" here, and not "NS". In the case of Szostak, it was not NS at all. In the case of Hayashi, it was NS indeed. But, as I have explained, the experiment was modeled exactly so that those gross results could be easily obtained. Why? Because it was an experiment of function retrieval, with the function still present, so that NS could already work on it. IOWs, they "damaged" a function, but not completely, so that it was easy to find solutions, simple solutions, that could partially "repair" the damage, as you can partially improve a bump in the body of your car using a hammer. But you have to replace the body part with a new one if you want a real repair. If the "holes" found by Hayashi were real solutions, then why did the tweaking by NS stop at low levels of function? Why didn't it reach functional levels similar to the WT? The simple answer is: because those holes-hills were gross, simple solutions, with no relationship to the refined, functional solution that is the WT. Then he says:
The proportion of holes that go as deep as the one occupied by the wild-type protein is actually a diversion.
No, it's really the problem. The problem is that practically all that we observe in nature is a refined, highly functional solution, involving a lot of functional complexity. And that those highly refined solutions are not, in any way, a gradual tweaking of generic, gross, common simple solutions which have almost no functional relevance at all, and that are not naturally selectable, except in experiments conceived for that, like the Hayashi experiment. IOWs, the weak ATP binding of Szostak would never, never lead to the (useless) high ATP binding in Szostak's final protein. Why? Simply because the weak ATP binding would never be naturally selected. NS has no ATP columns at its disposal to measure weak ATP binding in random sequences. So, the alpha or beta chains of ATP synthase, or the UBR5 E3 ligase, or any other example that I have discussed where the FI is extremely high, are not the tweaked gradual evolution of gross, simple sequences. Their function is complex and refined because of its very nature, exactly as a spreadsheet code is complex and refined because it is a spreadsheet, and no spreadsheet can be coded by 20 bits of code! gpuccio
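The bit values being discussed here all come from the same simple definition: FI = −log2 of the fraction of sequences that meet the defined level of function. A small worked sketch follows; the 35-position figure is the authors' estimate mentioned above, treated here as fully constrained positions, which is a deliberate simplification (the 10^70 library figure comes from the authors' own extrapolation, not from this calculation), and the 1-in-10^11 figure is the rough frequency of weak ATP binders in Szostak's random library.

import math

def functional_information(target_fraction):
    # FI = -log2( fraction of sequences that meet the defined function )
    return -math.log2(target_fraction)

# Simplifying assumption: ~35 residue positions fully constrained to one
# amino acid each, as a rough stand-in for the wild-type island.
wt_island_fraction = (1 / 20) ** 35
print(f"35 fully constrained positions: {functional_information(wt_island_fraction):.0f} bits")

# Szostak's weak ATP binders: roughly one sequence per 10^11 random ones.
print(f"weak ATP binding (~1 in 10^11): {functional_information(1e-11):.0f} bits")

The gap between those two numbers is the quantitative version of the stone versus the finely crafted paperweight.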
gpuccio@450 One more thing, what passes for “natural selection” is simply survival (disagree with proof if any). So what do you mean: “there is a lot of negative selection going on”? Nonlin.org
gpuccio@450 Let’s see… You didn’t answer this: “If you disagree, explain what is your “phenotype” and “environment”, and what in this infinite combination of infinities keeps you alive.” And while at it calculate your "fitness" function too. You can try anything, but I have yet to see a survival forecast (prospective) that is not just status-quo extrapolation. Retrospective “explanations” are worthless since we cannot rerun natural experiments. Of course you can control the environment which makes you the god of the experiment (ID in action). You must explain this: “Disagree: the form of penicillin resistance that I mentioned is not an adaptation.” Now you disagree even when you agree (on DNA) :) This: “all selection (if any) is artificial (intelligent)” is not “vague and meaningless”. It means rocks do not “select” each other and volcanoes/meteorites/etc. do not “select” organisms. Only organisms “select” themselves or other organisms, but... ‘Selection’ is the wrong word. No one selects phenotypes except humans, and even that is different than the Darwinian story. All other simply seek to survive including the predators that just need to eat. Why would 'seeking food' and 'defending oneself' be called "selection"? It makes no sense whatsoever. Plant and animal breeding is not the “artificial selection” described by Darwin and has nothing to do with any natural process. Breeding requires a desired outcome, selection (just a minor step!) and isolation of successive generations of promising individuals, active mating or artificial insemination, optimization of growth conditions for the selected individuals, and genetic technologies more recently. Without most of these active steps nothing happens. Chihuahua and Poodle have no superior survivability to common dog or wolf, but happened anyway because humans worked hard to make these possible. But who would do all this in nature? How can humans “evolve” separately from chimps when no one separates each and every new generation based on a teleological model? Why would the proto-human not mate back with his/her regular chimp cousin? Who and how could separately optimize conditions for both chimp and human so both lineages survive in what looks like very much similar environments? There is no natural "selection" either. The young, old, crippled, tired, thirsty, hungry, sick, or unlucky prey are all on the menu and, if main course is out of season, anything else would do. The weakest of the hunted is easily identified and eliminated but the strongest survives just like the average. Superior strength has minimal survival advantage! Mating behavior should have little impact as “benefic” mutations arising in one of the beta peers cannot propagate if not tied to aggressivity and if not strong enough to surpass the aggressivity of the current alpha individual that currently parents all descendants. Nonlin.org
gpuccio:
I see that they don’t even understand the Hayashi paper.
Rumraket accepts a correction but disagrees that it was "a stated goal of the experiment to reproduce the wild-type or even to reach one with a comparable level of infectivity." Corneel wants you to address his other questions not related to WEASEL. I'm still trying to pin them down on FI and whether WEASEL can generate FI or cause FI to accumulate. :) Mung
KF: Yes, sometimes it's really depressing. gpuccio
Mung: I see that they don't even understand the Hayashi paper. They don't understand what it says: Here is the relevant quote from the paper: "By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness." They are mentioning the size of the starting library, not the number of trials. That size is necessary to have any probability of finding the right functional island. It's not, as they say at TSZ, a problem of "where you start". Of course you start random, because you don't know where the functional island is. IOWs, they are again making the same error that we observed in the Weasel "argument". Of course, if you already know where the functional island is, or even better what the final solution is, it's really easy to find it. It's always easy to find what you already know. The problem is that the researchers, correctly, started random, because that's the purpose of the paper: to verify if they could find the wildtype starting random, with the described procedure, including RV + NS. And they couldn't. I really don't understand how anyone in his right mind can say things like "it depends on where you "start"" (see the linked graph). Of course. If I aim for "Methinks it is like a weasel" starting from "Methinks it is like e weasel", the transition is easy enough. I am really tired of this nonsense. gpuccio
GP, oh, boy. That is how far back we are. KF kairosfocus
GP and Mung, they need to take something as simple as a fishing reel apart and try to put it together again in a way that it will work. That will teach them a lot about clumped and scattered configurations of parts, and about the difference between the cluster of configs that will do the work of a reel and those that will not. They may even be able to understand why there are a lot more nonfunctional than functional configs and why it takes intelligence to get the latter. KF kairosfocus
Mung: Incredible! Corneel insists on the weasel: "The weasel demonstrates an increase in FI when more and more matching characters accumulate in the string, because fewer and fewer strings can be found in the total set that can match the acquired level of function." How can he not understand that "matches" can be evaluated only because the information is already in the system, all of it, the whole phrase? How can he be so blind? No FI increases: the total FI of the Weasel phrase is already there. Of course. gpuccio
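For reference, here is a minimal version of the classic Weasel algorithm (written from the well-known description, not from any particular code posted at TSZ). The point being made above is visible in the score() function: every candidate is compared against the complete target phrase, so the full phrase is already present in the system before the "search" begins; whether the run therefore generates any new FI is exactly what is in dispute.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # the whole phrase is built into the program
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP, MUT_RATE = 100, 0.05

def score(s):
    # Fitness is the number of characters matching the complete target,
    # so the target's information is consulted at every evaluation.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while score(parent) < len(TARGET):
    generation += 1
    parent = max((mutate(parent) for _ in range(POP)), key=score)

print(f"reached the target in {generation} generations")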
Mung: "Meanwhile, over at TSZ, they don’t understand Functional Information." Why am I not surprised? :) gpuccio
gpuccio:
I would probably disagree, if I understood what you mean.
The story of my life. :) Meanwhile, over at TSZ, they don't understand Functional Information. Mung
Nonlin.org: "an infinite hence unknowable theoretical entity (phenotype)" Disagree: the phenotype is a well knowable entity, in each specific case. "another infinite hence unknowable theoretical entity (environment)" Disagree: environment is a well knowable entity, in each specific case. Neither phenotype nor environment are infinite entities. "with who-knows-what-else to “explain” survival, a retrospective event." Disagree: survival can be observed. We can try to explain everything that can be observed, either retrospectively or prospectively. "Antibiotic resistance is simply a built-in adaptation mechanism like all others (color changes, metabolism, size, behavior, etc.)" Disagree: the form of penicillin resistance that I mentioned is not an adaptation. "There’s nothing more magical about DNA mutations than about sun-tanning." Disagree: nobody has ever said that there is anything magical in DNA. "DNA is not “essence of life”" Disagree: nobody has ever said that DNA is the essence of life. It is an important component through which life is expressed. "why exactly do you need “natural selection”" Disagree: I don't "need" NS, I just acknowledge that it exists, and I try to understand what it can and what it cannot do. "there isn’t much selection going on" Disagree: there is not much positive selection going on, but there is a lot of negative selection going on. "all selection (if any) is artificial (intelligent)" Disagree: vague and meaningless statement. NS is not intelligent, if not in the sense that already existent life is obviously intelligently designed, and NS is a consequence of the already existing functional information that allows reproduction and life. But the variation in the cases of known NS is RV, and the NS is a passive consequence of the effects that RV has on reproduction rate. "even the most sustained efforts (those of the humans) have not led to much divergence" Agree, more or less: and so? You know well that I don't believe that RV + NS can generate any complex functional information. But non-functional divergence is certainly possible, even a lot of it. "the larger the divergence created, the more effort it takes to maintain that divergence (think spring extended to maximum)" I would probably disagree, if I understood what you mean. gpuccio
Nonlin: And in the future be more realistic with your scenarios.
My scenarios make abundantly clear that, contrary to your absurd claim (“Natural Selection fails since survival is not directly tied to phenotype …. “), survival and phenotype are directly linked.
Nonlin: What’s your position on “evolution” anyway?
It is not possible to discuss evolution with you, since, absurdly, your concept of natural selection leaves out the environment — which is akin to trying to understand rain while leaving out clouds. Origenes
gpuccio, You're smart and fighting a good fight, but you’re also trying to dispute Darwinism using Darwin's arbitrary framework. Do you understand that this is impossible? It is as bad as fighting Communism with "scientific socialism" arguments... or fighting Nazism while guided by ‘Mein Kampf’. It just won’t work! Did Darwin use the Bible to create his philosophy? Did he use Plato, Aristotle, Spinoza, or Newton? NO! Then why should ID use a Darwinist framework? Sorry, I don’t know how much clearer than this I can be. Please indicate ‘agree’ or ‘disagree’. I showed you all my conclusions (above) and supporting arguments at http://nonlin.org/natural-selection/ . A quick ‘agree’ or ‘disagree’ for each would be very helpful. I disagree with “differential survival due to differences in phenotype” because it combines an infinite hence unknowable theoretical entity (phenotype) with another infinite hence unknowable theoretical entity (environment) and with who-knows-what-else to “explain” survival, a retrospective event. There is absolutely no predictive power in combining those infinities. If you disagree, explain what is your “phenotype” and “environment”, and what in this infinite combination of infinities keeps you alive. I also countered with an example showing three phenotypes each surviving or not surviving INDEPENDENTLY of phenotype. Antibiotic resistance is simply a built-in adaptation mechanism like all others (color changes, metabolism, size, behavior, etc.) There’s nothing more magical about DNA mutations than about sun-tanning. DNA is not “essence of life”: http://nonlin.org/dna-not-essence-of-life/ Darwin needed “natural selection” for his “evolution” myth, but why exactly do you need “natural selection”? No, you don’t need it at all. So feel free to observe that: 1. there isn’t much selection going on, 2. all selection (if any) is artificial (intelligent), 3. even the most sustained efforts (those of the humans) have not led to much divergence, 4. the larger the divergence created, the more effort it takes to maintain that divergence (think spring extended to maximum) Origenes, See your answers above. What’s your position on “evolution” anyway? And in the future be more realistic with your scenarios. Star Trek/Wars-style scenarios are useless in a serious discussion. Nonlin.org
Nonlin @
Huh? The claim – see definition above – was about phenotype only, not about the Atlantic Ocean or Polar winter.
So your claim ...
“Natural Selection fails since survival is not directly tied to phenotype …. "
... left out the environment? Have you any idea how utterly ridiculous it is to write about natural selection while leaving out the environment? Origenes
Nonlin.org: OK, keep your ideas, what can I do? But, for example, start by explaining what your problem is with: “differential survival due to differences in phenotype”. In antibiotic resistance, isn't that exactly what happens? Just to understand how you "reason"! gpuccio
Origenes: Huh? The claim – see definition above - was about phenotype only, not about the Atlantic Ocean or Polar winter. You’re saying “I kill this organism so how can it survive”. Get it? Even if you were right about “only retrospectively”… but I am right, right? :) See reply to Gpuccio… “does that mean that phenotype is not directly tied to survival?” Absolutely! When you “insert a shark”, you are the god of that system. No “natural selection” there – just “artificial” aka “intelligent” (not so much in this case). Nonlin.org
gpuccio, Here is the first definition from Bing: "Natural selection is the differential survival and reproduction of individuals due to differences in phenotype. It is a key mechanism of evolution, the change in heritable traits of a population over time. The term "natural selection" was popularised by Charles Darwin who compared it with artificial selection, now more commonly referred to as selective breeding." I specifically argue against: 1. "differential survival due to differences in phenotype", 2. "artificial selection" (it's all intelligent aka artificial) 3. "key mechanism of evolution" 4. "the change in heritable traits" (aka divergence of character) beyond reversible adaptations around a mean Be my guest if you agree with them or simply want to recycle their label without retaining their meaning, but you’re wrong either way. I have yet to see “simple beneficial traits subject to positive NS are” NOT “transitory trade-offs”. Your examples certainly don’t cut it. “Negative NS”? Again looks like you’re trying to recycle someone else’s concept but it’s a mess. Mutations do not happen quite “randomly” as far as I can tell. Not when ab and ab-resistance are an intrinsic part of the biological arsenal. Also DNA looks like a fair coin to you?!? What about combined mutations? Regardless of how “random” the mutations, bacteria have intelligence and self-select. There is no one else other than the bacteria itself or the defending organism that selects. No “natural” anyone else. Why would you allow for penicillinase as a built-in biologic arsenal, but claim “the simple form of penicillin resistance is due to random mutations in a target protein”? Did I mention: “randomness” is simply unknowable mathematically – go check. I got nothing against trade-off mutation, but I do against “benefic” mutation. The key difference is that trade-off will revert after stimulus is removed, whereas “benefic” would be a keeper no matter what. Big difference. Sure, trade-offs are great when you really need them. You don’t understand the “retrospective” issue – the problem is telling a story about the past that will never be duplicated again, hence has no prospective value whatsoever. Last I checked, medicine was all about helping people today and tomorrow, yes with lessons from the past of course (since there cannot be any from the future). But replicability is key in medicine - not so in “evolution”. Nonlin.org
Nonlin @
How is that wrong?
You are completely wrong. And it is so dramatically obvious that you are wrong, that it feels weird having to point it out. Again, your claim is this:
“Natural Selection fails since survival is not directly tied to phenotype …. “
If your statement is true, then a rabbit has as much chance of surviving in the Atlantic Ocean as a shark, and a piranha has as much chance of surviving the polar winter as a polar bear. Your position is simply ridiculous.
I give you three phenotypes. Which one will survive? You don’t know prospectively but you do know retrospectively. Only retrospectively. Always.
Even if you were right about “only retrospectively”, which you are obviously not, it still is irrelevant to the point. As GPuccio points out “a lot of good scientific understanding is retrospective.” So, it does not matter, but you are wrong just the same. Let’s consider your own example of artificial selection …
Nonlin: agribusinesses select for chickens with oversize breasts …
… given knowledge of this selection plan, we, obviously, know prospectively that a phenotype which lacks oversized breasts does not survive.
Keep in mind the chicken flu can hit too, and then none may survive on the farm. It’s always retrospective!
And an incoming comet can destroy the earth…; does that mean that phenotype is not directly tied to survival? Does that mean that when we insert a shark and a rabbit in the Atlantic Ocean we cannot have an informed, reasonable expectation of which one may survive and which one has no chance at all based on phenotype? Origenes
Nonlin.org at #440: You are really bad at science. A lot of good scientific understanding is retrospective. Medicine is a good example, believe me. We cannot always understand things by a prospective approach. You have really a strange idea of empirical science. gpuccio
Nonlin.org at #437: I don't know what is wrong with you. I say that NS exists as a very limited process. And that is exactly what you seem to say at #437. But at #432 you stated that "there actually isn’t any “NS process” at all". "I don’t know the true story of the “nylon eating bacteria”, but I bet that’s a trade off mutation also." Maybe. Or maybe not. Certainly, they have gained an abundant source of food. Many would trade a lot for that. "Same goes for Lenski’s citrate eating eColi." That's more likely. Because human intelligence was more directly involved, and human intelligence is often misguided. "Just mix these with the general population and remove the stimulus and you should see these traits disappear rather than spread to the whole population as promised by Darwin." Maybe. Maybe not. But that's irrelevant. We are not here to just manipulate the environment to demonstrate your points. Whatever happens, if a change of environment changes population rates, that's NS too. Even the return to the previous state would be an example of NS: negative, purifying selection which defends the already existing functional information. I don't know what the problem is with you. NS exists, and is a well defined process. I perfectly agree with you that it cannot build new beneficial complex traits, and that many of the simple beneficial traits subject to positive NS are probably transitory trade-offs. And so? The process is real just the same. And it has severe limitations, as I said. Negative NS is much more powerful and we see it acting all the time, each time a negative mutation is not fixed, and is quickly eliminated. Those are two very different statements. Antibiotic resistance is favoured by an environment where antibiotics are present. You say: "Antibiotic resistance is only subject to intelligent selection (itself or the defending organism) and is fully reversible when the stimulus is removed." Why do you call it "intelligent selection"? For penicillin resistance of the simple type, mutations happen randomly. Facts do support that interpretation. NS then favours the expansion of the resistant population. These facts are well known. Why do you deny them? It can be reversible or not. That is not the point. The point is the mechanism that favours the expansion of a population which, at that moment and in those conditions, can survive better. That's the concept of NS. If you admit that it happens, you cannot say that there is no NS process at all. There is. You say: "It is also impotent in transforming those bacteria into any other organisms." And so? I never said that it can. What's in "very limited process" that you cannot understand? "Both antibiotics and antibiotic resistance are part of the biologic arsenal together with other BUILT IN adaptation capabilities like appearance, metabolism, etc." Not so. This is probably true for plasmidic responses, like penicillin resistance due to penicillinase. But the simple form of penicillin resistance is due to random mutations in a target protein. "They are all limited in scope" Again, what's in "very limited process" that you cannot understand? You can use the word "limited", and I cannot? Why? "and reversible when the triggering stimulus is removed. No “divergence of character” seen." Maybe. In most cases, certainly. And so? The process exists, just the same. With its limitations. "Sickle cell is a trade-off mutation like all mutations." Almost everything in biology is some form of trade-off. What have you against trade-offs?
"When you examine closely, you see no such thing as "beneficial mutations"." It depends on how you define beneficial. If bacteria are exposed to antibiotics, you would have difficulties in convincing them that the resistant mutation is not beneficial. gpuccio
Origenes: How is that wrong? I give you three phenotypes. Which one will survive? You don't know prospectively but you do know retrospectively. Only retrospectively. Always. Keep in mind the chicken flu can hit too, and then none may survive on the farm. It's always retrospective! Your scenario is retrospective, and even then it may fail. What if the woolly sheep is afflicted with some fungus and they all die? What if some of the others survive despite the cold? You can't shake off or even acknowledge your 100% retrospective thinking. Making up stories for past events is not science. "Artificial selection" (aka intelligent selection) is the only type of selection I can see. No wonder Darwin couldn't find any better example for his nonsense. Do you know any selection that is not "artificial" (intelligent)? Do you know any selection that can transmute organisms? As far as I can tell, humans have really tried hard but have not succeeded. Nonlin.org
Nonlin @438
Did you read this ...
Yes. Why do you ask?
agribusinesses select for chickens with oversize breasts ... As shown, all these different organisms may or may not survive regardless of their phenotype.
Wrong again: in the context of agribusiness, chickens with a phenotype which lacks oversized breasts do not survive. IOWs, also in this agribusiness setting with artificial selection, survival is directly tied to phenotype. Why don't you answer my question — see #436? Origenes
Origenes, Did you read this: "In a small farm, only organisms closely related to their wild cousins survive, but agribusinesses select for chickens with oversize breasts and research labs select for populations with specific genetic mutations requiring tight environments to survive. As shown, all these different organisms may or may not survive regardless of their phenotype. The only measure of “selection” is survival – we only know if an organism was selected if it survives and reproduces."? The Darwinist trick revealed: it's a known fact that the human brain has a tendency to make up "explanatory" stories for past events. But the predictive power of Darwinism is zero. Nonlin.org
gpuccio: Verify for yourself: Antibiotic resistance is only subject to intelligent selection (itself or the defending organism) and is fully reversible when the stimulus is removed. It is also impotent in transforming those bacteria into any other organisms. Both antibiotics and antibiotic resistance are part of the biologic arsenal together with other BUILT IN adaptation capabilities like appearance, metabolism, etc. They are all limited in scope and reversible when the triggering stimulus is removed. No "divergence of character" seen. Sickle cell is a trade-off mutation like all mutations. When you examine closely, you see no such thing as "beneficial mutations". I don't know the true story of the "nylon eating bacteria", but I bet that's a trade off mutation also. Same goes for Lenski's citrate eating eColi. Just mix these with the general population and remove the stimulus and you should see these traits disappear rather than spread to the whole population as promised by Darwin. Who knew "evolution" was testable? Nonlin.org
Nonlin @
Natural Selection fails since survival is not directly tied to phenotype ….
So, according to you, in case of a severe winter, the survival of woolly sheep and the perishing of not woolly sheep is somehow “not directly tied” to the sheep being woolly or not. How did you arrive at that innovative conclusion? Your article does not say. Origenes
Nonlin.org: Are you denying that the diffusion of nylon eating bacteria is favoured by the presence of nylon and its derivatives in the environment? gpuccio
Nonlin.org: Are you denying that the sickle cell trait is supported by malaria disease and its geographical distribution? gpuccio
Nonlin.org: Let's be clear. Are you saying that antibiotic resistance is not subject to natural selection during antibiotic treatment? gpuccio
gpuccio @415 What can't you follow? You claim @412 that NS "exists as a very limited process" and I show that there actually isn't any "NS process" at all. I provide a number of claims and a link to the demonstration of those claims. Simple. How can there be a "limited process"? It either works or it doesn't. And it doesn't. It's just Darwin's brain fart - nonsensical like anything else the guy said. Nonlin.org
kairosfocus(429):
prolonged refusal to accept that something is what it is is not a sign of sound thinking. The many years during which objectors to the design inference have constantly wrenched it into a strawman caricature convince me that they cannot acknowledge that it is what it is and bears the warrant it has. So, to maintain their objection to where it points, they distort and count on media and institutional power to keep up the distortion. In the end, that is intellectually and morally bankrupt. But as we see in several current and recent threads, that has little effect on some of the more determined objectors. Sad. KF
Yes, that's a very accurate assessment of the situation. Thanks. OLV
Kairosfocus: So, to maintain their objection to where it points, they distort and count on media and institutional power to keep up the distortion.
Exactly right. There is no honest debate. In this world, the lie rules supreme. Origenes
Origenes, prolonged refusal to accept that something is what it is is not a sign of sound thinking. The many years during which objectors to the design inference have constantly wrenched it into a strawman caricature convince me that they cannot acknowledge that it is what it is and bears the warrant it has. So, to maintain their objection to where it points, they distort and count on media and institutional power to keep up the distortion. In the end, that is intellectually and morally bankrupt. But as we see in several current and recent threads, that has little effect on some of the more determined objectors. Sad. KF kairosfocus
Nothing new at TSZ. Just a rehash of well known debunked arguments based on well known confusions about what ID is. Replying to all of that grows tedious very quickly, so I think it's best to terminate my short visit. Origenes
Origenes: "By chance, the function has to be functional for the organism" Of course. Functional and naturally selectable. "In my estimation the “non-functional sequence pathway” requires even more gullibility from the neo-darwinists than the simple to complex pathway." It's an interesting contest! :) gpuccio
GPuccio: That’s why even darwinists prefer, in general, the scenario where variation happens in a non functional sequence, and is therefore neutral. … If the sequence is non functional, variation is neutral and NS cannot happen. … Of course, even if it were found, another little miracle would have to happen: the non functional sequence, now functional, should be suddenly transcribed and translated and give a reproductive advantage, to be seen by NS.
By chance, the function has to be functional for the organism — here we run into a definitional issue WRT ‘function.’ There are zillions of biological “functions” and only a tiny subset is functional to a specific organism. What are the odds that the new sequence is something that the organism can incorporate in its system and is useful?
GPuccio: There is no end to what neo-darwinists are capable to believe.
In my estimation the “non-functional sequence pathway” requires even more gullibility from the neo-darwinists than the simple to complex pathway. Origenes
Origenes: "From simple to complex — from hourglass to watch — without loss of function." Indeed, with a constant increase of it! :) "Obviously, function is a called “function” because it serves a larger whole — the organism. So, changing the function puts the whole in harm’s way. How can one, while driving a car, change (or improve) the motor and retain its function?" It's obviously impossible. That's why even darwinists prefer, in general, the scenario where variation happens in a non functional sequence, and is therefore neutral. For example, a duplicated, inactivated gene. But you can't have your cake and eat it too! If the sequence is non functional, variation is neutral and NS cannot happen. Until the new function appears. And if the new function is complex at its starting level (like almost all new protein functions), it will never be found by RV. Of course, even if it were found, another little miracle would have to happen: the non functional sequence, now functional, should be suddenly transcribed and translated and give a reproductive advantage, to be seen by NS. There is no end to what neo-darwinists are capable to believe. gpuccio
GPuccio: ... there is no rationale to expect a continuous pathway from simple to complex. Moreover, it has never been observed.
One obvious problematic aspect of the Darwinian narrative is that a function needs to be retained along the entire pathway. From simple to complex — from hourglass to watch — without loss of function. Obviously, function is called "function" because it serves a larger whole — the organism. So, changing the function puts the whole in harm's way. How can one, while driving a car, change (or improve) the motor and retain its function? Origenes
Origenes: "Intuitively, it seems likely to me that a simple solution is structurally unrelated to a complex solution." Of course. That's what I mean when I say that there is no rationale to expect a continuous pathway from simple to complex. Moreover, it has never been observed.

Optimization is another thing entirely. You can start from a complex solution, and optimize it a little. That's not the same thing as going from a simple configuration to a complex configuration by continuous functional improvements. You can work on an existing watch and optimize it, maybe add some minor improvement. But that's not the same as getting a watch from an hourglass.

Any optimization must work on an existing and functional configuration. Moreover, what is optimized is the same function that was already in the starting configuration. At most, we can get similar but different functions that are related to the original function: that's the case with small variations in the active site of an enzyme, which can change, even significantly, the affinity for specific substrates, even if the bulk of the functional information (the basic folding and structure and active site configuration) remains the same. In those cases, we can speak of simple functional shifts (usually implemented by simple variations, in the range of 1-4 AAs).

Optimization pathways, when they exist, are short, simple, and require a continuous functional landscape in the context of an already existing function. If the starting function is complex, as in all cases of proteins exhibiting more than 500 bits of specific functional information, there is no way to get it from an optimization pathway, because there is nothing that can be optimized: the starting function is completely out of the range of RV, and will never come into existence in an unguided system. gpuccio
Origenes: It's a simple trial and error pathway, but it requires some time to work. Why is it so hard for you to accept it? There's a massive body of literature written on that out there. Just read it yourself. I hope that this helps. Good luck. OLV
GPuccio: But it [natural selection] is used as a deus ex machina, an arbitrary oracle, to explain all that cannot be explained. In particular, to explain the huge amount of complex functional information that is observed in biological objects. Things that it cannot in any way explain. To pretend that the oracle can work, all kinds of weird things are assumed as true, without any attempt to verify if they are really true: for example, that complex functions can start as very simple functions, and that gradual functional pathways exist from simple functions to complex functions.
Indeed, it needs to be argued, rather than assumed, that there is a pathway from a simple solution of function A to a complex solution of function A. Intuitively, it seems likely to me that a simple solution is structurally unrelated to a complex solution. Arguably the problem of mobility can be solved by a round shape and a slope, but how is this conceivably a step on the alleged gradual pathway towards, say, a flagellum motor? Arguably time can be measured by a stick in the ground and the sun, but how can we conceive of stick & sun as a step on the alleged gradual pathway towards a watch? What is the gradual pathway from a counting frame (abacus) towards a computer? Origenes
To all: It's also interesting to note that the possible existence of imaginary, never-observed pathways, for which there is no rationale at all, remains their only argument. That's the only thing that has come from Felsenstein and Corneel. Again, look at my comment #412. They are invoking their oracle, hoping that prayers will make it real. gpuccio
To all: It's really telling what most commenters at TSZ are trying to do. They usually try to simply deny that functional information exists. Now, as the current thread has been initiated by Joe Felsenstein, who is much more reasonable than the average there, and who knows perfectly well that functional information exists and is important, and relies on a definition of functional information by none other than Szostak, they have changed their strategy. OK, functional information exists, but it seems that it is impossible, or useless, to measure it, or to assess if there are useful thresholds in its values that can inform us about its origin. Their arguments are pitiful at best. Again, I exclude Felsenstein, who is trying to discuss. But the others, who just state meaningless "ideas" about the 500 bit threshold, which is of course an arbitrary and very high threshold meant to ensure specificity, or simply rehash the Weasel, or the cheap tuxedo, are a complete disappointment. Good to know. gpuccio
Joe Felsenstein at TSZ:
There’s a conceptual problem there. A sequence can be conserved by natural selection even if the mutations in it are (say) only 1% lower in fitness. The mutant sequence could be the next-to-last step on a path leading to the current sequence. Once you get to it, slightly-lower fitnesses of mutants will eliminate them and prevent the current sequence from changing.
We have already discussed this argument a long time ago. Please, read my comment #412 here. You are doing it again: using NS as a deus ex machina to explain what cannot be explained. It is even worse. Your argument is conceived to justify the fact that no empirical evidence is found of the imaginary pathways. IOWs, you are presenting an unsupported idea which contains the justification for being unsupported. IOWs, you are presenting a non-falsifiable model. This is not science. gpuccio
Corneel at TSZ proudly retraces Dawkins' silly steps:
Or say we lay a blank sheet of paper on the table and wait for some invisible Designer to write the string down? What would that tell us? Right: nothing. So why should we disregard the effect of NS? You have the weasel program on your computer. Just feed in some more-than-106-character target string and watch "blind search and random sampling" shatter the 500 bit limit: TO BE OR NOT TO BE THAT IS THE QUESTION WHETHER TIS NOBLER IN THE MIND TO SUFFER THE SLINGS AND ARROWS OF OUTRAGEOUS FORTUNE OR TO TAKE ARMS AGAINST A SEA OF TROUBLES ** Gen: 1156 Dif: 0 Fit: 1.0000 Bene: 0.0125 Detr: 0.9625 Neu: 0.0250 Unchanged: 13 Whaddaya know: 168 characters in a little over a 1000 generations. It doesn't work without natural selection, Mung.
So, this is his idea of NS: having an oracle that already knows the solution. As I have already said, if you already have the full solution ready, why pretend to find it in 1156 generations? Isn't that completely stupid? This of course is not NS: it is only a natural renunciation of rationality! gpuccio
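For readers who have never looked inside such a program, here is a minimal sketch of a Weasel-style search (Python, with assumed parameter values; this is not Corneel's actual code). The point at issue is visible in the code itself: the "fitness" function compares every candidate to the complete, hard-coded target string, so the target information is supplied up front rather than generated by the search.

import random, string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # the full solution is supplied in advance
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    # "Fitness" is nothing but letter-by-letter closeness to the known target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in parent)

def weasel(copies=100, rate=0.05, seed=1):
    random.seed(seed)
    parent, generation = "".join(random.choice(ALPHABET) for _ in TARGET), 0
    while parent != TARGET:
        generation += 1
        # Keep the parent among the candidates so fitness never decreases.
        parent = max([parent] + [mutate(parent, rate) for _ in range(copies)], key=fitness)
    return generation

print(weasel())   # converges in a few dozen generations on a typical run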
Origenes: They are darwinian bits. There's a special flavour to them! :) gpuccio
Nonlin.org: I just cannot follow you. Not much to say. gpuccio
Reflecting on GPuccio's OP Isolated complex functional islands in the ocean of sequences: a model from English language, again, I find it ironic that, while arguing against ID and the 500 bit rule, Darwinians produce millions (zillions?) of bits of complex functional information in English language. How do they explain this fact? What exactly is, according to Darwinians, the difference between this and biological information in general? Why is brain chemistry, with its relatively short evolutionary history, capable of creating so much more information than 'non-human chemistry'? I am just asking … Origenes
gpuccio @412 You're very wrong! There is no such thing as "natural selection". All there is is Intelligent Selection, which is impotent, as demonstrated by humans being unable to "evolve" anything. http://nonlin.org/natural-selection/ Here's a summary for you:
1. Natural Selection concept fails since phenotype does not determine survival, which is also tautological with "best adapted"
2. "Blind, mindless, purposeless, natural, and process" qualifiers fail
3. Phenotype is an unstable infinite set (hence unknowable and theoretical)
4. Fitness concept is redundant since never defined independently of survival
5. "Selection" is Survival
6. The only selection is Intelligent Selection - always done by an Intelligent Selector
7. Selection is limited to a narrow set of adaptations - one cannot select what is not there
8. Selection and Mutations lack creativity, therefore cannot explain body designs
9. We do not observe "divergence of character" but 'limited variations around a mean'
10. Extinct organisms were not flawed and their features were not "selected away"
11. Intelligent Selection should replace Natural Selection but only if we ever transmutate organisms
12. Humans do not apply Natural Selection because it doesn't work
13. Designs must cross an inevitable optimization gap making evolution impossible
"Natural selection" proponents must answer these simple questions - pick any biologic entity including populations and give the 80/20 Pareto without too much accuracy or precision:
1. What is that biologic entity's phenotype?
2. What is its environment?
3. What is its fitness function?
4. What is the relationship between its phenotype, environment, fitness, and survival/reproductive success?
The five full retard claims of "natural selection":
1. "Design by multiple choice" is full retard
2. "Multiple choice from ALL random answers" is full retard
3. "Designing without trying" is full retard
4. "Self design" is full retard
5. "Design by incremental optimization" is full retard
Nonlin.org
To all: The problem with NS is very simple. It exists as a very limited process, well observed and studied in the few cases where it is really important. But it is used as a deus ex machina, an arbitrary oracle, to explain all that cannot be explained. In particular, to explain the huge amount of complex functional information that is observed in biological objects. Things that it cannot in any way explain.

To pretend that the oracle can work, all kinds of weird things are assumed as true, without any attempt to verify if they are really true: for example, that complex functions can start as very simple functions, and that gradual functional pathways exist from simple functions to complex functions. When IDists remark that no such pathways have ever been observed, the simple answer is: but have you proved mathematically that it is impossible? :)

The simple truth is that a really complex function (more than 500 bits of functional information) is such not because it has been optimized from some very simple and humble beginning, but because it does require 500 bits (at least) to be implemented. You cannot derive Excel from Word by simple byte substitutions, gradually increasing the function. Or, if you like, from any random non functional sequence of bits. You cannot derive a Shakespeare sonnet from Don Quijote (excuse me, David Berlinski! :) ). Or, if you prefer, from any random sequence of letters. You cannot derive UBR5 from ATP synthase, or, if you prefer, from any random sequence of AAs (or better, of coding nucleotides). That is true, whatever DNA_Jock, or Joe Felsenstein, or others may try to argue. It is true, and every one of us knows that it is true. Because nobody in his right mind would ever try to do that. All these discourses about optimizations, alternative solutions and imaginary pathways are only ruses to confound what should be obvious.

The complexity of known starting functions that are then optimized by NS is usually 1 AA, sometimes 2. Let's say that up to 5 AAs we can reasonably accept that it happens, even if exceptionally. 35 AAs is the number estimated in the Hayashi paper, which would require 10^70 starting sequences to be found. 624 (290 + 334) is the number of AAs conserved between the alpha and beta chains of ATP synthase in E. coli and in humans. 2227 is the number of AAs conserved between human UBR5 and 100 UBR5 sequences in fish. These are facts. How many gradual naturally selectable pathways have been proposed in reality to explain those facts? The answer is quite simple: none at all.

Of course NS exists. Under extreme environmental constraints, it can optimize the 1 AA starting function of penicillin resistance by adding 4-5 AAs. Or the 2 AAs starting function of chloroquine resistance by adding a couple of AAs. Nothing different. And what can it do to generate complex functions? Nothing at all. Because the starting function is too complex. Because the optimization is trivial, and anyway it requires the starting function.

So, what is NS in regard to complex functional information? A deus ex machina. An oracle. A myth. Nothing else. gpuccio
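As a rough back-of-envelope check on the numbers quoted above, here is a minimal sketch (Python). It assumes the simplest possible conversion, namely that a strictly conserved position tolerates only 1 of the 20 amino acids and therefore contributes at most log2(20) ≈ 4.3 bits; real per-position contributions are generally lower, which is consistent with gpuccio's later remark that his bitscore-based estimates tend to underestimate functional information.

import math

BITS_PER_CONSERVED_AA = math.log2(20)        # ~4.32 bits per fully constrained position

print(round(624 * BITS_PER_CONSERVED_AA))    # ~2697 bits: upper bound for the conserved ATP synthase alpha+beta core
print(round(2227 * BITS_PER_CONSERVED_AA))   # ~9625 bits: upper bound for the conserved human/fish UBR5 positions
print(round(math.log2(10**70)))              # ~233 bits: the search implied by Hayashi's 10^70 starting sequences
print(round(500 / BITS_PER_CONSERVED_AA))    # ~116 fully constrained positions are enough to cross the 500-bit line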
Origenes: I absolutely agree with you. I have nothing against the idea of many universes (that does not necessarily mean infinite universes). If there are reasons to believe such a theory, and facts that support it, that's fine. But an infinite multiverse used only to recycle the old idea of monkeys typing Shakespeare is a really silly solution. Monkeys were much better! :) gpuccio
VJTorley: Viewed in this way, if one views these zillions of other possible universes as other ways in which this universe might have been, it makes perfect sense (philosophically speaking, at least) to disregard the existence of a chemical pathway leading to the formation of the first cell in our universe, and to look at the set of all possible universes instead. One could thus reason that even if abiogenesis is chemically inevitable in our universe, that merely invites the question of how the initial setup and laws came to be “just right.” Of course, a multiverse theorist would respond that all these other “possible universes” are actually real, and that we just happen to be living in a universe where a pathway to life (and intelligent life) exists.
These zillions of other similar universes all suffer the same problem as ours. This means that, even if we accept their existence, the production of a complex function of 500 bits has not become any more likely. One universe or a zillion, WRT probability there is no difference at all, since each of those zillion universes is equally impotent in producing a complex function of 500 bits. The real problem does not change one bit. Similarly, if one toddler is unable to produce the general theory of relativity, for obvious reasons, then a zillion toddlers do not make its production any more likely. The cardinal point is that a toddler is incapable of producing the general theory of relativity. Here the number of toddlers is irrelevant. If it can be shown that our universe cannot, by far, produce a single complex function of 500 bits, then that impotency is shared by any similar universe.
VJTorley: After all, we could hardly be living in one where there was no pathway leading to life, could we?
This assumes what needs to be proved, namely, that such a pathway for undirected processes exists. Origenes
bill cole:
I think your concept of optimum is ok for single proteins such as enzymes but for multi protein complexes where the protein in question is binding with several proteins to perform a function it is not really coherent. The proteins where gpuccio is finding sequence preservation over time generally fall into this category.
I don't think DNA_Jock's concepts are valid for single proteins, and I have answered him in detail in this OP and in the following discussion. For me, there is no difference between single proteins and multiprotein systems. Of course, if a system is IC, the functional complexity must be computed for the whole system, and not only for each protein. IOWs, the probabilities multiply, so the bits are added. That's all. But if we have a functional complexity of 5000 bits for one system, it's not relevant whether the system is made of one protein (see for example UBR5) or of 5 proteins, each of them with a functional complexity of 1000 bits. It's the same thing.
I think the point the Mung is driving home is most relevant. If the total evolutionary resources over the last 4 billion years are 120 bits and gpuccio's design detection bound is 500 bits, there is plenty of room for yet to be identified islands of function, if they exist at all for these protein types.
Of course. I have already explained why the argument of independent solutions is completely irrelevant, both here and in my other OP: Isolated complex functional islands in the ocean of sequences: a model from English language, again. https://uncommondescent.com/intelligent-design/isolated-complex-functional-islands-in-the-ocean-of-sequences-a-model-from-english-language-again/ I have also explained here, at comment #122, why observing two independent and alternative solutions for the same function just makes things much worse for neo-darwinists. Of course, I don't expect DNA_Jock to agree: he will go on with his belief that alternative ways to measure time somehow make a watch likely in a non design system. gpuccio
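On the point just made about computing functional complexity for a whole IC system, here is a minimal numerical sketch (Python, with invented numbers): the probabilities of independently getting each required part multiply, so the corresponding bit values simply add.

import math

part_bits = [1000] * 5                     # assumed: five proteins, 1000 bits of functional information each
system_bits = sum(part_bits)               # 5000 bits, the same figure as a single 5000-bit protein
# Equivalently: -log2(p1 * p2 * ... * p5) = -log2(p1) - log2(p2) - ... - log2(p5)
print(system_bits)                         # -> 5000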
To all: Well, I shall wait for further input from Joe Felsenstein, whose conversation has been correct and interesting up to now. Frankly, I am not really interested in the contributions of many other people at TSZ, and I do not have the time to consider all of them in detail. If I catch something really interesting, of course, I will answer. gpuccio
Joe Felsenstein at TSZ:
“New” and “original” information is no part of Szostak’s definition of functional information. And in gpuccio’s terminology, “complex” does not mean what we usually think. The organism or molecular structure could be very simple, in colloquial terms, while satisfying the definition of “complex” that gpuccio uses, simply by having the set have probability less than 2^{-500}. And since the level of function in nearby sequences just outside the set might be nearly as high as in the set, there would be no implication that you need to get all parts exactly right to have any function.
That's because I use complex functional information to infer design, while Szostak does not do that. That requires some special attention in the application of the concept, to be sure that we will not have false positives. I have explained in detail what new and original mean, and why those two concepts are very useful to ensure a correct design inference. You seem not to understand what ID theory is about: we want to infer design correctly, when we infer it. No false positives. We don't want to infer design in all cases where things were designed. That is impossible. IOWs, false negatives are perfectly fine. That's why we set rules that ensure high specificity, but not high sensitivity. Frankly, I don't understand your statement that: "The organism or molecular structure could be very simple, in colloquial terms". An object with more than 500 bits of functional information is not simple, IMO. Of course, there are objects much more complex than that. But all of them are designed. Your concept of simplicity "in colloquial terms" is not clear at all. I have already answered in detail about the issue of function levels. gpuccio
Joe Felsenstein at TSZ:
In Szostak’s 2003 paper he makes it clear that the function is a number (say the rate at which a given reaction is catalyzed by that protein). The number exists for every sequence, and is not necessarily zero. You seem to be assuming that there is a set of sequences which has the function, and the rest do not have it. That is the case that gpuccio is interested in, and such cases can exist. However there can also be cases where the number that is the function is nonzero outside of the target set. Szostak asks us to set a threshold value of the function — the set which is the target is then all sequences whose level of function is above that. So lower nonzero levels of function can exist outside of that set, and there can be uphill paths into the set.
I have already answered this previously, but it can be useful to insist a little. The problem is not to have "nonzero functions". The problem is to have functions that are relevant in the context we are considering. That's why we set thresholds. So, for example, an ATP synthase must be able to generate ATP with some definite efficiency, to be useful. If a molecule were able to generate one molecule of ATP in one billion years, just to make an extreme and unlikely example, I really can't see how that could be useful to any living cell. As I have already said, for the neo-darwinian model it is essential that the function must be already present at least at a level which can give a reproductive advantage, otherwise it cannot be subject to NS. That is no small requirement, and one of the biggest limitations of NS. Of course, our thresholds are somewhat arbitrary, but that does not mean that they are not realistic.

When I compute functional information, I use an indirect method based on long evolutionary conservation. That means that the protein we are observing, as it is, requires the information that has been conserved to be functional. And, of course, I use a metric (the Blast bitscore) which almost certainly underestimates functional information. Now, if we look at the alpha and beta chains of the F1 subunit of ATP synthase, just to go back to an old example, you can see that there is a lot of conserved information there, and that it has been conserved for a very, very long time. Thanks to negative NS, of course.

Of course, darwinists will believe, with their usual blind faith, that those hundreds of conserved AAs can be found gradually, IOWs that there are many, many versions of the F1 enzymatic structure that can generate ATP, each of them differing from the older version by 1-2 AAs, each of them gradually more functional. That's what I call a myth. Because what reasons have we to believe that weird thing?

a) We have absolutely no rationale for that idea. As far as we know from what we observe, ATP synthase requires those hundreds of AAs to work as it does, because negative NS has not allowed deviations from that bulk of functional information. And that is perfectly reasonable, because a fine machine like the F1 subunit, a cylinder with three specific sites, each of them undergoing three different conformational states, in order, as the consequence of the rotation of a rotor which conveys the energy derived from the flux of protons between two different compartments, is certainly not a result that you can get with some arbitrary configuration of a few AAs. It's like a supremely refined machine, and it's absolutely rational that a lot of specific and precise information is necessary for it to work, at any level of efficiency. So, no rationale at all for believing in the existence of a lot of "simpler" and "gradual" versions.

b) We have absolutely no empirical evidence for their existence. None at all.

That's what I mean when I say that the existence of pathways to complex functions is a myth: a myth unsupported by either a rationale or any empirical evidence. A myth completely irrelevant to science. It is not necessary to prove mathematically that those pathways do not exist: there is no reason for them to exist, and they have never been observed. That's more than enough. And we have thousands of independent and unrelated complex functional proteins that need to be explained. NS can explain none of them. gpuccio
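For readers who want the Szostak-style definition being discussed in computable form, here is a minimal sketch (Python, with made-up numbers; this is the generic definition, not gpuccio's conservation-based procedure): functional information for a threshold Ex is minus the log2 of the fraction of sequences whose measured activity meets or exceeds Ex.

import math

def functional_information(activities, threshold):
    # I(Ex) = -log2( fraction of sequences with activity >= Ex )
    above = sum(1 for a in activities if a >= threshold)
    if above == 0:
        raise ValueError("no sampled sequence reaches the threshold")
    return -math.log2(above / len(activities))

# Toy illustration (assumed data): if 1 sequence out of 2**20 sampled sequences
# reaches the chosen activity threshold, the estimated functional information is 20 bits.
sample = [0.0] * (2**20 - 1) + [1.0]
print(functional_information(sample, threshold=0.5))   # -> 20.0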
gpuccio Here is an argument I made to DNA Jock. I would be grateful for your feedback.
I think your concept of optimum is ok for single proteins such as enzymes but for multi protein complexes where the protein in question is binding with several proteins to perform a function it is not really coherent. The proteins where gpuccio is finding sequence preservation over time generally fall into this category. I think the point the Mung is driving home is most relevant. If the total evolutionary resources over the last 4 billion years are 120 bits and gpuccio's design detection bound is 500 bits, there is plenty of room for yet to be identified islands of function, if they exist at all for these protein types.
bill cole
vjtorley at TSZ:
If you’re confining yourself to this universe without regard to any ways in which its initial conditions or laws could have been different, then I would agree it’s impossible to defend the 500-bit rule, as it stands.
Why? Counter-examples?
You mentioned ubiquitin. An ID proponent who was a “front-loader ” (say, someone like Mike Behe) could argue that even though it appeared long after the first living things on Earth, the initial conditions of the cosmos were deliberately set with an eye to guaranteeing its emergence approximately 2.7 billion years ago, about 11 billion years after the Big Bang. I don’t know if that’s gpuccio’s view.
Absolutely not, of course. I have never believed in front-loading. I believe in explicit design interventions in the course of natural history, as I have said many times. And I am not sure that Behe believes in front-loading. It is true that he considers it as a possibility in the last part of TEOE, but that does not mean that it is his real view of things. gpuccio
Joe Felsenstein at TSZ:
I don’t think that the argument involves choosing universes out of sets of possible universes. For example, gpuccio’s ubiquitin example is a protein that arose long after the origin of the universe, and long after the origin of life, as it’s present in all eukaryotes but not in prokaryotes. In discussion of the 500-bits rule, we are asking whether it applies in our universe, without regard to where else it might apply.
Perfectly correct! :) gpuccio
vjtorley at TSZ: May 22, 2018 at 4:25 pm Hi, VJ. How are you? I am not sure that I understand your reasoning here. I definitely agree with your penultimate paragraph: that describes my thought correctly enough, in brief. For the rest, I have never been interested in the multiverse idea. It's not that I don't believe that other universes exist: they may well exist, and they are probably designed, exactly like ours. The simple fact is that we don't know, and cannot know. At least for the moment. That said, the question is IMO completely irrelevant to my argument about biological design. Biological design is scientifically evident in this universe, and on this planet. That's more than enough, for a scientific theory. gpuccio
Neil Rickert at TSZ, again: "We need a very clear and precise definition of 'designed'." I have given mine a long time ago, in my first OP: Defining Design https://uncommondescent.com/intelligent-design/defining-design/
Design is a process where a conscious agent subjectively represents in his own consciousness some form and then purposefully outputs that form, more or less efficiently, to some material object. We call the process “design”. We call the conscious agent who subjectively represents the initial form “designer”. We call the material object, after the process has taken place, “designed object”.
gpuccio
Joe Felsenstein at TSZ:
Where it comes from is straightforward — it is descended from Seth Lloyd’s computation of the number of possible changes of state in the Universe from its beginning. It is used to argue that any random search that simply makes random samples will be unable to find any event whose probability is that small. It is used to rule out processes like random mutation because they are unable to find configurations that are that improbable. The argument is, however, unable to rule out natural selection as it does not carry out pure random sampling.
Perfectly correct! :) gpuccio
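For context on the figure referred to above: Seth Lloyd's estimate of the universe's total computational capacity is usually quoted as about 10^120 elementary operations (that specific number is not stated in the comment, so treat it as an assumption here). A one-line conversion shows why a 500-bit threshold sits beyond it.

import math

lloyd_operations = 10**120            # assumed figure: Lloyd's estimate of total state changes
print(math.log2(lloyd_operations))    # ~398.6 bits, comfortably below the 500-bit threshold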
Neil Rickert at TSZ gives us this pearl of thought:
If 500 bits of information reliably indicates design, but 499 bits doesn’t, then it must follow that 1 bit makes all the difference. (This is, roughly, the heap paradox).
So, apparently we should never categorize a continuous variable using a threshold. Good to know. He should probably share that concept with all the scientists who do exactly that, day after day. gpuccio
gpuccio
I wonder how Joe Felsenstein would try to explain that. Maybe just restating that he is not "very knowledgeable about biochemistry"?
I think Joe believes you have a legitimate argument or he would not take the time to challenge you. He is very interested in the subject of genetic information. The case Corneel is making is that you are bypassing natural selection. I have interpreted your hypothesis to mean that, if there are selectable sequences, they are part of the bit calculation. I also made the point to Corneel that he needs to show evidence that a protein family that has a unique sequence and function has a selectable path. I am glad you agree with my conclusion that evidence of precise preserved sequences is evidence for design. RMNS has no viable explanation of how these sequences formed in nature. bill cole
bill cole: "Finding highly optimized sequences in nature reinforces the design argument IMO." Yes, of course. And UBR5 amounts to something like 2500+ AAs of optimization! I wonder how Joe Felsenstein would try to explain that. Maybe just restating that he is not "very knowledgeable about biochemistry"? gpuccio
gpuccio
As you can see, I am trying to answer Joe Felsenstein directly.
That's great, thank you :-)
Have you seen my latest results about UBR5?
Yes, and it clearly reinforces your argument. If I think back to the Hayashi paper, a question in my mind is how the protein arrived at a configuration that Hayashi estimates would take 10^70 trials to achieve. This is less than 500 bits, but it is more than the estimated number of evolutionary trials at 10^43, or around 120 bits. Finding highly optimized sequences in nature reinforces the design argument IMO. bill cole
bill cole at #393: As you can see, I am trying to answer Joe Felsenstein directly.

You say: "I think you may be selling yourself short in saying that there is no mathematical confirmation, as there are clearly too many trials and too few resources if a system really contains 500 bits of information."

I am not selling myself short at all. I am not a mathematician, and I cannot provide a mathematical demonstration that complex functional information cannot be generated outside of design because it is mathematically impossible. I think it will probably be proved sometime. I think Dembski is really trying to do that, but I am not sure if he has succeeded. But that is not my concern. Of course, proving that it is empirically impossible in all known cases is another thing entirely. That is my concern, definitely.

You say: "The only question in my mind at this point is how accurate your measurement is, based on your method of choosing preserved sequences."

It is absolutely reliable. "Accurate" is not the right word, because I am not pretending that it is absolutely precise. But it does measure what it is intended to measure, and with great reliability. Indeed, I am rather sure that my method underestimates functional information.

Have you seen my latest results about UBR5? Here: Isolated complex functional islands in the ocean of sequences: a model from English language, again. https://uncommondescent.com/intelligent-design/isolated-complex-functional-islands-in-the-ocean-of-sequences-a-model-from-english-language-again/ at comments #85, 86, 108, 110, 119 and 124. gpuccio
Joe Felsenstein at TSZ: OK, I have finally read your article at TSZ. So I will try to answer your points. First of all, thank you for considering my arguments with serious attention. I appreciate that.

I would like to solve immediately one problem which is rather simple: yes, you are right in thinking that I do not rely on Dembski's Law of Conservation of Complex Specified Information in my reasoning. And, as I have already said in my comments #382 and #383, I don't want to give a mathematical theorem that demonstrates that complex functional information can only be generated by design. My reasoning is completely empirical. So, I think that answers your point 1 and Possibility 1. Of course, I agree with many of Dembski's ideas on other important points. And I am not saying that I disagree with his Law of Conservation of Complex Specified Information: I am simply saying that I do not rely on it for my arguments. OK?

So, now I will give brief answers to your final questions, and then add some reflections in more detail. Just to start the discussion.

Your questions:
1. Is your "functional information" the same as Szostak's?
2. Or does it add the requirement that there be no function in sequences that are outside of the target set?
3. Does it also require us to compute the probability that the sequence arises as a result of normal evolutionary processes?

My answers:
1. Yes, I think so. The fact that Szostak does not use it to infer design does not mean that the concept is not the same.
2. It is computed for one explicit definition of a function, including a definite level of it. Therefore, all the sequences that do not satisfy the definition are not in the target set. I think that, too, is the same as what Szostak suggests.
3. It only requires that there is no evidence that an evolutionary process can do it. Such evidence would falsify the theory and the procedure of design inference, as I have said many times.

Maybe the third point requires some more detail. As you certainly know, Dembski's explanatory filter has the explicit requirement that no known necessity mechanism can be responsible for what we observe. Of course, that is especially important for results based on order and regularity, and the problem is not really relevant for the type of information that we observe in language, software, machines and proteins. However, neo-darwinism has been claiming for decades that a special type of mechanism based on RV and NS, where NS is the necessity part, can explain that kind of functional information. If that were true, it would of course be a falsification of the design inference, at least for biological objects. That's why ID has to deal with the neo-darwinian model: to show that it is no credible explanation for the complex functional information in biological objects.

You say that you are not "very knowledgeable about biochemistry", and that you will "happily leave that argument to others". But it's an important part of the discussion. Why? Because my statement, the statement upon which ID is essentially founded, is that no object with 500 bits of functional information can be generated in a non design system. It is a strong statement, one that invites all to provide even one single counter-example. The argument is completely empirical: no such object has ever been observed to arise without a design intervention. You will not find any exception, anywhere. I have also explained that, to avoid wrong interpretations, we must refer to new and original complex functional information.
And I have explained in detail what they mean:
new = the sequence information must be unrelated to what already exists in the system
original = the functional specification must be a new function, and not only a tweaking of an existing function

I have also explained that order coming from necessity laws cannot be considered complex functional information (for example, an ordered sequence of heads which can be explained by an unfair coin). But these are all minor clarifications, just to avoid the usual misinterpretations of the concept. You may also want to read what I say at #382 about the computation of pi, just to have some other information about my position.

I think that my position is better represented in your Possibility 2. But with some important clarifications. I never, never use "function" as a generic word. Everything has some function. But that's not what is discussed here. I discuss complex functional information, and it is always computed for one explicitly defined function, including a minimal level of it.

You say: "gpuccio does not rule out that the region could be defined by a high level of function, with lower levels of function in sequences outside of the region, so that there could be paths allowing evolution to reach the target region of sequences." And then you quote some reflections of mine about that. But you seem not to understand my point. My point is that for complex functions there is no path that leads to them. It is true that we have to set a minimal level of function to define it and to compute the related functional information. But that is not to isolate a peak of high function from gradual lower levels. It's to define any relevant level of function, and distinguish it from what is essentially irrelevant function.

For example, look at my recent OP: Isolated complex functional islands in the ocean of sequences: a model from English language, again. https://uncommondescent.com/intelligent-design/isolated-complex-functional-islands-in-the-ocean-of-sequences-a-model-from-english-language-again/ Consider my example of paragraph P. Of course, we can define a broader function for it, like for example "being made of English words". That gives us a bigger functional island, of course. But, in our context, the important thing is that paragraph P must convey some specific and correct information about the issue that is debated there. It's of no use to have a paragraph that is made of English words, but does not mean anything. Or that just conveys information about a soccer game.

That's why we define function as an upper tail, as Szostak correctly suggests. Because we are making empirical science, not philosophy or mathematics. We are interested in the real thing, in results, not in abstract discussions. So, we define what is really functional in the context, and give a way to measure its minimal useful level. In the case of a neo-darwinian model, of course, the only useful function is:
a) that the variation can be naturally selected, AND
b) that the variation is building the final functional sequence.

There is not one single example of such a pathway that can lead to a new and original complex protein. Those pathways simply do not exist. They do not exist for language, as they do not exist for software. And they do not exist for proteins. This is not a theorem. It is an observed fact. Falsifiable, of course. Please, falsify it. In science, we base our inferences on facts. Not on theorems. Facts rule.

That's just to start. I will go on as soon as I have time.
By the way, I had asked you to comment on my model (the thief and the safes), which was an explicit criticism of a very important point proposed by you. I see no answer to that in your article. Why? And yet, it is absolutely relevant to the discussion here. gpuccio
gpuccio
I have not restricted anything. A 500 bits function requires at least 500 specific bits to be implemented. So, the function is not there if those bits are not there. If someone can show that the function can be implemented with, say, 100 bits, then the functional complexity of the function is 100 bits, and not 500 bits.
I think this is the key point. Selectable steps are just additional sequences that have function. You would subtract these from the total sequence space in order to get the functional sequence space. If that number is 500 bits then you can infer design. I think you may be selling yourself short in saying that there is no mathematical confirmation, as there are clearly too many trials and too few resources if a system really contains 500 bits of information. The only question in my mind at this point is how accurate your measurement is, based on your method of choosing preserved sequences. bill cole
Bob O'H @ 384: It seems this post would be better suited to gpuccio's "Islands of Function" OP. Anyway:
There is so little empirical support that it was reviewed 4 years ago. More "no empirical support" has accumulated since then.
Per the abstract:
The genotype–fitness map (that is, the fitness landscape) is a key determinant of evolution, yet it has mostly been used as a superficial metaphor because we know little about its structure.
It sounds like your citation is confirming that which you object to, admitting that at the time of this "review", they knew practically nothing.
This is now changing, as real fitness landscapes are being analysed by constructing genotypes with all possible combinations of small sets of mutations observed in phylogenies or in evolution experiments. In turn, these first glimpses of empirical fitness landscapes inspire theoretical analyses of the predictability of evolution. Here, we review these recent empirical and theoretical developments, identify methodological issues and organizing principles, and discuss possibilities to develop more realistic fitness landscape models.
Ok, so they're finally getting around to hammering it out. That's excellent. So, are they finding handy ladders? Or are they finding islands of function? LocalMinimum
Bob O'H:
1. ID is a theory of intelligent design, not evolution.
And yet ID is OK with evolution by design being able to produce IC
What is the clear rationale, supported by known facts, that says that an intelligent designer can't mimic evolution?
What does that even mean? Clearly, after all of these years, you still don't know what is being debated here. Again, if the intelligent designer mimicked unguided evolution, then there wouldn't be any evidence for ID. And if unguided evolution can produce what ID claims required a designer, the design inference is falsified due to Newton's four rules, i.e., science 101. ET
Bob, I only read the last few posts, but don't you have it backward? You seem to be asking how we can know when something is designed, even though we make design inferences all the time, and ID merely attempts to quantify the qualities of design according to scientific standards. Shouldn't the question you are asking be how nature can turn chaos into high complexity, and why we should believe it can? tribune7
Bob O'H at #384 and #386. You say: "What is the clear rationale, supported by known facts, that says that an intelligent designer can't mimic evolution?" Of course an intelligent designer can mimic unguided evolution, if he so decides. In that case, he will design only simple microevolutionary events, so that his design is not detectable. And so? Then you say: "unless ID specifically claims that an intelligent designer doesn't mimic unguided evolution (and I've been told repeatedly that ID says nothing about the designer), this can't be a falsification of ID." As said many times, a falsification of ID is to show that some non design system can generate an object exhibiting complex functional information, IOWs an object which would be considered designed according to the ID procedure. IOWs, a false positive. I really cannot believe that you still stick to such blatant errors of reasoning. I believe you are in good faith, and I think you are intelligent, so I really cannot understand why it happens. gpuccio
Bob O'H at #384: Sometimes it seems that you don't even try to understand what we are discussing. If you read (and understand) my comments #382 and #383, it should be easy to see that I am discussing the 500 bits rule, as quoted by Joe Felsenstein. I quote the relevant part:
So, if I observe a function that requires 500 bits to be there, there is no gradual way of implementing it. As, in the thief example, there is no gradual way to find the key to the big safe by step by step attempts. The same is true of a new protein, unrelated to existing ones at the sequence level, and with a new function: if the transition to the new functional protein requires at least 500 new bits of functional information, it cannot be achieved by gradual increasingly functional steps. Like the key for the big safe. Of course, darwinists can imagine that some pathway (ladder) exists, maybe passing through completely unrelated functions. But there is no reason to believe such a weird idea, and it has never been observed. IOWs, it is a myth, with no rationale and no empirical support.
IOWs, we are talking about a selectable pathway to a 500 bit new function. In answer to that, you quote a paper which has nothing at all to do with that question. I quote from the paper:
Weinreich and collaborators demonstrated the implications of sign epistasis by constructing and analysing a fitness landscape that involved five mutations in the beta-lactamase TEM, which collectively gave rise to bacterial resistance to a novel antibiotic [26]. Only 18 of the 120 possible 5-step mutational trajectories from wild type to high-resistance enzyme were accessible under strong selection, and the single most likely trajectory would be used in almost half of the cases (discussed in detail below).
This is the kind of "landscape" that is discussed in the paper: microevolutionary landscapes, where a few simple transitions tweak a simple starting function. In the case of the beta-lactamase, a single starting mutation confers the function, and 4-5 selectable mutations tweak it. This is the exact scenario that I have discussed in detail in my OP: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondescent.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ Again from the paper you quote:
The second approach involves the systematic analysis of all possible combinations of a small, predefined set of mutations (FIG. 2). This approach explores a tiny part of genotypic space, but the information obtained is complete and allows the probability of mutational trajectories to be quantified and compared. Below, we focus on systematic studies that adopt the second approach and their use in analyses of evolutionary predictability. Currently, there are <20 systematic studies of empirical fitness landscapes, but this number is rapidly growing [10,27]. These studies analyse interactions among three [17] to a maximum of nine mutations [28], which occur either in a single gene [17,26,28-38] or operon [39], or across genes in a bacterial [40,41], fungal [22,42] or fly genome [43].
As you can see, all those things have no relevance at all to what I was discussing. gpuccio
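As a side note on the kind of counting quoted from the Weinreich work, here is a toy sketch (Python, with invented fitness values, not the real TEM beta-lactamase data): 5 mutations can be acquired in 5! = 120 orders, and an order counts as accessible under strong selection only if fitness rises at every step.

from itertools import permutations

MUTATIONS = "ABCDE"

def fitness(genotype):
    # Invented toy landscape with sign epistasis: mutation B hurts unless A is already present.
    f = len(genotype)
    if "B" in genotype and "A" not in genotype:
        f -= 2
    return f

def accessible(order):
    # Under strong selection every step along the order must strictly increase fitness.
    steps = [set(order[:i]) for i in range(len(order) + 1)]
    return all(fitness(b) > fitness(a) for a, b in zip(steps, steps[1:]))

orders = list(permutations(MUTATIONS))
print(len(orders), sum(accessible(o) for o in orders))   # -> 120 60 on this toy landscape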
Bob O'H: Bob, you are mistaken, ID does make claims about unguided evolution. Perhaps you have not read the following section of the uncommondescent website: ID Defined. For your convenience, the most relevant part:
"The theory of intelligent design (ID) holds that certain features of the universe and of living things are best explained by an intelligent cause rather than an undirected process such as natural selection.
Origenes
Origenes -
1. ID is a theory of intelligent design, not evolution.
2. unless ID specifically claims that an intelligent designer doesn't mimic unguided evolution (and I've been told repeatedly that ID says nothing about the designer), this can't be a falsification of ID.
Bob O'H
Bob O'H @384
GPuccio: We have no reasons at all, either rational or empirical, to believe that NS can do that [create a 500 bits function]. Of course, the fans of NS can try to show that it can do it. That would be a faslification of ID.
Bob O'H: What is the clear rationale, supported by known facts, that says that an intelligent designer can't mimic evolution?
A misguided question for two reasons: 1. ID claims that unguided evolution cannot produce a 500 bits function, so, according to ID, there is nothing to "mimic." 2. ID does not claim that an intelligent designer mimics unguided evolution. Origenes
Of course, darwinists can imagine that some pathway (ladder) exists, maybe passing through completely unrelated functions. But there is no reason to believe such a weird idea, and it has never been observed. IOWs, it is a myth, with no rationale and no empirical support.
There is so little empirical support that it was reviewed 4 years ago. More "no empirical support" has accumulated since then.
Not at all. We have no reasons at all, either rational or empirical, to believe that NS can do that. Of course, the fans of NS can try to show that it can do it. That would be a falsification of ID. As discussed recently with Bob O'H, that's the reason why ID is absolutely falsifiable.
What is the clear rationale, supported by known facts, that says that an intelligent designer can't mimic evolution? Bob O'H
bill cole: A few more reflections.

Joe Felsenstein says: "Or has gpuccio restricted the 500-Bits-Rule somehow, such as requiring that all sequences outside of the target set have no function at all?"

I have not restricted anything. A 500 bits function requires at least 500 specific bits to be implemented. So, the function is not there if those bits are not there. If someone can show that the function can be implemented with, say, 100 bits, then the functional complexity of the function is 100 bits, and not 500 bits. So, if I observe a function that requires 500 bits to be there, there is no gradual way of implementing it. As, in the thief example, there is no gradual way to find the key to the big safe by step by step attempts. The same is true of a new protein, unrelated to existing ones at the sequence level, and with a new function: if the transition to the new functional protein requires at least 500 new bits of functional information, it cannot be achieved by gradual increasingly functional steps. Like the key for the big safe. Of course, darwinists can imagine that some pathway (ladder) exists, maybe passing through completely unrelated functions. But there is no reason to believe such a weird idea, and it has never been observed. IOWs, it is a myth, with no rationale and no empirical support.

Joe Felsenstein also says: "Or has he dodged the whole issue by only defining CFI to be present if we already know that natural selection cannot reach the set?"

Not at all. We have no reasons at all, either rational or empirical, to believe that NS can do that. Of course, the fans of NS can try to show that it can do it. That would be a falsification of ID. As discussed recently with Bob O'H, that's the reason why ID is absolutely falsifiable. But there is no reason that we have to demonstrate (mathematically) that NS cannot do it. As already said, we don't need a mathematical falsification to ignore a myth which is not supported either by reason or by facts.

"Arguing one case, as you do, does not address the issue of whether the 500-Bits Rule is valid in all cases."

I am not "arguing one case". I am arguing that the 500-Bits Rule is valid in all known cases. I am afraid that Joe Felsenstein is again confused about the nature of empirical science: empirical science is not mathematics. In empirical science, an explanation is not interesting just because it has not been mathematically proven impossible. An explanation is interesting only if it has explanatory power, IOWs if it is suggested by a clear rationale, and if it is supported by known facts. Neither thing is true for NS as an explanation of complex functional information. Both things are true for design as an explanation of complex functional information. gpuccio
bill cole: The answer is rather easy: The 500 bit rule is an empirical observation. The connection between functional complexity and design is an empirical observation. There is not one single known counter-example where 500 bits of new and original functional information can arise without any conscious design intervention.

The explanation is simple: new and original complex functions can never be reached by step by step increases of function. Being an empirical observation, there is no need of any "mathematical proof". It just works in all known cases.

The only case where functional complexity can increase without any new conscious intervention is a computational system which has already been designed. For example, as I often say, a program that can compute the digits of pi will output, in time, increasingly complex outcomes (a greater number of digits of pi). But there is no increase of the functional information in the system, because the Kolmogorov complexity of the system remains the same. In that example:
a) The functional specification has already been set: it is not "original".
b) The increase of complexity in the outcome is computationally achieved, and the computation method is already embedded in the system (designed).

The old procedure outlined in Dembski's explanatory filter (excluding cases where the result can be achieved by a necessity mechanism operating in the system) is more than enough to eliminate those cases. But a program that has been designed to compute the digits of pi can only do what it has been designed to do. It cannot program a spreadsheet, or demonstrate a theorem, or anything else. IOWs, a new and original function cannot arise from an existing complex function, completely different in specification and implementation. There is not a mathematical proof of that (at least, I cannot provide one). But it is empirically true.

The idea that 500 bits of new and original complex information can be generated by a step by step ladder of increasingly functional states is simply a myth: something that has never been observed (fact), and never will (prediction). We don't need a mathematical proof to ignore a myth: a myth is simply irrelevant in empirical science.

By the way, has Joe Felsenstein answered my argument about the thief? Has he shown how complex functional information can increase gradually in a genome? Or does he think that we need a mathematical proof that my thief will never find the key to the big safe, and that he should rather stick to working on the many smaller safes? Just to know. (For those who have not followed the thief discussion, please look at my comment #65 here, and the quoted discussion in the Ubiquitin thread). gpuccio
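To put the pi example above in computable terms, here is a minimal sketch (Python, using Machin's arctangent formula; the function name and the number of guard digits are arbitrary choices): a few fixed lines of code can emit as many digits of pi as requested, so the output keeps growing while the program that specifies it does not, which is the Kolmogorov-complexity point being made.

def pi_digits(n):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239), in fixed-point integer arithmetic.
    def arctan_inv(x, scale):
        # arctan(1/x) multiplied by 'scale', via the alternating Taylor series.
        term = scale // x
        total, k, sign = term, 1, 1
        while term:
            term //= x * x
            k += 2
            sign = -sign
            total += sign * (term // k)
        return total
    scale = 10 ** (n + 10)                    # 10 guard digits to absorb truncation error
    pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return str(pi_scaled)[:n]                 # "3141592653..." (decimal point omitted)

print(pi_digits(30))                          # -> 314159265358979323846264338327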
gpuccio Here is Joe's response to me. Very interesting discussion.
Joe Felsenstein May 24, 2018 at 1:10 am colewd, You are arguing one case, in a Michael Behe style argument. But the issue I am raising is whether there is some mathematical proof that all cases where we can have a set of sequences that have functional information greater than 500 bits cannot be reached by natural selection acting on less-functional sequences that are outside the set. Is there a mathematical proof? Something like William Dembski’s Law of Conservation of Complex Specified Information? (Like his, but not the same — his does not do the job). Or has gpuccio restricted the 500-Bits-Rule somehow, such as requiring that all sequences outside of the target set have no function at all? Or has he dodged the whole issue by only defining CFI to be present if we already know that natural selection cannot reach the set? Arguing one case, as you do, does not address the issue of whether the 500-Bits Rule is valid in all cases.
bill cole
gpuccio Here is a piece from an op by Joe Felsenstein that I commented on. Any thoughts would be appreciated.
Joe Felsenstein:
We are asking here whether, in general, observation of more than 500 bits of functional information is "a reliable indicator of design". And gpuccio's definition of functional information is not confined to cases of islands of function, but also includes cases where there would be a path along which function increases. In such cases, seeing 500 bits of functional information, we cannot conclude from this that it is extremely unlikely to have arisen by normal evolutionary processes. So the general rule that gpuccio gives fails, as it is not reliable.
Bill:
In the cases that gpuccio supplied, the proteins were part of a multi-protein complex. They bind to other proteins and support the function, or they don't. Their sequence specificity is dependent on the proteins they bind with. If function here is either working or not working, how would you argue there is any hill to climb?
bill cole
DATCG (377): Very interesting. Thanks. OLV
gpuccio (373): That’s fascinating indeed. Thanks. OLV
OLV, Re: lncRNA and Epigenetics, you might enjoy the PDF link at the bottom to a presentation, or collection of slides, by Professor John Mattick. He's been ahead of the curve and out front on the failures of the Central Dogma and gene-centric views. This covers a bit of history and important progress. If interested, he reviews paper highlights, for example on lncRNA and other epigenetic factors, RNA editing related to brain function, cancer, etc. As well as ALUs, a really interesting finding about halfway down. Search on ALU in the presentation. Most papers are several years old, but his interpretations are interesting in the highlights. Nothing to do with the Central Dogma, and you can see why the title states that assumptions in the past were wrong. But what do we often read today, still, from the neo-Darwinist faithful? They're still relying on antiquated beliefs and assumptions that were/are wrong.

The slides are from 2013, and the title is golden: Most Assumptions in Molecular Biology are Wrong. Clear cut and to the point. How refreshing!

What Darwinists dismissed for so long as "Junk" DNA will become a more important driver of new medical treatments, especially individual Genomes/Epigenomes. Why? Because these "non-coded" regions are abundantly involved in regulatory functions and often cause disease if mutated. And our individual epigenomes differ in key areas of erroneously categorized "JUNK" DNA.

What follows is a frank assessment of the history of wrong assumptions:
The Central Dogma (Crick, 1958) refers to the flow of genetic information from DNA > RNA > protein. The assumption, based on studies of the lac operon in E. coli, has been that genes are synonymous with proteins and that most genetic information, including regulatory information, is transacted by proteins. This protein-centric view reflects a mechanical orientation and has led to several subsidiary assumptions, despite a number of subsequent surprises that should have given pause for thought.
Agreed, this is why many see a need for Extended Evolutionary Synthesis at least, or better, Modern Synthesis replacement! As Denis Noble has argued.
Surprise #1: Genes in humans and other complex eukaryotes are mosaics. Interpretation: Introns, despite the fact that they are transcribed, are ‘junk’.
Golden! I went looking for function in Introns in Gpuccio's Spliceosome OP on Alternative Splicing and found it. Surprise? So what - Darwinist answer? "JUNK" Ooof that hurts!
Surprise #2: Eukaryote genomes are full of transposon-derived sequences. Interpretation: These sequences are mainly non-functional ‘selfish’ DNA. (!)
Again, non-functional answer? Yes, and "selfish" oy!
Surprise #3: Gene number does not scale with developmental complexity. Interpretation: Combinatorial control of transcription, alternative splicing etc. can explain ….?
By ignoring non-coded regions as "JUNK" DNA, they missed the bigger picture of complexity and regulatory control systems.
The genetic basis of human development - Humans (and other vertebrates) have approximately the same number of protein-coding genes (~20,000) as C. elegans - Most of the proteins have similar functions from nematodes to humans, and many are common with brewer's yeast - Where is the information that programs our complexity?
That last one is a good question! Gee, what about "JUNK" DNA regions? On Slide 4 (page 4).
- The biggest surprise of the genome projects was the discovery that the number of orthodox (protein-coding) genes does not scale strongly or consistently with complexity: The proportion of noncoding DNA broadly increases with developmental complexity (See Graphic Scale of Non-coded regions increasing up thru Vertebrates)
Hmmmm... seems like a good place to look for function! Also, check out Slide/Page 9 for intergenic regions of "gene deserts," then Table 2 page 11, Functionality of ncRNAs. He ends with the following future scenario..
Within a decade or two, individual genome sequences will be part of everyone’s medical record, and be integrated with other data in mobile electronic records that are both personal and part of larger databases that are used to inform health economics, insurance/underwriting, strategies for reducing disease burdens and costs, and deployment of resources.
The PDF Link: Most Assumptions in Molecular Biology are Wrong . DATCG
LocalMinimum @372, Nice :) Electrostatics came to mind for the conformational forces involved in protein folding, but I didn't bother to pursue it. Which explains why I was always replacing my starter in high school. My friend would often jump the starter, or was it the solenoid? Ah, solenoid. Talk about "Junk" - my old cars during high school! And yet, they still had function ;-) Usually the function was to deplete the holdings in my wallet! DATCG
LocalMinimum: "Sounds like a combination of a keyhole and a transport solenoid, using electrostatic tumblers instead of a magnetic field to carry the key and an attached package straight through the hole." Well, that's a metaphor! :) Thank you! gpuccio
DATCG: Extremely cool! :) The Nuclear Pore Complex is one of the wonders of the eukaryotic cells. But I was not aware of the “transport paradox”, and of the possible role of IDPs in explaining it. Great stuff. I will read it very carefully. :) gpuccio
OLV at #369: Very interesting paper. The study of RNA modifications is really still in its infancy, and I am sure that we will see great things in this field in the next few years! :) gpuccio
DATCG @ 370: Sounds like a combination of a keyhole and a transport solenoid, using electrostatic tumblers instead of a magnetic field to carry the key and an attached package straight through the hole. LocalMinimum
OLV @369, Cool, more regulatory control! Ha! :) Say it ain't so. Frontiers is fun stuff. Thanks, I'll file it away for a look when I have time. I noticed they mentioned lncRNA. CircRNA is another interesting RNA involved in brain function, also associated with disease if mutated, that will most likely be wrapped up in the Epitranscriptome. Along with many others. It's funny that the authors quote Darwin at the end in an attempt to tie it to neo-Darwinist story-telling. They based this not upon knowledge, but upon "Darwin-of-the-Gaps" story telling. It could as easily be common design techniques. The Epigenome and regulatory code, and now the Epitranscriptome, scream design. Regulatory control systems have a need to Know - up front - for decision processing. It's not a system designed for friendly mutation. This is far from anything related to Darwinism or neo-Darwinian blind, mutational events. There's a reason some Darwinists are so upset with the ENCODE project, with Epigenetics, and with function within regions formerly declared "JUNK". The more function there is - the more Editing, Splicing, and multi-functional, overlapping regions - the less room there is for the fudge factor of random mutations being an innovator of novel forms. DATCG
Gpuccio, I apologize for going off topic in this comment, but hopefully it is a bit on Target ;-) You originally linked to a paper in your Ubiquitin OP, comment #10, on Design Principles of Protein "Disorder" facilitating specificity for substrates: Design Principles Involving Protein Disorder Facilitate Specific Substrate Selection and Degradation by the Ubiquitin-Proteasome System* The resulting discussions and additional research papers on IDRs/IDPs convinced me they are a good example of Design features in the cell. IDPs allow a flexible folding solution that is conditional, with specificity. Curious, I wondered how else these Design Principles of IDPs could be utilized in cellular processing. Search results turned up the Nuclear Pore Complex (NPC): a controlled entry gateway using IDPs that verify cargo transport into and out of the nucleus. The NPC defends entry against viruses while allowing approved cargo to be transported to the interior of the nucleus... https://www.sciencedaily.com/releases/2018/03/180327132011.htm Summary:
Cells can avoid 'data breaches' when letting signaling proteins into their nuclei thanks to a quirky biophysical mechanism involving a blur of spaghetti-like proteins, researchers have shown.
The "quirky" mechanisms are Intrinsically Disordered (badly named) Proteins. Conditional and Flexible Folding Proteins. Continued...
In every human cell, all of the body's blueprints and instructions are stored in the form of DNA inside the nucleus. Molecules that need to travel in and out of the nucleus -- to turn genes on or off or retrieve information -- do so through passageways called nuclear pore complexes (NPCs). Traffic through these NPCs must be tightly controlled in order to prevent DNA hijacking by viruses or faulty functioning as in cancer. To travel through NPCs, many molecules must be attached to proteins called transport factors (TFs), which act as shuttles that the NPC recognizes. But the NPC faces a challenge: It must accurately recognize and bind to TFs to let them through without admitting unwanted traffic, but it must let them through quickly -- in a matter of milliseconds -- in order for the cell to be able to do its duties. Proteins known to accurately bind to specific molecules, like antibodies, normally stay stuck to their targets for periods of up to months. "How on Earth do you have the kind of specificity that we see in protein-protein interactions like antibodies, and yet have the kind of speed that we see with water off a Teflon pan?" asked Michael Rout, professor at Rockefeller University who was one of the co-lead authors of the work.
Voila - IDPs - Conditional and Flexible Folding Proteins. IDP - does not capture the functional aspect of these wonderfully designed proteins. continued...
They found that the key to this interaction being so specific, yet fleeting, was in many quick, transient contacts between transport factors and FG Nups. Similarly to the threads and hooks of Velcro, each amino acid pair of the FG Nup region only attached to the transport factor very weakly, with an overall result of affinity between the two partners; but unlike Velcro, the partners were not stuck together longer than necessary for the transport factor to travel through the nuclear pore. "I can't think of any analogy in normal life that does what this does," Rout said. "You've got this blur of (amino acids) coming on and off (the transport factor) with extraordinary speed."
Another analogy might be a password/ID entry system. Like an ID card for employees or hotel guests who must gain entry with the slide of a card: flexibly programmable, quick, yet specified, and able to be pre-programmed for different access levels to different departments or floors (cells), etc., keeping out intruders. If flexible IDPs were not available, what would happen? Rigid protein folding might shut the system down. Like inefficient hard-coding, rigid folds for dynamic interactions would be inefficient, a pain to maintain, hurting resources and inhibiting ease of modular and functional expansion. It appears to be a designed system for modular functionality. Flexible, yet specific. IDPs give greater efficiency in the Code. From a Design heuristic, it makes sense, as in the NPC, the UPS, and other functional processing systems for signal recognition. A one-to-many or many-to-one relationship is a requirement for a fast, flexible approach. IDRs and IDPs meet those criteria. I think looking for Design in cellular systems pays off. The paper link... http://www.jbc.org/content/293/12/4555.full Abstract:
Intrinsically disordered proteins (IDPs) play important roles in many biological systems. Given the vast conformational space that IDPs can explore, the thermodynamics of the interactions with their partners is closely linked to their biological functions. Intrinsically disordered regions of Phe–Gly nucleoporins (FG Nups) that contain multiple phenylalanine–glycine repeats are of particular interest, as their interactions with transport factors (TFs) underlie the paradoxically rapid yet also highly selective transport of macromolecules mediated by the nuclear pore complex. Here, we used NMR and isothermal titration calorimetry to thermodynamically characterize these multivalent interactions. These analyses revealed that a combination of low per-FG motif affinity and the enthalpy–entropy balance prevents high-avidity interaction between FG Nups and TFs, whereas the large number of FG motifs promotes frequent FG–TF contacts, resulting in enhanced selectivity. Our thermodynamic model underlines the importance of functional disorder(flexibility) of FG Nups. It helps explain the rapid and selective translocation of TFs through the nuclear pore complex and further expands our understanding of the mechanisms of “fuzzy” interactions involving IDPs.
( ) emphasis mine Not so "fuzzy" per se from a Design perspective. Fuzzy due to current technology and the unknown. But it's clearly a highly flexible structure for a purpose. It must be able to identify quickly what is coming through the entry. NPC and IDPs - really cool research to find based upon Design Principles. DATCG
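A rough way to see the qualitative point of the abstract, for readers who like numbers: the sketch below is a toy probability calculation in Python, with purely assumed, illustrative values rather than measured FG Nup affinities, and it is not a model of the thermodynamics in the paper. It only shows how many individually weak, transient contacts can still add up to a high chance that the transport factor is engaged at any instant, even though every single contact stays weak and short-lived.

```python
# Toy sketch with assumed numbers (NOT measured FG Nup affinities).
# Treating each FG-motif contact as an independent event, the chance that at
# least one contact is engaged at a given instant grows quickly with the
# number of motifs, while each individual contact remains weak.

def engagement_probability(p_single: float, n_motifs: int) -> float:
    """P(at least one of n independent weak contacts is formed)."""
    return 1.0 - (1.0 - p_single) ** n_motifs

for n in (1, 5, 10, 20, 40):
    print(f"{n:>2} motifs at 5% each: {engagement_probability(0.05, n):.2f}")
# 40 motifs at 5% per-motif occupancy already give ~0.87 overall engagement.
```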
DATCG, Yes, modern biology shows many fascinating things that increasingly point in one direction, like this for example: Epitranscriptomics. OLV
Gpuccio, Thanks for your comments. Yep, "directed evolution" by an intelligent designer ;-) in a controlled setting. The writer may assume his readers understand, but good to point out.
A common trick is to imagine that, even if some function is clearly not selectable (see Szostak’s ATP binding), some special contexts (never really detailed) at some special points of evolutionary history could, in principle, make it selectable. That is, of course, false reasoning, ad hoc reasoning.
Yes, very common, and used quite often as a bludgeon: that Design Theorists simply cannot imagine..., or the false accusation of an argument from incredulity. It's not that we cannot understand it, or that we think it too complex. It's that we do not accept Darwinian spinning of magical tales. This happens consistently. A fairy-tale of some imaginary event in the past that we cannot confirm is spun. A Darwinian unicorn fills up the gap in their imaginative spinning. If we do not accept the fairy tale as factual or probable, we're accused of lacking imagination, or of the divine fallacy. Not true of course. A false accusation. Meanwhile, they go on spinning. You mentioned this same type of rhetoric in the Ubiquitin OP as "Darwin-of-the-Gaps," and said you don't use it against Darwinists. I understand why. But I think it apropos to point out that, looking back at the Darwinian history of spin, what we see is imaginary tales filling gaps of knowledge where we have little, if any, actual understanding of the processes. But they can "imagine" evolving scenarios. OK, I love creativity in thought, especially in understanding complex problems, reverse engineering the known, and speculating on the unknown. Imagination is critical in theory and science, but it's not a one-way, materialist-only road. "Darwin-of-the-Gaps" fairy tales cannot replace observable science or add imaginary tales as factual steps. The grand imagination of Darwinists often turns upon a grand illusion, not factual speculation, leading in time to failure: the majority of DNA dismissed as "JUNK," where we now find more function daily, 24/7, around the world; the Central Dogma; and so on. The materialist, Darwinian worldview largely depends on the unknown - on not knowing what they don't know. And when surprises happen that favor Design, like more function in "JUNK" DNA, more spin ultimately follows, and the goalposts move.
To provide, of course, a completely useless protein in the end.
Yep, and I've seen you explain this over and over to Darwinists. But they "imagine" differently, based not on logic but on wishful thinking. Well, my imagination of Design Principles fills the void as well. In the case of Intrinsically Disordered Regions and Proteins, I went looking for functional, flexible design architecture utilizing IDRs and IDPs. :) Will follow up later on what I found. DATCG
gpuccio(365): “ad hoc oracles invoked to justify a failed theory.” I like that very quotable phrase! OLV
DATCG at #364: Yes, you are right. They are saying the same things that I have said many times. The only point that they are probably missing, an important point indeed, is that Szostak's protein, the only one about which we find real information in the paper, is a result of directed evolution: it is different from the original protein in the random library. The only information we have about the original random protein is that it could be "selected" by an ATP column: IOWs, it must have some ATP binding, even if really weak. The rest has been added by cycles of mutation and intelligent selection (again, by ATP columns). Therefore, the only result in Szostak's paper is that in a random library of 80 AA long sequences, we can find one or a few sequences that can be selected for a specific, even if very weak, binding, and that the initial biochemical affinity can be potentiated by intelligent engineering, by rounds of mutation and selection. To provide, of course, a completely useless protein in the end. gpuccio
DATCG: Very interesting article. Yes, the comments about Szostak's ATP paper are similar to what I have said. And Tawfik's work about protein function and fitness is certainly very interesting. In general, what is missing in many of these analyses is a clear distinction between function and naturally selectable function. There seems to be a generic faith that if some function can be found in the lab, or even in vivo, it could be naturally selected in the wild. Of course, that is not true. Not at all. NS requires much more than generic function: it requires function that can affect reproduction, and can be fixed. The set of naturally selectable functions is, of course, a very tiny subset of the set of biologically detectable functions. A common trick is to imagine that, even if some function is clearly not selectable (see Szostak's ATP binding), some special contexts (never really detailed) at some special points of evolutionary history could, in principle, make it selectable. That is, of course, false reasoning, ad hoc reasoning. The examples of NS we usually observe are always situations of extreme environmental pressure, very specific, like massive antibiotic treatment. Random variations in the general environment that could suddenly promote potential non-selectable functions really seem to be ad hoc oracles invoked to justify a failed theory. By the way, for Szostak's ATP binding even that trick seems impossible to imagine: who can hypothesize a credible biological scenario where a protein which can only subtract ATP from the environment becomes really useful? gpuccio
Mung, Gpuccio Double standards? Say it ain't so ;-) Gpuccio, here follows the review of Szostak's ATP experiments at the link I posted above. If you have time, I'm curious whether you agree with its general assessment and how it matches your thoughts on Szostak's results. update: formatting problems here with superscripts, will try to edit, but for clarification if needed, see the paper.
This issue has other faces, some of them numerical. Salisbury raised his objection based upon a calculation, however rough. Scientists knew that the space of possibilities was immense, but no one really knew how rare or common anything was. What was known was only that, generally speaking, good proteins are rare. The state of uncertainty has for the most part remained unchanged, but there are now interesting hints. Experimental results concerning the rarity of proteins range greatly. Differences depend upon select example systems, and some extrapolation is involved. Nonetheless, one of the most favorable and liberal estimates is by Jack Szostak: 1 in 10^11. [42] He ascertained this figure by looking to see how random sequences—about eighty amino acids in length, long enough to fold—could cling to the biologically crucial molecule adenosine triphosphate, or ATP. At first glance, this is an improvement over Salisbury’s calculations by 489 powers of ten. But while an issue has been addressed, the problem has only been deferred. Despite persistent hopes to the contrary, and despite vague popularizations, it is completely unthinkable that the operations of a cell could be crammed into a single molecule. No one protein is superb enough to control its own synthesis, metabolism, protection, and translation. The best experiments will inevitably have an experimenter standing in for the missing operations; feeding the cells energized molecules; and providing a controlled environment of salts, pH, metals, and ions; these procedures are governed by expression and sequencing techniques and perfect selection. Only with this in place is it observed that an emergent protein can tether or bind a molecule. A cell of hundreds of proteins has been profoundly assumed. If a primordial protein bound ATP, or some other valuable small molecule, so what? It could not reproduce a sequence; it could not act reflexively on its own sequence to control heredity; it could not protect itself; it could not energize itself. It would be destined to vanish, either by decaying or succumbing to various unhelpful linkages or bond-breaking molecules. Life needs more than one molecule. Salisbury’s probabilities now accumulate with force. If a pathway is coherent, not just any binding sequence will do. Particular reactions, in a particular sequence, must be localized and coordinated. Szostak’s experiment had not measured the probability of a particular sequence arising. He had measured the frequency, or probability, with which a particular function arises. The probabilities multiply if a life-necessary pathway, say nucleotide synthesis, requires several steps. If five enzyme functions were needed (ten are needed in modern adenine synthesis), [43] then the probability would be 1 in (10^11)^5, or 1 in 10^55. If all the operations needed for a small autonomous biology were ten functions—this is before evolution can even start to help—the probability is 1 in (10^11)^10, or 1 in 10^110. This is more than the number of seconds since the Big Bang, more protons than there are in the universe. In considering a similar figure derived in a different context, Tawfik concedes that if true, this would make “the emergence of sequences with function a highly improbable event, despite considerable redundancy (many sequences giving the same structure and function).” [44] In other words, these odds are impossible. And it gets worse. The figure of 1 in 10^11 is observed in studies of protein molecules that can cling to ATP.
But ATP production itself is not a spontaneous act; it is controlled by many enzymes, a notable one being phosphofructokinase. This enzyme has the ability to bind ATP and use it to catalyze a specific transfer reaction, while simultaneously being sensitive to environmental cues for its own regulation. Szostak procured molecules to bind ATP, but without coordinating that binding (and the energy it affords) to something useful. No ability for regulation—switching on or off at crucial times—was reported. It would be much harder, more rare and improbable, to get a molecule that could not only bind the biologically necessary ATP, but use it effectively or act to produce it. Even though Szostak’s molecules can bind ATP, they only do so unreliably. It is questionable whether prebiotic molecules like these could be physiologically relevant. After eight rounds of selection, evolved specimens were sampled and then cloned. Only 10 percent of identical sequences worked. The vast majority of clones could not, most likely because their conformations were too destabilized and unwieldy. They could not coil effectively. One sequence could not be relied upon to fold. The pattern improved but was nonetheless preserved, even to the experimental end. The most robust specimens were seen after eighteen rounds of evolution; even then, at their very best, 60 percent of the copies did not work. No living cell could tolerate this failure rate, especially early cells with only primitive repair and control systems; reactions would be gummed up and physiology crippled. Tawfik soberly recognizes the problem. The appearance of early protein families, he has remarked, is “something like close to a miracle.” [45]
**edited math formatting superscripts DATCG
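For readers who want to check the arithmetic in the passage quoted above, here is a minimal sketch in Python using only the figures quoted there: roughly 1 in 10^11 random sequences showing the weak ATP-binding function, compounded over 5 or 10 hypothetically required independent functions. The bit values are simply the same probabilities expressed on the log2 scale used elsewhere in this thread.

```python
from math import log10, log2

# Figures taken from the quoted passage: ~1 in 10^11 random sequences shows the
# (weak) ATP-binding function; a minimal pathway is taken to need 5 or 10 such
# independent functions.
p_per_function = 1e-11

for n_functions in (1, 5, 10):
    p_all = p_per_function ** n_functions
    print(f"{n_functions:>2} function(s): 1 in 10^{-log10(p_all):.0f} "
          f"(about {-log2(p_all):.0f} bits)")
#  1 function : 1 in 10^11  (~37 bits)
#  5 functions: 1 in 10^55  (~183 bits)
# 10 functions: 1 in 10^110 (~365 bits)
```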
Mung: "Perhaps then you have some thoughts about why people who are critical of your reasoning and definitions are not also critical of Hazen/Griffin/Carothers/Szostak." That's a good question, as usual. I am thinking of DNA_Jock, for example, who has passionately defended Szostak's ATP paper, if I remember well, and is at the same time a fierce opponent of functional information. Who knows what he thinks of Szostak's statement: "Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree." (emphasis mine) given his relentless opposition to any attempt to fix a degree for the observed function. Maybe the TSS fallacy never applies to Szostak, while it applies by default to me! :) gpuccio
gpuccio @ 344:
I think that the paper about functional complexity that you refer to is in good accord with my reasonings and definitions.
Perhaps then you have some thoughts about why people who are critical of your reasoning and definitions are not also critical of Hazen/Griffin/Carothers/Szostak. It seems to me to be a bit of an incongruity. :) Mung
Gpuccio, You might enjoy the following article... Dan S. Tawfik Group - The New View of Proteins by Tyler Hampton http://inference-review.com/article/the-new-view-of-proteins As part of its discussion on evolution, it reviews Szostak's experiment with ATP and weak binding - largely in agreement with you, it appears. And other lab experiments, mainly Tawfik's, on the edge of protein evolution and function(s). Many good nuggets, including a quick mention of IDRs (those lovely flexible creatures). But it starts with a review from Salisbury to John Maynard Smith and Ohno, up to current thinking by Dan Tawfik's lab. Very interesting review, history, and updates on current thinking, experiments, and the limits of macroevolution. DATCG
OLV: I don't keep any special track. I usually look for the papers on Pubmed, and download them, with some reference title that can help me retrieve them if necessary. gpuccio
gpuccio, I've noticed that you and some of your frequent commenters have posted links to many serious scientific papers. How do you all keep track of so many papers? How do you know if they have been posted before in the same OP or in another OP within this website? Is there an easy way to do that? Thanks. OLV
gpuccio, Thank you for clarifying that point. OLV
OLV: Mutation and variation can probably be used as synonyms. I prefer variation, usually, because it better conveys the idea that any genetic variation is included. Sometimes people tend to associate "mutation" with single nucleotide variations, and there are many people who revel in the idea that there are so many forms of variation, and in their mind that would be an element in favour of the neo-darwinist mechanism, some sort of "added creativity". But, of course, that is not true. In my OP about RV, I have considered the total number of different genetic states that can be reached, and that includes any form of genetic variation available. Once the genomic information changes, it is not really important how it changed, whether only one nucleotide was substituted or an indel changed the meaning of a whole sequence: probabilistically, we have reached a new state, in all cases. A new state is a new trait. It appears in one individual, and then the laws of population genetics will, in some way, govern its fate. gpuccio
Origenes at #354: Good work. Thank you for trying to simplify my arguments! :) To be precise, in the first summary it should be: "at least (about) 500 bits" because the 90th percentile of the jump, in my database, is 486 bits. In the second summary, I would add the word "naturally": "each of them naturally selectable." to avoid confusion. The most important things, usually, are very simple. :) gpuccio
Origenes (354): I like your idea. Thanks.
The Limits of Random Mutation
Perhaps gpuccio can explain again if there's a difference between RM and RV or if both terms can be used interchangeably? OLV
GPuccio, I have attempted to summarize two of your OP's. Perhaps the topics are too nuanced and too complex to compress them like I did, but my idea is to list your main arguments in an 'on the back of an envelope' style, ready to use in a discussion. Would you care to take a look to see if I have left essential things out?
- - -
The Limits of Random Mutation: The highest probabilistic resources are found in bacteria, due to the huge population size and high reproduction rate. These probabilistic resources, with a hugely optimistic estimate, are still under 140 bits. This means that any sequence with 160 bits of functional information is, by far, beyond any reasonable probability of being the result of RV in the system of all bacteria in 4 billion years of natural history, even with the most optimistic assumptions. 10% of all human proteins (about 2000) each have an information jump from pre-vertebrates to vertebrates of at least 500 bits. — source.
The Limits of Natural Selection: NS can sometimes tweak an already existing function, and many papers show just that. But there is absolutely no reason to believe that a “simple” variation can generate “new complex functional systems”. There is no example of that in any complex system. Can the change of a letter generate a new novel? In order for NS to work, new complex functions must be deconstructable into simple steps, each of them selectable. This is pure imagination, since it is not supported by any facts. The rugged landscape paper supports the idea that these simple steps do not exist. Complex functions are isolated in a huge search space. When the search space is really huge, the number of complex solutions is empirically irrelevant to the design inference. — source.
Origenes
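To make the "probabilistic resources in bits" idea in the first summary concrete, here is a minimal Python sketch with assumed, deliberately generous toy numbers. These are not gpuccio's exact inputs (his OP arrives at about 138.6 bits); the sketch only illustrates how such a figure is obtained, as log2 of the total number of states a system can try.

```python
from math import log2

# Minimal sketch with assumed, deliberately over-generous toy numbers
# (NOT gpuccio's exact figures). "Probabilistic resources" are counted as the
# total number of new genetic states the system can try, expressed in bits.

bacterial_cells      = 5e30   # assumed global bacterial population
generations_per_year = 3e4    # assumed reproduction rate (generous)
years                = 4e9    # rough age of life on Earth
new_states_per_rep   = 1      # assumed new genetic states tried per replication

total_attempts = bacterial_cells * generations_per_year * years * new_states_per_rep
print(f"~{log2(total_attempts):.1f} bits of probabilistic resources")
# log2(6e44) is about 149 bits even with these over-generous toy inputs: the same
# general neighbourhood as the figure in the summary above, and in any case far
# below the 500-bit threshold used for the design inference.
```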
gpuccio, That’s a valid, understandable explanation. Thanks. BTW, the section “c3) The laughs” in the OP seems very intelligently designed. It has clear meaning and purpose. Like the rest of the article, it is well written and displays a lot of complex functional specified information. Textbook material. OLV
OLV: I think Szostak is not a fool, and he is probably seriously interested in the issue of functional information. The 2001 paper is an example of some good experimental work smartly presented in a misleading context. There are at least two important "errors" in the conceptual frame of the paper: a) Presenting a definite example of engineering by Intelligent Selection as a possible model for NS. This is a very common trick, and it is "implemented" indirectly and smartly enough in the paper. b) Indirectly implying that the engineered protein and the original sequence in the random library are essentially the same thing, or comparable things, and that both are examples of "functional sequences", blatantly ignoring the added information deriving from the engineering process. This is particularly strange, from one who seems sincerely interested in functional information and its measure. However, I must say that later papers by Szostak are more explicit in identifying the object of his experiments as protein engineering. So, while the 2001 paper is smart and misleading, maybe the guy is up to something good after all. Most of the damage done by the 2001 paper is due to the silly and ignorant interpretations of its results by the darwinist crowd (including intelligent people like DNA_Jock, see the "laughs" section in the OP). But, of course, the misleading nature of the paper contributed to those misinterpretations. gpuccio
gpuccio, Any suggestion for explaining such a difference? Is it the topic? Is it the journal, Nature vs. PNAS? Is it the year, 2001 vs. 2007? Something else? Thanks.
Keefe AD, Szostak JW. Functional proteins from a random-sequence library. Nature. 2001 Apr 5; 410(6829): 715–718. doi: 10.1038/35070613. PMCID: PMC4476321. PMID: 11287961.
Hazen RM, Griffin PL, Carothers JM, Szostak JW. Functional information and the emergence of biocomplexity. Proc Natl Acad Sci U S A. 2007 May 15; 104(Suppl 1): 8574–8581. doi: 10.1073/pnas.0701744104. PMCID: PMC1876432. PMID: 17494745.
OLV
gpuccio, That’s an interesting observation indeed. Thanks. OLV
OLV: Thank you! 23 citations. Interesting. Just compare that to the 106 citations of the 2001 Szostak paper about the ATP binding protein, that I have criticized so many times here: Functional proteins from a random-sequence library https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4476321/ Two reflections: 1) The same person can do very good and very bad things. 2) Very bad things are much more popular than very good things. gpuccio
(347): Other related paper citations: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3565706/citedby/ OLV
gpuccio (344): Citations of the paper you referenced: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1876432/citedby/ OLV
You read too much into my mention of Paley... and you put words into his mouth - I don't read any of this in his book:
"The law, says Paley, prescribes how the agent must proceed. The law sets the boundaries — what is possible and what is not — for an intelligent designer."
Doesn't matter that much, but too bad we can't ask Paley if he agrees with me or with your statement. Would you say the Creator obeys some impersonal laws? Whose laws, Darwin's? And I'm the one " using the word “law” in a completely personal way"? Not those like you that make an idol out of "the laws of nature"? Nonlin.org
Nonlin: I am not Paley, nor am I simply repeating what he said.
Irrelevant. You have claimed that Paley was “right” about laws and supported your idiosyncratic idea that a designer is a “lawgiver.” As I have pointed out in #334 you are mistaken about that.
Nonlin: Paley makes a very pertinent comparison with human laws …
Sure, but Paley does not put the ‘tax law’ in the same category as the ‘laws of thermodynamics’, that’s your thing. Origenes
Mung: I think that the paper about functional complexity that you refer to is in good accord with my reasonings and definitions. I will read it again in detail, but for the moment I give here the link and the abstract: Functional information and the emergence of biocomplexity https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1876432/
ABSTRACT: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define “functional information,” I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA–GTP binding energy), I(Ex) = -log2[F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function >= Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of function.
Please, note the definition of functional information as: "the fraction of all possible configurations of the system that possess a degree of function >= Ex." which is identical to my definition, in particular my definition of functional information as the upper tail of the observed function, that was so much criticized by DNA_Jock. Also, note the reference to: "several distinct solutions with different maximum degrees of function" which, again, is perfectly in line with my reasonings in this OP. However, I will read the paper again with attention, and if you want we can discuss it here. gpuccio
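For concreteness, here is a minimal Python sketch of the Hazen et al. definition quoted above, I(Ex) = -log2[F(Ex)], applied to a toy configuration space that is assumed purely for illustration (a real biological search space cannot, of course, be enumerated this way).

```python
from math import log2

# Hazen et al.: I(Ex) = -log2( F(Ex) ), where F(Ex) is the fraction of all
# possible configurations whose degree of function is >= the threshold Ex.
# The toy configuration space below is assumed purely for illustration.

def functional_information(degrees_of_function, ex_threshold):
    """I(Ex) in bits for an exhaustively enumerated (toy) configuration space."""
    n_total = len(degrees_of_function)
    n_functional = sum(1 for e in degrees_of_function if e >= ex_threshold)
    if n_functional == 0:
        raise ValueError("No configuration reaches the threshold Ex.")
    return -log2(n_functional / n_total)

# Toy example: 1,000,000 configurations, 50 of which reach the chosen degree of function.
toy_degrees = [1.0] * 50 + [0.0] * (1_000_000 - 50)
print(f"{functional_information(toy_degrees, ex_threshold=0.5):.1f} bits")  # ~14.3 bits
```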
hi gpuccio, Have you ever done an OP on the Hazen/Griffin/Carothers/Szostak concept of functional information? Mung
Origenes@334 I am not Paley, nor am I simply repeating what he said. Paley makes a very pertinent comparison with human laws, and yes, the police enforce what the legislature rules, but in the end they're both part of the state. "The agent" has "the power". This also works in engineering - we design and prototype gizmos that behave according to our plan. The corporation is the powerful agent - there's no need to refer to each of its various employees. And if you happen to be a local power, you must first take into account the laws of the state (and maybe EU, NATO, etc.) over which you have no control. Just like in engineering, where our designs cannot bypass God's laws, which we don't know except from observations - again mirroring what happens in real life, where the written laws are mostly useless and in fact we have to observe how the administration chooses to apply these laws. Nonlin.org
To all: This seems very interesting, and very pertinent: Predictive hypotheses are ineffectual in resolving complex biochemical systems.
Abstract: Scientific hypotheses may either predict particular unknown facts or accommodate previously-known data. Although affirmed predictions are intuitively more rewarding than accommodations of established facts, opinions divide whether predictive hypotheses are also epistemically superior to accommodation hypotheses. This paper examines the contribution of predictive hypotheses to discoveries of several bio-molecular systems. Having all the necessary elements of the system known beforehand, an abstract predictive hypothesis of semiconservative mode of DNA replication was successfully affirmed. However, in defining the genetic code whose biochemical basis was unclear, hypotheses were only partially effective and supplementary experimentation was required for its conclusive definition. Markedly, hypotheses were entirely inept in predicting workings of complex systems that included unknown elements. Thus, hypotheses did not predict the existence and function of mRNA, the multiple unidentified components of the protein biosynthesis machinery, or the manifold unknown constituents of the ubiquitin-proteasome system of protein breakdown. Consequently, because of their inability to envision unknown entities, predictive hypotheses did not contribute to the elucidation of complex systems. As data-based accommodation theories remained the sole instrument to explain complex bio-molecular systems, the philosophical question of alleged advantage of predictive over accommodative hypotheses became inconsequential.
IOWs, if a system is really complex, we are usually not able to predict how it works: the best way remains to look at it, and to be guided by its complexity in our understanding. Does that say something about the TSS fallacy fallacy? And the real nature of science? I think it does. :) I specially like this: "Markedly, hypotheses were entirely inept in predicting workings of complex systems that included unknown elements." IOWs, new and original functional complexity! :) gpuccio
LocalMinimum: Thank you! :) gpuccio
DATCG at #326: Thank you for the kind words, and for mentioning in detail some of the basic ideas clearly expressed by Abel and Trevors. I find their basic concepts really helpful. For example, the concept of configurable switches, and the clear distinction between descriptive information and prescriptive information have helped me a lot, and I use those concepts very often in my discussions. You say: "We are Coded Beings, not crystals, not snowflakes." That sums it up nicely! :) gpuccio
Nonlin.org: Indeed, I don't want to persuade anyone. I just express my ideas. I believe that ID theory is true, and I try to explain why. Biological ID is about biological issues, so I am afraid that some specific biological understanding is required. And I don't want to discriminate against Paley: he is certainly a great guy! My point was only that his language and approach are those of a philosopher writing more than 200 years ago, and therefore they must be understood in that context. But I do believe that his metaphor about the watch remains a precious idea. gpuccio
Gpuccio@332 Ok, so now I understand your argument much better than before. We are on the same side, so I see no good "evolutionary" counterarguments. However, if I were neutral I would say that something seems to be missing and that your ideas are way too convoluted to be persuasive. And that is a big problem for a lot of the ID books out there. Are you publishing anywhere else? Writing a book? Because if you do, you need to do a much, much better job summarizing your argument and your defense against the Darwinist attacks. And you also need to write for the common person, not just for people that spent all their life in the biology lab. I hope this helps. Regarding your new comment @336, If you discriminate against Paley, why listen to Newton, Leibniz or Pythagoras for that matter? Newton has been overruled already in some areas. Not Paley (not yet)! Nonlin.org
Origenes: Frankly, I am not interested in an exegesis of Paley's writing. He had a great intuition, but he remains a philosopher of the eighteenth century, and his language is consistent with that. Our friend Nonlin.org seems happy to consider design as some form of law. After all, as you say, he is probably the only one who believes that way. I don't think he is damaging anyone by believing what he believes. And after all, he has faced a fair discussion here, and we must commend him for his honesty, if not for his clarity of thought! :) gpuccio
Nonlin.org: "Looks like this is the end of this road as we’ll not reach a common understanding. That’s OK. At least our positions are clear." That's fine with me! :) gpuccio
Nonlin: “Everyone” is misusing the word “law”. But W. Paley got the right idea in AD 1800 ...
Nonlin misunderstands what Paley is saying. He erroneously believes that Paley says that an agent is a "lawmaker."
Nonlin: “What you miss is that I create the laws of that gizmo. I am the lawmaker. My design are those laws, not some “configuration”.
But that is not at all what Paley is saying. Let's have a look:
Paley: And not less surprised to be informed, that the watch in his hand was nothing more than the result of the laws of metallic nature. It is a perversion of language to assign any law, as the efficient, operative cause of any thing. A law presupposes an agent; for it is only the mode, according to which an agent proceeds: it implies a power; for it is the order, according to which that power acts. Without this agent, without this power, which are both distinct from itself, the law does nothing; is nothing.
The law, says Paley, prescribes how the agent must proceed. The law sets the boundaries — what is possible and what is not — for an intelligent designer. The laws bring "order", like chess rules set boundaries for the chess player. Paley also claims that laws are inert on their own. Only when an agent wields his power do laws spring into action. But nowhere does Paley say that an agent makes laws or that "design is laws." That is all in Nonlin's imagination. Origenes
Gpuccio@331 Fair or unfair, a coin becomes part of the design when you (the intelligent agent) start using it (you probably disagree and that’s OK). You: “Here you are using the word “law” in a completely personal way, which does not correspond to what everyone means by a law of nature.” Yes! That’s the point and our standing disagreement! “Everyone” is misusing the word “law”. But W. Paley got the right idea in AD 1800: “It is a perversion of language to assign any law as the efficient, operative cause of anything. A law presupposes an agent; for it is only the mode according to which an agent proceeds; it implies a power; for it is the order, according to which that power acts. Without this agent, without this power, which are both distinct from itself, the law does nothing; is nothing.” I also say there’s no such thing as “universal natural laws”. And this is of course part of our disagreement. Looks like this is the end of this road as we’ll not reach a common understanding. That’s OK. At least our positions are clear. Nonlin.org
Nonlin.org at #329:
1.Isn’t your formula random? It could be –ln or –log10 or even straight Target space/Search space
-log2 is the formula which is used in information theory, including Shannon's. It is useful because it gives you the results in bits, the common unit for information. The target space/search space ratio is a probability. To transform it into a measure of information, you have to use that formula. So you have positive bits instead of negative exponents. It's only a question of mathematical usefulness. Randomness has nothing to do with it.
2.The stone example doesn’t work for many reasons. Can you select and go through a real world accessible biology example instead?
What about the alpha and beta chains of ATP synthase?
3.How do you know target space? What is a “good stone”?
Target space is simply the sum of all objects in a system that can implement the function as observed and defined. Measuring it is the most difficult part, and it is not always possible. For complex functions and big search spaces, the target space cannot be measured directly, because of the combinatorial barriers. But there are indirect methods to approximate it. In the case of functional proteins, I use, as explained many times, the indirect measure derived from sequence conservation through very long evolutionary times. Again, look at my arguments about the alpha and beta chains of ATP synthase. You can find a real evaluation of search space and target space in the case of the English language in this OP of mine: An attempt at computing dFSCI for English language https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/
4.How do you know “search space”? Is it “this area”? The whole continent? The world? The universe?
Not at all. The search space is the set of all possible objects that could reasonably be generated in the system, and it includes those which can implement the observed function (the target space). In the case of a functional protein, the best choice procedurally is to define the search space as the set of all possible AA sequences of that length: it is of course an approximation, but a very reasonable one.
5.For “complex” you say 30 bits and 500 bits somewhere else. But why? And both seem arbitrary. And what does that mean “30 bits”? Is it “complex if –log2(Target space/Search) > 2^30 (=1 billion)”?
Where have I said 30 bits? I don't understand. I always say that we must give a threshold which is appropriate for the system we are describing. The purpose of the threshold is to make the observed result really unlikely even after considering the probabilistic resources of the system. 500 bits is a good threshold in the general case, because it is big enough to make any result highly unlikely, even considering all the probabilistic resources of the known universe (Dembski's Universal Probability Bound). For biological objects on our planet, a lower threshold is more than enough. See my table at the beginning of my OP: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondescent.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ where I compute the probabilistic resources of the whole bacterial system on our planet as amounting (very, very generously) to 138.6 bits, at most. 30 bits is definitely too low to make a safe design inference for planetary scenarios. I do believe that 30-bit functions are probably designed in all cases, because the empirically observed threshold is probably somewhere around 4-5 AAs (about 20 bits; see the short numeric sketch after this comment). But again, I would not make a case for such simpler situations. We have, definitely, a lot of examples of hundreds and thousands of bits even if we only consider proteins.
6.What if “the function” can be accomplished without stones? What if it can be done with “bones” or “twigs” instead? What if “the function” can be broken into “simple functions”?
"What if" is not a good way of doing science. We must reason about observed facts, and specific systems. In biology, many functions are implemented by proteins, and only by proteins. Therefore, we observe functional proteins, and the protein search space. That's empirical science. Maybe we could build some ATP synthase using Lego bricks, who knows, but I would not spend my nights reasoning about those possibilities. The question: "What if “the function” can be broken into “simple functions”?" is, of course, more interesting. If that is true, we have a ladder of functions. To be useful in biology, it must be a ladder of naturally selectable functions. But, of course, that is not true. A complex function is complex because it is not the simple sum of simpler functions. Of course there can be modules in complex functions, but the idea is that a function is complex if it requires, in order to appear, more than 500 specific bits that did not exist before. It is not important whether, beyond those 500 specific, new, and original bits, it also uses old modules that already existed. So, a petrol car certainly uses wheels, like a cart, but the petrol engine was not present in the cart: it is a new, original function. gpuccio
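As promised above, here is the short numeric sketch behind the amino-acid-to-bits conversion mentioned in the answer to question 5. The simplifying assumption is that each conserved position is treated as fully specified (one residue out of 20), so each contributes log2(20), about 4.32 bits. This is only a back-of-the-envelope illustration; gpuccio's actual estimates rely on sequence conservation through very long evolutionary times, as noted in the answer to question 3.

```python
from math import log2

# Back-of-the-envelope sketch under the simplifying assumption that each
# conserved position is fully specified (one residue out of 20). Real
# estimates in this thread use conservation-based measures, not this crude count.

BITS_PER_AA = log2(20)   # ~4.32 bits per fully specified residue

def rough_functional_information(n_specified_positions: int) -> float:
    return n_specified_positions * BITS_PER_AA

print(f"{rough_functional_information(5):.1f} bits")    # ~21.6 bits: the 4-5 AA empirical range
print(f"{rough_functional_information(116):.1f} bits")  # ~501 bits: roughly the 500-bit threshold
```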
Nonlin.org at #328: 1) I don't want to discuss Paley's language, of course. It does not correspond to what we use today, even if the ideas remain the same. An unfair coin can be designed or not. It can be unfair by chance, a production defect, or because someone uses it to win (design). We cannot distinguish the two conditions, because an unfair coin is a rather simple object. Therefore, both design systems and non-design systems can produce it (of course, a coin in itself is more complex: but, given coins, it is not so difficult that some of them can be unfair). My point was not to infer design for the coin, but for a sequence of all heads. It is an ordered sequence, but if it was produced with an unfair coin, no specific design intervention was necessary to generate the order. The laws of nature can generate some order in many cases, but they can never generate a contingent configuration with high functional specificity. 2) Cell division is certainly not a law of nature. It is a complex process, made possible by an extremely highly complex configuration of the structures implied. You must be very careful not to make such huge category errors. 3) You say: "What you miss is that I create the laws of that gizmo. I am the lawmaker. My design are those laws, not some “configuration”. The configuration is just the way I get my gizmo to implement my laws. And if you change my laws, so be it, but you can only do that because you are a designer too. You’re a lawmaker too." No. Here you are using the word "law" in a completely personal way, which does not correspond to what everyone means by a law of nature. You can use it that way, if you like, but you cannot conflate the two meanings. If I design a machine, the specific contingent configuration of the machine can implement my desired function, and of course the machine works according to the laws of nature, and according to the configuration of the machine itself, which uses those natural laws to achieve a specific result. If you want to call the function, or the functional configuration, a law, you are only playing with words. Of course the designer establishes the form of the machine, and how it will work, but that is not a law at all. So, I say that a designer is a design-maker, not a law-maker. Your language is confusing. You go on using the word "law-maker" for "designer". Again, you are just playing with words. The point is, a designer works with special configurations that use the universal natural laws in a specific way to attain desired results. The point is in the configuration, both in the case of machines and in the case of art. The "regularities" you speak of for a tablet, or any other human artifact, are not regularities that could emerge by natural laws. Tablets do not emerge by natural laws, even if we do not consider their computing functions. Neither do spoons or forks. Of course a designer implements specific forms (configurations) from his personal conscious representations to objects. Some are simpler and more "regular", others are more complex and contingent, but in all cases a configuration that would never arise spontaneously by law is intentionally generated by the designer. Even the things that you call "regularities" in designed tools are simply "configurable switches", and not the order that derives from natural laws. I stick to digital information, rather than to analog cases, because it's much easier to compute the functional information, and because most biological scenarios are about digital information.
But the general concept is the same in all cases. gpuccio
...and of course 2^30 = 1 billion not 1 million. Nonlin.org
gpuccio@322 Here are a few more questions regarding your “complex functional information”: 1. Isn’t your formula random? It could be –ln or –log10 or even straight Target space/Search space 2. The stone example doesn’t work for many reasons. Can you select and go through a real world accessible biology example instead? 3. How do you know target space? What is a “good stone”? 4. How do you know “search space”? Is it “this area”? The whole continent? The world? The universe? 5. For “complex” you say 30 bits and 500 bits somewhere else. But why? And both seem arbitrary. And what does that mean “30 bits”? Is it “complex if –log2(Target space/Search) > 2^30 (=1 million)”? 6. What if “the function” can be accomplished without stones? What if it can be done with “bones” or “twigs” instead? What if “the function” can be broken into “simple functions”? 7. Any other questions that you had to answer? Nonlin.org
gpuccio@322 I have yet to formalize my ideas in a coherent essay - it is on my “to do” list and this discussion with you is helping a lot. Thanks! If you don’t mind, here are a few more clarifying questions/comments: You say: “we must be extremely careful that order is not simply the result of law (like in the case of an unfair coin which gives a series of heads). Function, instead, when implemented by a specific contingent configuration, has no such limitations.” I just got a hold of Paley’s book and what do you know, right on page 8 he cautions us against assuming this and that law as given: https://babel.hathitrust.org/cgi/pt?id=mdp.39015005472033;view=1up;seq=20;size=75 (and this is as far as I got). And is the unfair coin you mention not a perfect example of a design indistinguishable from law? Because someone created that unfair coin, right? And don’t they say your function (say cell division) is really just a law of nature? On “determinism” I just read the internet definition and search hits – not a big deal to me but definitely confusing. You say: “The design, again, is certainly based on understanding of laws, and operates using laws: the light turn on powered by solar energy because you arranged things for that to happen. It’s the configuration that counts, and the configuration is there because you designed it. Gizmos don’t go in orbit with solar cells and all the rest because some law makes that happen spontaneously. Moreover, I could reach your gizmo and change the design in it. I could arrange things so that the light goes on only when the moon is visible. And the gizmo would go on that other way, after my explicit design intervention.” What you miss is that I create the laws of that gizmo. I am the lawmaker. My design are those laws, not some “configuration”. The configuration is just the way I get my gizmo to implement my laws. And if you change my laws, so be it, but you can only do that because you are a designer too. You’re a lawmaker too. Yes, I am conflating design with laws because that’s what design is – lawmaking! And you can also see this in functionless art that nonetheless clearly shows me to be Rembrandt the lawmaker, not Leonardo the lawmaker. And if you are Paley in 1800 and find my Samsung tablet, you can clearly see it is designed without observing any function other than paperweight (because you don’t have electricity and know nothing about modern technology). This is especially relevant to biology where we still can’t identify many functions. By my method, he will know the tablet was designed even though by yours he won’t (“false negative”). And how do we search for extraterrestrial life? Function or no function, when we’ll see the object’s regularities (its laws) we’ll know it was designed. We’ll see how my arguments fare when I publish. I am not necessarily disputing the “ID theory” - just looking for something more convincing and simpler. Darwinistas invoking NS is simply retard as the whole idea of NS is bogus: http://nonlin.org/natural-selection/ Nonlin.org
DATCG:
Gpuccio, Your patience is admirable.
Seconded. LocalMinimum
Gpuccio, Your patience is admirable. Readers who happen by and see this post and others will get a very clear picture of what is wrong with Determinism and Laws as the only answer for life, as well as the lack of reasoning behind the assumption that unguided, blind events are the reason for life. Simply not true on multiple levels of Code. The problem is that too many equate Simplified Order with Function. But Function relies on Specified Organization and semiotic language - Code - involving arbitrary assignments of variables called by functional systems that interact and communicate with each other. You cannot have Error Correction based on Law alone. The Rules, instructions, and interpretations are not made by law alone. This is a semiotic system. Gpuccio, I'll go back to Three Subsets of Information. Hope that's OK to add here after so much work on your behalf. And for others, so readers can see another well written explanation of these concepts of Functional Sequence Complexity and Prescriptive Information - not merely law and Order sequencing, but Organization and Design - and can understand the differences and limitations of Random and Ordered Sequences to produce life... Three Subsets of sequence complexity and their relevance to biopolymeric Information https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/ David Abel and Jack Trevors 2005
In life-origin science, attention usually focuses on a theorized pre-RNA World [52-55]. RNA chemistry is extremely challenging in a prebiotic context. Ribonucleotides are difficult to activate (charge). And even oligoribonucleotides are extremely hard to form, especially without templating. The maximum length of such single strands in solution is usually only eight to ten monomers (mers). As a result, many investigators suspect that some chemical RNA analog must have existed [56,57]. For our purposes here of discussing linear sequence complexity, let us assume adequate availability of all four ribonucleotides in a pre-RNA prebiotic molecular evolutionary environment. Any one of the four ribonucleotides could be polymerized next in solution onto a forming single-stranded polyribonucleotide. Let us also ignore in our model for the moment that the maximum achievable length of aqueous polyribonucleotides seems to be no more than eight to ten monomers (mers). Physicochemical dynamics do not determine the particular sequencing of these single-stranded, untemplated polymers of RNA. The selection of the initial "sense" sequence is largely free of natural law influences and constraints. Sequencing is dynamically inert[58]. Even when activated analogs of ribonucleotide monomers are used in eutectic ice, incorporation of both purine and pyrimidine bases proceed at comparable rates and yields [59]. Monnard's paper provides additional evidence that the sequencing of untemplated single-stranded RNA polymerization in solution is dynamically inert – that the sequencing is not determined or ordered by physicochemical forces. Sequencing would be statistically unweighted given a highly theoretical "soup" environment characterized by 1) equal availability of all four bases, and 2) the absence of complementary base-pairing and templating (e.g., adsorption onto montmorillonite). Initial sequencing of single-stranded RNA-like analogs is crucial to most life-origin models. Particular sequencing leads not only to a theorized self- or mutually-replicative primary structure, but to catalytic capability of that same or very closely-related sequence. One of the biggest problems for the pre-RNA World model is finding sequences that can simultaneously self-replicate and catalyze needed metabolic functions. For even the simplest protometabolic function to arise, large numbers of such self-replicative and metabolically contributive oligoribonucleotides would have to arise at the same place at the same time. Little empirical evidence exists to contradict the contention that untemplated sequencing is dynamically inert (physically arbitrary). We are accustomed to thinking in terms of base-pairing complementarity determining sequencing. It is only in researching the pre-RNA world that the problem of single-stranded metabolically functional sequencing of ribonucleotides (or their analogs) becomes acute. And of course highly-ordered templated sequencing of RNA strands on natural surfaces such as clay offers no explanation for biofunctional sequencing. The question is never answered, "From what source did the template derive its functional information?" In fact, no empirical evidence has been presented of a naturally occurring inorganic template that contains anything more than combinatorial uncertainty. No bridge has been established between combinatorial uncertainty and utility of any kind. It is difficult to polymerize even activated ribonucleotides without templating. 
Eight to ten mers is still the maximum oligoribonucleotide length achievable in solution. When we appeal to templating as a means of determining sequencing, such as adsorption onto montmorillonite, physicochemical determinism yields highly ordered sequencing (e.g., polyadenines)[60]. Such highly-ordered, low-uncertainty sequences retain almost no prescriptive information. Empirical and rational evidence is lacking of physics or chemistry determining semantic/semiotic/biomessenger functional sequencing. Increased frequencies of certain ribonucleotides, CG for example, are seen in post-textual reference sequences. This is like citing an increased frequency of "qu" in post-textual English language. The only reason "q" and "u" have a higher frequency of association in English is because of arbitrarily chosen rules, not laws, of the English language. Apart from linguistic rules, all twenty-six English letters are equally available for selection at any sequential decision node. But we are attempting to model a purely pre-textual, combinatorial, chemical-dynamic theoretical primordial soup. No evidence exists that such a soup ever existed. But assuming that all four ribonucleotides might have been equally available in such a soup, no such "qu" type rule-based linkages would have occurred chemically between ribonucleotides. They are freely resortable apart from templating and complementary binding. Weighted means of each base polymerization would not have deviated far from p = 0.25. When we introduce ribonucleotide availability realities into our soup model, we would not expect hardly any cytosine to be incorporated into the early genetic code. Cytosine is extremely difficult even for highly skilled chemists to generate [61,62]. If an extreme paucity of cytosine existed in a primordial environment, uncertainty would have been greatly reduced. Heavily weighted means of relative occurrence of the other three bases would have existed. The potential for recordation of prescriptive information would have been reduced by the resulting low uncertainty of base "selection." All aspects of life manifest extraordinarily high quantities of prescriptive information. Any self-ordering (law-like behavior) or weighted-mean tendencies (reduced availability of certain bases) would have limited information retention. If non-templated dynamic chemistry predisposes higher frequencies of certain bases, how did so many highly-informational genes get coded? Any programming effort would have had to fight against a highly prejudicial self-ordering dynamic redundancy. There would have been little or no uncertainty (bits) at each locus. Information potential would have been severely constrained. Genetic sequence complexity is unique in nature "Complexity," even "sequence complexity," is an inadequate term to describe the phenomenon of genetic "recipe." Innumerable phenomena in nature are self-ordered or complex without being instructive (e.g., crystals, complex lipids, certain polysaccharides). Other complex structures are the product of digital recipe (e.g., antibodies, signal recognition particles, transport proteins, hormones). Recipe specifies algorithmic function. Recipes are like programming instructions. They are strings of prescribed decision-node configurable switch-settings. If executed properly, they become like bug-free computer programs running in quality operating systems on fully operational hardware. The cell appears to be making its own choices. 
Ultimately, everything the cell does is programmed by its hardware, operating system, and software. Its responses to environmental stimuli seem free. But they are merely pre-programmed degrees of operational freedom.
I hope readers get a glimpse of truth from the preceding document parts shared on why Random and/or Ordered Sequences alone cannot account for life. It takes a Code. Error-Correction cannot operate blindly, without Prescribed Knowledge. During replication we see...
a) Proofreading: what to Monitor, Identify, and locate
b) Edit, correct, replace damaged information
c) Mismatch repair: corrects base mispairings - identify, cut, replace, and actually seal the gap back up at the cut/replacement area
d) if not repairable, apoptosis - programmed cell death
After replication, DNA can still be damaged. In this case Enzymes come to the rescue: Direct Reversal - reverse a reaction error back to the original base. Base Repair - again, identify, cut, remove the damaged base, replace it with the correct base, and seal up the gap. Ha! Amazing. Error Correction = Design Functionality based upon prescribed information to verify and replace with correct replacement information. This lets us know the system is a communicative networked system of coded branches, loops, gotos, If-Thens, and sub-routine calls based upon decision-making, processing nodes. Not done by determinant laws. In that case, you would need a specific Law for every type of different damage, signal, interaction, processing and... geesh, the Laws would crush each other. Or one Law would be undone, while the other functioned. They would cancel each other out. It would be total chaos. Information requires a rules-based language interface, translation and syntax structure. This is what we see. It is why they name it, call it, decipher it - The Code of Life. Encoded, multiple Codes and layers on top of one another. An epigenetic Code of Regulation and Functional systems monitor, identify, organize, direct, edit, splice, sense, recognize, send, interpret, respond, locate, correct, or heal damage. If it were merely law-like determinism, you would have Laws for each Code, each action, each reaction. Innumerable laws. Laws are general-purpose qualifiers and do not create intimate, interactive control systems. This is a category mistake of pressing Order into Organization and Function. Order does not create algorithmic, programmatic functionality. It merely attains simplified, repeated sequences, like water crystals - snowflakes. The fact that a code exists - indeed multiple Codes (like the Ubiquitin Code - see Gpuccio's other great post) - shoots down law as an explanation of life and functionally organized sequence space. For an excellent read and OP by Gpuccio on the Ubiquitin System, see here: The Ubiquitin System: Functional Complexity and Semiosis joined together. In the OP, Gpuccio posts many scientific papers showing how the Ubiquitin Code Tags, Reads, Writes, Erases and does accurate Post-Translational Modifications. None of this by blind, random processing, or by law only, but by Code. Even Darwinists and evolutionists all recognize we have multiple Codes. If it were all deterministic, law-like order, it would shock the world of scientists working on Code who daily decipher codes in the Genome and Epigenome. We are Coded Beings, not crystals, not snowflakes. The information stored in our cells must be compressed, decompressed, transcribed, translated, proofread, error-checked, error-corrected, modified-if-tagged, and finally codified and docked with other proteins in a complex working and organizing system of multiple functions by the tens of thousands in Eukaryotes. Each constantly monitored for damage, communicated for repair or tagged for death and recycling, trillions of times a day.
It is unlike any other code we know in the world, and programmers only wish they could reproduce its efficiency. Just ask Bill Gates and others. We are life with a free will to consider big questions about Life, the Universe and Everything ;-) https://www.youtube.com/watch?v=aboZctrHfK8 Now, about that Ubiquitin Code, as posted by Gpuccio, here is another paper he posted (Upright BiPed enjoyed this). ...the lingua franca of cellular communication. The E2 ubiquitin-conjugating protein Ube2V2. "...a Rosetta Stone Bridging Redox and Ubiquitin Codes, Coordinating DNA Damage Responses" So not only is Damaged DNA recognized, but the response system is tightly controlled, highly organized and coordinated by a complete system regulator that tags proteins with specific markups. :) wow... amazing, Functional Sequence Complexity, and it's happening at nano levels in billions to trillions of cells daily at incredible speeds. But I repeat myself. To know the answers, you have to ask the right questions. Otherwise, the assumptions turn up "Junk." Scientists are not turning up Laws in this decoding work they do on a daily basis, although some laws are discovered and exist. The bulk of their work, however, is deciphering Code in our DNA, regulatory Code, tagging code, sugar codes and more Codes. From evolutionists and Code Biology - Barbieri and others - I blockquoted some more information on the many Codes... Coding Rules... they are Arbitrary - not dictated by Laws and...
What is essential in all codes is that the coding rules, although completely compatible with the laws of physics and chemistry, are not dictated by these laws.
and...
The key point is that there is no deterministic link between codons and amino acids since it has been shown that any codon can be associated with any amino acid (Schimmel 1987; Schimmel et al. 1993).
Like this OP, Gpuccio does excellent work in responding to people's questions on the Ubiquitin Code. It's a great read alone as an OP, but the comments expose and unveil exquisite details of how "...functional complexity and semiosis" are "joined together" in the Ubiquitin System. DATCG
GPuccio @324 Scientism is well described by Rosenberg in 'The Atheist's Guide to Reality', Ch. 2. Note the implicit philosophical determinism.
THE NATURE OF REALITY: THE PHYSICAL FACTS FIX ALL THE FACTS IF WE’RE GOING TO BE SCIENTISTIC, THEN WE HAVE to attain our view of reality from what physics tells us about it. Actually, we’ll have to do more than that: we’ll have to embrace physics as the whole truth about reality. Why buy the picture of reality that physics paints? Well, it’s simple, really. We trust science as the only way to acquire knowledge. That is why we are so confident about atheism. The basis of our confidence, ironically, is the fallibility of scientists as continually demonstrated by other scientists. In science, nothing is taken for granted. Every significant new claim, and a lot of insignificant ones, are sooner or later checked and almost never completely replicated. More often, they are corrected, refined, and improved on—assuming the claims aren’t refuted altogether. Because of this error-reducing process, the further back you go from the research frontier, the more the claims have been refined, reformulated, tested, and grounded. Grounded where? In physics. Everything in the universe is made up of the stuff that physics tells us fills up space, including the spaces that we fill up. And physics can tell us how everything in the universe works, in principle and in practice, better than anything else. Physics catalogs all the basic kinds of things that there are and all the things that can happen to them. The basic things everything is made up of are fermions and bosons. That’s it. ... There is no third kind of subatomic particle. And everything is made up of these two kinds of things. Roughly speaking, fermions are what matter is composed of, while bosons are what fields of force are made of. Fermions and bosons. All the processes in the universe, from atomic to bodily to mental, are purely physical processes involving fermions and bosons interacting with one another. Eventually, science will have to show the details of how the basic physical processes bring about us, our brain, and our behavior. But the broad outlines of how they do so are already well understood.
Origenes
Origenes: I agree with you. But scientism is not science. Indeed, it is an anti-scientific philosophy. gpuccio
GPuccio: … I think that almost all the events that science studies at the macroscopic level, those that are well described by classical physics, are deterministic …
It seems to me that scientism — science as the only begetter of truth — assumes that all events are deterministic. Full-fledged determinism — the idea of a causally closed physical world — is presupposed by scientism/naturalism. As I have argued many times on these pages, philosophical determinism is incompatible with rationality. In short, if all our thoughts and actions are determined by entities beyond our control, then we are not rational.
GPuccio: The interventions of consciousness on matter are a possible, interesting exception. If, as I (and many others) believe, the interface between consciousness and matter is at the quantum level, that would allow the action of consciousness to modify matter without apparently interfering with gross determinism. That would also explain how design takes place.
At the quantum level there is no causal closure, so this is where the spiritual — intelligent design — can “break in.” Origenes
Nonlin.org at #317:
Wow! How can I argue with you when you’re burying me under so many big words?
Maybe it's you who inspire me to write so much! :) Look, I really like your creativity of thought and you are a very honest discussant. And I agree with many things that you say, but I also strongly disagree with others. So, it's not that I like to contradict you, but when your creative thoughts begin to deny the essence of ID theory, which I deeply believe to be true, I feel that I have to provide my counter-arguments. In the end, I am happy that you keep your ideas, and I will keep mine. So, just to clarify what could still be not completely clear:

1. Of course. But one thing is the definition of design, another thing is the inference of design from an observed object. I use consciousness to define design. And I infer design from objective properties of the observed object. Again, you seem to conflate two different concepts.

2. I say that both order and function can be valid specifications. However, in the case of order we must be extremely careful that order is not simply the result of law (like in the case of an unfair coin which gives a series of heads). Function, instead, when implemented by a specific contingent configuration, has no such limitations. Moreover, in biology it's definitely function that we use to infer design, and not order. In the case of the watch, as explained, order of the parts and the function of measuring time can both be used to infer design, but the inference based on function is much stronger, and it implies, as necessary, the "order" of the parts.

3. No. I use determinism correctly. You use the word to mean "a worldview where only determinism exists". Again, you conflate a concept (determinism), which can be applied to specific contexts, with a philosophy (a merely deterministic view of reality), which is another thing altogether.

4a. No, your object is designed and it's the design in it that continues to operate. The design, again, is certainly based on understanding of laws, and operates using laws: the light turns on, powered by solar energy, because you arranged things for that to happen. It's the configuration that counts, and the configuration is there because you designed it. Gizmos don't go in orbit with solar cells and all the rest because some law makes that happen spontaneously. Moreover, I could reach your gizmo and change the design in it. I could arrange things so that the light goes on only when the moon is visible. And the gizmo would go on that other way, after my explicit design intervention. A watch goes on measuring time after it has been designed, without any other conscious intervention (at least as long as it has the energy to do that). Again, you are conflating design with law. But you are wrong.

b. There is nothing poorly defined: I have given the full definitions many times, for example at #167, #199, #200. Functional information: the number of bits necessary to implement a function: -log2 of the target space / search space ratio. Complex: if it is more than an appropriate threshold: in the general case, 500 bits.

c. For any function that can be implemented by an object we can measure functional information. If that measure is more than 500 bits, for any defined function, we can infer design. We are not trying to divine the intentions of the designer. We reason on what we observe. If a complex function is there, it is designed: maybe that function was the real purpose of the designer, or it is only part of some other purpose and function. It is not important: the function is there, and it has been designed, if it is complex enough.

d.
What is it that you don't understand? If, say, 200 AAs must be exactly what they are, because otherwise the function is lost, then you have more than 800 bits of functional information (about 4.3 bits x 200 positions).

e. If the best function that I can imagine and define for the computer is being used as a paperweight, then I will not infer design for it, because that function is simple. It will be a false negative, like many others.

f. As explained by Origenes, it seems that only you are making that specific objection. The best that darwinists do is to invoke NS, which is a very indirect process with a necessity component in it, but certainly not a law. And, of course, NS cannot do what they think it does. Do you think that darwinists are more disturbed by your arguments than by ID's arguments (including mine)? Maybe, but I would not bet on it.

So, in the end, you can remain of your opinion. No problem. But your views are not compatible with ID theory, not as you express them. As for me, I stick with ID theory. Your views are certainly interesting, but in many respects simply wrong. gpuccio
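As a quick illustration of the arithmetic in points b and d above, here is a minimal Python sketch (an illustration only, not gpuccio's code): it applies the -log2(target space / search space) definition to a stretch of 200 fully constrained amino-acid positions and compares the result to the 500-bit threshold.

```python
import math

AA_ALPHABET_SIZE = 20  # the twenty standard amino acids
BITS_PER_FULLY_CONSTRAINED_POSITION = math.log2(AA_ALPHABET_SIZE)  # ~4.32 bits

def functional_information(target_space, search_space):
    """Functional information in bits: -log2 of the target space / search space ratio."""
    return -math.log2(target_space / search_space)

# Point d: 200 positions that must each be exactly one specific residue.
# Target space = 1 acceptable sequence; search space = 20**200 possible sequences.
fi_bits = functional_information(1, AA_ALPHABET_SIZE ** 200)

print(round(BITS_PER_FULLY_CONSTRAINED_POSITION, 2))  # ~4.32
print(round(fi_bits, 1))                              # ~864.4 bits, i.e. 200 x ~4.32
print(fi_bits > 500)                                  # True: above the 500-bit threshold
```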
I need to add @ 316 that the procedural geometry algorithms would be primitive and primitive strip building and welding and such. You could of course make procedural generation that only produced convex polytopes; but then you'd be implying that every polymer or mass of tissue that could be encoded by DNA was selectively positive. LocalMinimum
Nonlin @
But let me guess: you still don’t get any buy-in from the Darwinistas :) They still say “no, what looks like a function to you is just a law of nature”, right?
Uh, no. At this forum we have seen a lot of crazy and confused arguments from "Darwinistas", but never this one — probably because biological functions do not resemble laws of nature at all. Origenes
NonLin: as was discussed repeatedly above and over the years, it is a fairly common challenge to have to identify something as designed without direct access to the designing agent. This is routinely done by applying a type of inductive reasoning often seen in the sciences, inference to the best empirically based explanation. Here, by establishing reliable signs of design: when such are observed, we are warranted to inductively infer design as cause. In this case, various forms of functionally specific, complex organisation and associated information are such signs, backed by a trillion-member observation base and the associated blind search challenge in configuration spaces. Kindly see the onward thread here: https://uncommondescent.com/intelligent-design/what-is-design-and-why-is-it-relevant/ To overturn such inference, one would need an observed counter-example of FSCO/I beyond relevant thresholds observed to originate by blind chance and/or mechanical necessity. On the trillion-member observation base, that has not been done. All of this accords with Newton's vera causa principle that explanations should be based on causes seen to be adequate to cause the effects. Yes, actually observed. The so-called methodological naturalism principle unjustifiably sets this aside and ends up begging the question. KF PS: The common objection that cell-based life reproduces does not apply to the root of the tree of life: the origin of the von Neumann kinematic self-replicator, which uses coded information, is antecedent to reproduction and is itself a case of FSCO/I. PPS: As a concrete example, notice how functional text is based on particular components arranged in a specific, meaningful order. Likewise, how parts are arranged in any number of systems, including biological as well as technological ones. Disordering that arrangement beyond a narrow tolerance often disrupts function. This is the island-of-function phenomenon. Such is anything but meaningless. kairosfocus
1. Your definition of design might be simple, but we also need to identify design when we cannot observe the agent and his/her “conscious representation”.
We do it all of the time. Did you have a point?
“Complex functional information” – three words poorly defined and combined to mean something to you, but nothing to me.
They are only "poorly defined" to the willfully ignorant. ET
gpuccio@307 Wow! How can I argue with you when you’re burying me under so many big words? :) Let me try to answer just a few of your points:

1. Your definition of design might be simple, but we also need to identify design when we cannot observe the agent and his/her “conscious representation”.

2. Sorry, I did not read Paley’s original argument so can’t comment directly on yours versus his. This is just a summary: “Paley tells of how if one were to find a watch in nature, one would infer a designer because of the structure, order, purpose, and design found in the watch.” I say “structure (=order) is enough” while you seem to say “purpose”.

3. Determinism has a certain definition everyone knows. Maybe you should use a different word if you mean something else.

4. And the main disagreement is… your claim: “Design is absolutely different, and distinguishable, from law.”

4 a. You say: “laws operate without any conscious intervention, as far as we can observe”. What if I design and send into orbit a gizmo with a light that turns On whenever the sun is in sight (powered by solar energy)? Can you see this is a law that operates without any conscious intervention 100 and 1000 years from now?

b. “Complex functional information” – three words poorly defined and combined to mean something to you, but nothing to me. How can I reply? And “harnessing of specific contingent configurations” doesn’t help.

c. You: “We don’t need the agent to assess function. ATP synthase can build ATP from a proton gradient in the cell”. Yes, but that seems a mechanism, not the function. In my example above, how do I know when to turn on the light? By detecting the sun rays via some mechanism. But the function of the gizmo is likely different and only the designer knows it. And what about my older example of a nonfunctional sculpture of a watch? That’s just esthetic and certainly cannot measure time, but it’s still designed.

d. I don’t understand what you mean: “how many specific bits (in terms of necessary AA positions) are needed for ATP synthase to work as it works?”

e. You: “Instead, the complexity linked to the computer function (a function that our object can certainly implement) is very high”. But say you discover this computer circa 1800, so you know nothing about computers. How do you do your analysis? At that time the computer looks like a paperweight at best.

f. You: “If we can show even one single explicitly defined function that the object can implement as defined, and which is complex (see next point for that), we can safely infer that the object is designed.” Perhaps. But let me guess: you still don’t get any buy-in from the Darwinistas :) They still say “no, what looks like a function to you is just a law of nature”, right?

Ok. Looks like you have your method and I have mine, which is much simpler... and simplicity matters, as the “selfish gene” and “natural selection” soundbites show... and I account for designed art while you don’t... and you might believe the laws of nature are never changing under any circumstances, but who the heck am I to tell God: “don’t walk on water because of gravity”? Nonlin.org
gpuccio @ 311: Thank you. We could extend the illustration by having selectable functionality be analogized by closed volumes/unions of convex polytopes (which could also be selectable by an artist). In this case, more complex configurations could be stored in more ways in the geometry buffer, i.e. the more there is to draw, the more ways there are to draw it (in order if nothing else)... however, each additional vertex/draw-order index can be configured to produce far more degenerate geometries (inconsistent winding orders, open shapes/shapes with unenclosed volume). Thus, the ratio of configurations that produce clean, properly closed volumes to those that produce half-invisible junk is well below unity for each additional component, and thus the relative growth of configuration space/shrinking of functionality as terms are added. We could also knock this back a level of emergence, changing the domain/codomain/mapping function from the geometry data (physical config) / rendered volume (function) / shaders (physical law w/r to biological ops) to procedural geometry generation parameters (DNA) / geometry data (physical config) / procedural geometry generation algorithm (chemistry/physics w/r to emergence of DNA-encoded processes), and see the same, i.e. that the number of ways to encode a structure may grow, but the functional/non-functional ratio being below unity results in shrinking targets. I expect it's pretty easy to see this shrinkage to be transitive given mapping by both of these functions or their properly ordered composite as a relation. Thus it's also true, and amplified, when mapping DNA directly to biological function. LocalMinimum
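If a toy calculation helps, here is a short Python sketch of the point being made (an illustration only, with made-up per-component numbers): even when the number of ways to encode a structure grows with each added component, a per-component "clean" ratio below unity makes the functional fraction of the whole configuration space shrink geometrically.

```python
# Toy model of shrinking targets: each added component multiplies the number
# of possible configurations by WAYS_PER_COMPONENT, but only CLEAN_FRACTION
# of those additions keep the assembled geometry properly closed/functional.
# Both values are hypothetical, chosen only to show the trend.
WAYS_PER_COMPONENT = 10
CLEAN_FRACTION = 0.3

total = 1.0
functional = 1.0
for components in range(1, 21):
    total *= WAYS_PER_COMPONENT
    functional *= WAYS_PER_COMPONENT * CLEAN_FRACTION
    if components % 5 == 0:
        print(components, f"{functional:.3g}", f"{functional / total:.3g}")
# The functional count still grows, but its share of the space falls as 0.3**n.
```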
ET, the search challenge delivers as close a disproof as an empirically based, inductive case gets. Searching 1 in 10^60 or worse of a config space (on generous terms) and hoping to find not one but a large number of deeply isolated needles is not going to work. In short, he demands that we infer a statistical miracle in the teeth of the same general sort of statistical challenge that grounds the statistical form of the second law of thermodynamics. KF kairosfocus
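For readers who want the arithmetic behind that figure, here is a back-of-envelope Python sketch (the 500-bit space is the threshold used in this thread; the number of trials is a deliberately generous assumption of my own):

```python
config_space = 2 ** 500        # ~3.27e150 configurations for a 500-bit specification
generous_trials = 10 ** 90     # assumed: a very generous allowance of blind trials

fraction_searched = generous_trials / config_space
print(f"{config_space:.2e}")         # ~3.27e+150
print(f"{fraction_searched:.2e}")    # ~3.06e-61, i.e. roughly 1 in 10^60 of the space
```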
Rumrat is over on TSZ not only equivocating but asking us to prove a negative- we need to prove that evolution cannot produce 500 bits of CSI. It isn't about evolution- see comment 308, follow and read the essay linked to. And evos are saying that evolution by means of natural selection and drift (blind and mindless processes) produced the diversity of life. That means the onus is on them to demonstrate such a thing. However they are too pathetic to understand that. ET
KF: Thank you. Very good work! :) gpuccio
I decided to headline the just above on defining design: https://uncommondescent.com/intelligent-design/what-is-design-and-why-is-it-relevant/ KF kairosfocus
LocalMinimum: Good thoughts. In the end, the concept of contingent configurations linked to the implementation of a function is simple enough. Contingent configurations are those configurations that are possible according to operating laws. Choosing a specific contingent configuration that can implement a desired function is an act of design. If we can only observe the object, and not the design process, only the functional complexity, IOWs the utter improbability of the observed functional configuration, can allow a design inference. Simple contingent configurations can implement simple functions. But only highly specific contingent configurations can implement complex functions. Highly specific contingent and functional configurations are always designed. There is no counter-example in the whole known universe. gpuccio
H'mm, it seems the definition of design is up again as an issue. The simplest summary I can give is: intelligently directed configuration, or if someone does not get the force of "directed," we may amplify slightly: intelligently, intentionally directed configuration. This phenomenon is a commonplace, including the case of comments or utterances by objectors; that is, the attempted denial or dismissal instantly manifests the phenomenon. Going further, we cannot properly restrict the set of possible intelligences to ourselves or our planet or even the observed cosmos, starting with the common factor in these cases: evident or even manifest contingency of being. Bring to bear that a necessary-being world-root is required to answer why a contingent world is given - since circular cause and a world from utter non-being (which hath not causal power) are both credibly absurd - and we would be well advised to ponder the possibility of an intelligent, intentional, designing necessary-being world-root, given the fine-tuning issue. The many observable and empirically well-founded signs of design manifest in the world of life (starting with alphanumeric complex coded messages in D/RNA and in associated execution machinery in the cell), joined to the fine tuning of a cosmos that supports such C-chemistry, aqueous-medium, cell-based life, suggest a unity of purpose in the evident design of cosmos and biological life. Taken together, these considerations ground a scientific project and movement that investigates, evaluates and publishes findings regarding such signs of design. Blend in the issues of design detection and unravelling in cryptography, patterns of design in computing, strategic analysis, forensics and TRIZ, the theory of inventive problem solving (thus also of technological evolution), and we have a wide-ranging zone of relevance. KF kairosfocus
Eh, let's make a graphics engine, and have procedural geometry generation that takes parameters and produces geometry data (vertices, textures, etc) and sticks it in buffers to be fed to shader programs running in the GPU. Now, the parameters we feed to the procedural geometry generation algorithms would be our DNA; the algorithms themselves would be the DNA translation/structure emergence process; the geometry/texture data would be the physical configuration of the biological system; the shader programs would be physical laws as they relate to biology; and the geometry displayed would be the functionality of the biological system. What is generally being spoken of about information requirements is which of the procedural geometry generation algorithms/DNA can give rise to certain rendered geometries/biological functions (right?). UA's argument about atom configuration/physical structure conflates the geometry data/physical configuration with the procedural algorithms/DNA. Thinking about it, though, this is a pretty common error. Both can be approached as information, so confusion is readily available. LocalMinimum
Weird, the link in comment 306 didn't work. Intelligent Design is NOT Anti-Evolution There, much better ET
Nonlin.org: c) I am not restricting anything. My point is that in a scientific approach, we apply the methodology that is appropriate for what we are studying, according to the facts we know. My statement was: "That said, I think that almost all the events that science studies at the macroscopic level, those that are well described by classical physics, are deterministic, in the sense that they can be very well described in terms of necessity (the laws of classical physics), or in other cases in terms of non-intrinsic probability (that kind of probabilistic description that is only a way to describe deterministic systems with many independent and not known variables, like in the case of dice or of polygenic traits)." IOWs, we use classical physics and classical probability for these scenarios, because we know very well that they explain these scenarios satisfactorily. Quantum mechanics was developed to explain facts that could not be explained by classical physics. It explains those facts very well, and it must be used to explain those facts, not those that are well explained by classical physics. Dark matter and dark energy are examples of facts that are not well explained by what we know. Therefore, it is absolutely correct to look for other explanations. But there is no reason to look for new explanations for the trajectory of a macroscopic object like a die, when subject to known forces of classical mechanics. We already know how to describe that scenario satisfactorily.

d) I am confident that we will be able to study quantum effects wherever they are relevant, including biological scenarios. Quantum mechanics is another form of regularity, even if different from the regularities of classical physics.

e) I agree that micro and macro are not completely separate, and indeed the separation between classical scenarios and quantum scenarios is still not really understood. Certainly, it is not only a question of big and small. But big and small do count. It is a fact that most macro-events can be well described by classical physics, and subatomic events require quantum theory as a default. So, whenever we are studying scenarios for which we know well what to use, we can confidently use what works. I have never said that the "scientific method should be restricted to classical physics". You are really misunderstanding what I think. Both classical physics and quantum mechanics are very good applications of the scientific method, and both work perfectly if applied in the right contexts. Both are theories about mathematical regularities that can explain facts. They are, of course, different theories.

f) You say: "Your definition for design is cumbersome, full of unclear concepts and untestable – how can you measure meaning and intentionality?" I have never tried to measure meaning and intentionality. I just recognize that they exist, that they are observable subjective experiences. We know that we experience and use the personal intuition of meaning when we design something, and we also experience and use the personal experience of desire and intention. What I measure are the results of the conscious process of experiencing meaning and purpose when they originate a design process: complex functional information, an objective property of objects that empirically is known to derive only from a conscious and intentional intelligent design. My definition of design is clear, and in no way cumbersome or untestable.
You can find it in my first OP here: Defining Design https://uncommondescent.com/intelligent-design/defining-design/ Design is any process where a conscious agent outputs his conscious representation to some material object. It's very simple and clear. If the form in the object derives, directly or indirectly, from subjective representations that existed before the design process takes place, that is design. Nothing else is design. Design can be simple or complex. When it is complex, it generates a specific property in the object, what we call complex functional information. As only a design process can generate that property, as far as we can observe in the whole universe, we can use that property to infer a design origin for an object, when we don't know its origin directly. This is not cumbersome at all. It's essentially Paley's argument, in a more detailed and quantitative form.

You ask: "How is design related to determinism?" It is not related to determinism, if not for the fact that the subjective representations precede and are in a sense a cause of the final configuration. But it is not really a classical deterministic relationship. Of course, the intelligent agent who designs uses his understanding of meaning, as said, to find how to implement the functions he conceives in his conscious experiences (his desire and purpose). Understanding laws is of course part of that process. So, we design complex machines using our scientific understanding of scientific laws, which are of course deterministic. But a watch is the result of our understanding of laws, not of the laws themselves. The key point is always the conscious subjective experience. As said many times, I am not a determinist: I believe in free agents, and in free will. Therefore, the definition of determinism that you quote is not true, for me. You must not confound believing that many things in reality are deterministic (which is what I believe, and what science correctly assumes) with believing that all reality is merely deterministic (which is a philosophical worldview that I completely reject). I believe in a deterministic approach to understanding the aspects of reality that are deterministic, but in no way do I believe that all reality is deterministic. As said many times, design, which is of course a major part of reality in my worldview, is not deterministic, because it is strictly connected to free will.

You ask: "Are you a proponent of 'Predestination'?" Not at all. But I believe that everything that exists in the physical plane, including us, is subject to many deterministic influences, even if we, as free agents, are never completely determined by those influences.

You say: "Did I link design and determinism? Don't think so. All I said was that design is indistinguishable from 'law'." But that is exactly the point with which I strongly disagree. Design is absolutely different, and distinguishable, from law. First of all, design requires conscious representations, by definition (see above), while laws operate without any conscious intervention, as far as we can observe. Second, design can generate complex functional information in objects, and laws cannot do that. Remember, complex functional information is the harnessing of specific contingent configurations towards a desired function. No law can do that. Therefore, design and law are two different things, and they can be perfectly distinguished.

g) You say: "Not at all clear. My objection here was that function depends on an agent which we don't see.
And complexity also seems dependent on function, hence on the agent." No. Function is what the object can do. We don't need the agent to assess function. ATP synthase can build ATP from a proton gradient in the cell. That is a fact. We need no agent to assess that. Complexity is objective too. We just ask ourselves: how many specific bits (in terms of necessary AA positions) are needed for ATP synthase to work as it works? Again, no reference to an agent is necessary to ask and answer that question. The point is that we know empirically that, if we observe complex functional information, we can safely infer a design origin, and therefore a conscious agent. But that is an inference from what we objectively observe. You say: "And of course, the same object can have different functions for different agents (example: the family computer that even the cat can play with)." Of course. But that's not important. I have made many times the example that a notebook computer can certainly be used as a paperweight. Why not? But the point is that the paperweight function is simple, while the computer function is very complex. Our object can implement both functions, and probably many more: it could be used, for example, as a weapon. But we will not infer design for the paperweight function, or for the weapon function, because for those functions the complexity needed is very low: any solid body with a few generic restrictions will do. Instead, the complexity linked to the computer function (a function that our object can certainly implement) is very high: we can certainly infer design for that function that we are observing in the object. Now, please go back to my comment #199, and read it again. For your convenience, I post here the relevant part:
1) Yes, my definition of FSI does use “a particular intelligent agent and a very specific function”. But it does not depend on them. Why? Because any observer can define any function, and FSI for that function can be measured objectively, once the function is objectively and explicitly defined. IOWs, I can measure FSI for any explicitly defined function that the object can implement. So, is there an objective FSI for the object? Of course not. But there is an objective FSI for each explicitly defined function that the object can implement. Now, please, consider the following point with great attention, because it is extremely important, and not so intuitive: If we can show even one single explicitly defined function that the object can implement as defined, and which is complex (see next point for that), we can safely infer that the object is designed. Excuse me if I insist: stop a moment and consider seriously the previous statement: it is, indeed, a very strong statement. And absolutely true. One single complex function implemented by an object is enough to infer design for it. Another way to say it is that non-designed objects cannot implement complex functions. Never.
When I say: "Stop a moment", I really mean it! :) gpuccio
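To restate the decision rule in the comment above as code: the inference attaches to an explicitly defined function, and one complex function is enough. This is only a sketch with placeholder bit values (the paperweight and weapon numbers are invented; only the 500-bit threshold comes from the discussion).

```python
THRESHOLD_BITS = 500  # the general threshold used in this discussion

# Hypothetical functional-information estimates for functions that the same
# object (a notebook computer) can implement.  The values are placeholders.
functions = {
    "paperweight": 3,                     # almost any solid body will do
    "blunt weapon": 5,                    # likewise very simple
    "general-purpose computer": 10_000,   # stand-in for a very complex function
}

def design_inferred(function_bits):
    """Infer design if at least one explicitly defined function is complex."""
    return any(bits > THRESHOLD_BITS for bits in function_bits.values())

print(design_inferred(functions))  # True: a single complex function suffices
```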
Bill Cole- The TSZ ilk love to equivocate. They flat out refuse to understand that Intelligent Design is NOT anti-evolution even though it has been explained to them. They are a truly pathetic lot and a total waste of time. ET
gpuccio Thank you. I have a question but I will wait for a reply from TSZ. bill cole
bill cole: What do they want to know? I have explained it myriads of times. As said, I use conservation through long evolutionary times to measure functional constraint. The alpha and beta chains of ATP synthase have been highly conserved for maybe billions of years. That is a very long evolutionary time. The bitscore between the human form and the bacterial form is a very good measure of how much the sequence is conserved, and therefore of its functional constraint. It expresses the probability of finding that level of homology by chance (indeed, the probability is the E-value, which is directly related to the bitscore; unfortunately, the E-value is set to 0 when it becomes lower than some threshold, and therefore cannot be used as a measure for the high levels of functional information we are discussing here). Therefore, the bitscore is an indirect measure of the functional information in the sequence. As said, the two chains in ATP synthase have more than 1000 bits of functional information, as evaluated by the bitscore between bacteria and humans. As explained many times, these two sequences have been exposed to at least 1-2 billion years of neutral variation since the split of bacteria from the human lineage. More than enough to change all that could change. 400 million years are more than enough for that, as many times debated. 1-2 billion years are really much more than enough. This is just the essence of the reasoning. Again, I don't know what they really are asking for. gpuccio
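On the aside about E-values: in the standard Karlin-Altschul formulation used by BLAST, the bit score S' and the E-value are related by E = m x n x 2^(-S'), with m the query length and n the effective database size. A small Python sketch in log space shows why very high bitscores push E far below anything BLAST can report, so it simply prints 0; the m and n values here are illustrative assumptions, not numbers from this thread.

```python
import math

m = 553        # illustrative query length (amino acids)
n = 5e10       # illustrative effective database size

# log10(E) = log10(m * n) - S' * log10(2); working in logs avoids underflow.
for bitscore in (60, 500, 1000):
    log10_E = math.log10(m * n) - bitscore * math.log10(2)
    print(bitscore, round(log10_E, 1))
# 60   -> about -4.6   (E around 2e-5, still reportable)
# 500  -> about -137.1 (far below reportable precision)
# 1000 -> about -287.6 (BLAST just reports E = 0; the bitscore stays usable)
```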
ET All this is true. Gpuccio is claiming a limit of 500 bits for blind and unguided processes. I think he has a very good argument and I would like to run as hard with it as possible. Joe F had an argument for natural selection adding 500 bits of information to the genome which was deeply flawed and I think he knows it at this point. Gpuccio did an excellent job of pointing out the weaknesses in Joe's argument. bill cole
gpuccio@275 a. Agree, but you probably mean ‘the scientific method’ because ‘science’ already means ‘knowledge’ from Latin.

b. Agree

c. Yes, we try to understand the deterministic component, but why is restricting research so important to you? In a first phase we try to just clarify a phenomenon without even worrying about causes. Example: “is there such a thing as dark matter, and if so what properties does it have?”

d. Probably on quantum effects. And even if we tried, studying quantum effects might not be feasible.

e. Micro and macro are not separate worlds. The micro impacts the macro for sure. I don’t think the scientific method should be restricted to classical physics. No one can enforce such a restriction anyway.

f. Your definition for design is cumbersome, full of unclear concepts and untestable – how can you measure meaning and intentionality? How is design related to determinism? See definition of determinism: “the doctrine that all events, including human action, are ultimately determined by causes external to the will.” Are you a proponent of “Predestination”?!? Did I link design and determinism? Don’t think so. All I said was that design is indistinguishable from “law”.

g. Not at all clear. My objection here was that function depends on an agent which we don’t see. And complexity also seems dependent on function, hence on the agent. And of course, the same object can have different functions for different agents (example: the family computer that even the cat can play with). Nonlin.org
gpuccio Would you mind doing a work-up of how you calculate the information content of ATP synthase and the Prp8 gene? A request came from TSZ. bill cole
gpuccio (298): Your comment sufficiently satisfies my curiosity about those two papers at this point. Thanks. OLV
uncommon_avles @ 286: So you admit to committing the tu quoque fallacy. Mung
OLV at # 296: The paper about near-cognate tRNAs is interesting and very complex. It is essentially about control, flexibility and stability of the translation process, and the role of modifications in the tRNA and ribosome. It is interesting, and I need some time to read it in detail. However, at present it is probably not so relevant to our discussion. The paper about the evolution of tRNAs is much less interesting, IMO. While the evolution of tRNAs can be interesting, its possible connections with hypothetical models of the evolution of the genetic code seem to be pure imagination. gpuccio
Bill Cole @ 278- Ask Rumraket how it determined that the sequence provided evolved by means of blind and mindless processes. That's the point. It is also about blind and mindless processes adding information and not just mutations. For all we know the mutations are part of the design of the organisms. They talk about gene duplication adding information but two copies of the same book is not more information. Also what they need to do and cannot is demonstrate that gene duplication followed by changes that make it code for a different protein was accomplished by blind and mindless processes. And the newly duplicated gene needs a new binding site before it can be expressed. That means their position has more issues to explain but cannot. ET
OLV (281):
Does this relate to semiosis too? Deciphering the reading of the genetic code by near-cognate tRNA PDF
Please, would somebody comment on this too? Rooted tRNAomes and evolution of the genetic code BTW, note other tRNA-related papers referenced in the same webpage. Especially interested in reading Upright BiPed's comments, because I like his interesting website "Biosemiosis", but I would also like to read comments from gpuccio and kairosfocus. Obviously, other commenters are also welcome. Thanks. OLV
uncommon_avles:
The flagella’s structure is used by ID to show complexity.
That is false. The flagella’s structure is used by ID to show specified and irreducible complexity. Huge difference.
Isn’t that conflating form with information ?
It takes information to get the correct sequences for the proteins. And it takes information to assemble those proteins into the proper configuration. Francis Crick said that: Information means here the precise determination of sequence, either of bases in the nucleic acid or on amino acid residues in the protein. ET
gpuccio, That makes sense. Thanks. OLV
OLV at #283: "I think the ID folks associate Functional Specified Complex Organization with Functional Specified Information, but they don’t always quantify it." They are two names for the same concept. Of course, we only quantify functional information when it is empirically possible to do it. gpuccio
PS: Orgel, 1973:
living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . . [HT, Mung, fr. p. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure.
[--> this is of course equivalent to the string of yes/no questions required to specify the relevant J S Wicken "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, -- here and -- here -- (with here on self-moved agents as designing causes).]
One can see intuitively that many instructions are needed to specify a complex structure. [--> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions.  [--> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes [--> Orgel had high hopes for what Chem evo and body-plan evo could do by way of info generation beyond the FSCO/I threshold, 500 - 1,000 bits.] [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196.]
kairosfocus
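Orgel's "minimum number of instructions" idea in the quote above can be made concrete with a toy comparison (a sketch only, using description length as a crude stand-in for the number of instructions): a repeating string has a very short recipe, while a specified aperiodic sequence has to be spelled out in full.

```python
# Crude stand-in for "number of instructions": the length of the shortest
# description we can write down for each string in this toy scheme.
repeating = "AB" * 100                               # simple, ordered: 200 characters
specified = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"      # an arbitrary specific sequence

recipe_for_repeating = 'repeat "AB" 100 times'       # a short recipe regenerates it all
recipe_for_specified = specified                     # no shorter recipe: spell it out

print(len(repeating), len(recipe_for_repeating))     # 200 vs 21
print(len(specified), len(recipe_for_specified))     # equal: description ~ sequence
```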
UA, When form is functionally specific, grounded in particular arrangements and coupling of parts to make a working whole, that "form" is inFORMational in the sense of functionally specific, complex organisation and associated information. That is, the form is informational. This is readily seen from something I have routinely pointed to. Namely, how some reasonable and effective description language may specify the requisite form as a structured sequence of answers to Y/N Q's, AutoCAD being a capital example. Nor is this insight a dubious notion of "those IDiots" or the like. In the specific context of functional forms found in the world of life, it was put on the table across the 1970's by Orgel and Wicken. In fact, that was documented as part of the formative influence behind the original ID works, e.g. Thaxton et al in TMLO, c 1984. On irreducibly complex cores of functionally specific structures, the just outlined obviously strongly applies. Indeed, IC is one particular manifestation of FSCO/I. It is fairly common as anyone familiar with the need to have the right car part, properly installed, can testify to. When it comes to typical attempts to dismiss the significance of IC, it should first be noted that knockout gene studies commonly used to identify function work by disabling functional wholes by blocking a relevant, targetted part. So, the rhetoric of dismissal distracts from a highly material fact: IC is known to be common in biology to the point of being exploited experimentally to draw conclusions on gene function. Nor is this a novelty in this context, in the notoriously badly ruled Dover trial, Scott Minnich reported as an expert witness on how such studies were applied to the flagellum. The significance of that was of course suppressed in the ruling and in the reporting. That news reporting is demonstrably a case of agit-prop media trumpeting to push an ideologically loaded narrative regardless of credible countering facts. (For a current case in point on such media bankruptcy on a story, kindly see here -- note the date on the report.) Going beyond, the commonly encountered exaptation argument fails, exploiting failure to connect dots on why IC exists. Menuge's five criteria apply:
IC is a barrier to the usual suggested counter-argument, co-option or exaptation based on a conveniently available cluster of existing or duplicated parts. For instance, Angus Menuge has noted that:
For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:
C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function. C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time. C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed. C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant. C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.
( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)
In short, the co-ordinated and functional organisation of a complex system is itself a factor that needs credible explanation. However, as Luskin notes for the iconic flagellum, “Those who purport to explain flagellar evolution almost always only address C1 and ignore C2-C5.” [ENV.]
KF kairosfocus
uncommon_avles: Thanks to you! :) gpuccio
gpuccio @ 287,288 OK. The concept is clearer to me now. Thanks. uncommon_avles
uncommon_avles at #286: Nobody is conflating form with information. It's not form that counts, but function. And, in particular, functional information, the specific contingent bits necessary to implement that function. See my comment #287. The same reasoning that I have shown for the hexameric component of the F1 part of ATP synthase can be applied to the flagellum. It's not the form, but what the form can do because of its specific, contingent configuration. gpuccio
uncommon_avles at #282: The flagella's structure is used by ID to show an example of irreducible complexity. Of course, each component of the bacterial flagellum is probably functionally complex. The point of IC is that, if there is an irreducible core for a function, then the functional complexity must be computed for that core, because the function is implemented only if all the individual components of the core are present. Therefore, in that particular case, we have to multiply the probabilities (and therefore to sum the functional complexity) of the components. I will give you a very simple example of IC: the alpha and beta chains of ATP synthase. Those two chains form a structure (the F1 hexamer) that is irreducibly complex. No single chain can implement the function; both are necessary. Therefore, their functional complexity (which is, as estimated by the E. coli - human conservation, 561 bits for the alpha chain and 663 bits for the beta chain) must be summed, and the functional complexity of the F1 hexamer becomes 1224 bits, because of the IC of the structure. gpuccio
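A quick way to check the arithmetic in the comment above: multiplying the probabilities of the two chains is the same as adding their bit values. A minimal Python sketch, using only the numbers quoted above; the exact-fraction trick is there only because 2^-561 and 2^-663 underflow ordinary floats:

```python
from fractions import Fraction

# Bit values quoted above for the two chains of the F1 hexamer
# (as estimated from the E. coli - human conservation).
alpha_bits = 561
beta_bits = 663

# The probabilities implied by those bit values, kept exact with Fraction
# because they are far too small for ordinary floats.
p_alpha = Fraction(1, 2 ** alpha_bits)
p_beta = Fraction(1, 2 ** beta_bits)

# For an irreducibly complex core both chains are required, so the
# probabilities of the two independent targets multiply...
p_core = p_alpha * p_beta

# ...which is the same as summing the bits (-log2 of an exact power of two).
core_bits = p_core.denominator.bit_length() - 1
print(core_bits)               # 1224
print(alpha_bits + beta_bits)  # 1224
```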
Upright BiPed @ 284 This is what you said @ 270 You might want to keep in mind that when you make statements about a “500 bit threshold” you are talking about the measurement of a description (a specification) encoded in a medium of information. The ‘energies and constituents’ of an atom is not a description of the energies and constituents of an atom. You are making an (anthropocentric) category error; conflating form with information. and I have answered thus:
The flagella’s structure is used by ID to show complexity. Isn’t that conflating form with information ?
You are the one who is choosing not to address that contradiction. If you can be clear about what is the relation between probability, bit and measurement of an 'encoded medium'(whatever that is), I can attempt to answer it. uncommon_avles
bill cole at #278: I don't know what Rumraket's point is, if he really has a point. However, the answer is easy enough. Here is what I would do. I would translate the nucleotide sequence and get the AA sequence of the protein. Then I would BLAST it, and reconstruct its evolutionary history. If the protein has a human form, I could use my procedure to evaluate human conserved information, but of course conserved information can be evaluated along any appropriate evolutionary line of descent. If the protein exhibits conserved information through some long enough evolutionary separation, let's say 400 million years, I would simply consider the level of conserved information as given by the BLAST bitscore as a reliable measure of its functional information. If the bitscore is above 500 bits, I would definitely infer design for that functional information. Of course, there is always the direct approach. The protein can be studied in the lab, and mutational studies can be implemented, and someone could dedicate his own life to researching the relationship between sequence and function for that protein. With all the obvious limitations of that direct approach. Maybe Rumraket could finance the research. An important point: if there is not enough information from the evolutionary history of the protein about its function and functional conservation for long evolutionary times, I would simply not make any design inference for it. It's simple enough. gpuccio
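A minimal sketch of the decision rule just described, assuming the conserved BLAST bitscore for a long evolutionary separation has already been obtained. The function name and the third, weakly conserved example value are illustrative assumptions; the two ATP synthase bitscores are the ones quoted elsewhere in this thread:

```python
def infer_design(bitscore_bits, separation_million_years=400, threshold_bits=500):
    """Treat a BLAST bitscore conserved across a long evolutionary separation
    as an estimate of functional information; infer design only above 500 bits."""
    if separation_million_years < 400:
        return None  # not enough evolutionary separation: no inference is made
    return bitscore_bits > threshold_bits

print(infer_design(561))  # ATP synthase alpha chain (E. coli vs. human) -> True
print(infer_design(663))  # ATP synthase beta chain  (E. coli vs. human) -> True
print(infer_design(120))  # a hypothetical weakly conserved protein      -> False
```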
ua,
You seem to think that information (bits) is something different than probability of structure/pattern/ event/process.
What an utterly useless conceptualization. I said upfront that commenting to you again was against my better judgment. I should have listened. You entered this thread arguing that a “500 bit” informational threshold was a “farce” because at the atomic level (i.e. the specificity of energies and constituents within an atom) everything on earth is above the threshold. I pointed out that you are conflating the measurement of an encoded medium with something that isn’t a medium to begin with, and thus, doesn’t encode any information. You didn’t address that contradiction. Upright BiPed
uncommon_avles, I think the ID folks associate Functional Specified Complex Organization with Functional Specified Information, but they don’t always quantify it. However, gpuccio can explain this better. OLV
Upright BiPed @ 270
You might want to keep in mind that when you make statements about a “500 bit threshold” you are talking about the measurement of a description (a specification) encoded in a medium of information. The ‘energies and constituents’ of an atom is not a description of the energies and constituents of an atom. You are making an (anthropocentric) category error; conflating form with information.
The flagella’s structure is used by ID to show complexity. Isn't that conflating form with information ? You seem to think that information (bits) is something different than probability of structure/pattern/ event/process. It is not. It is simply the -Log2 of probability. Log is used for convenience, as probability is multiplicative while Log is additive. Negative Log is used to assign more ‘information’ to less probability. At the end of it all, ‘information’ is just the -Log2 of probability. uncommon_avles
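The -Log2 convention described above can be checked in a couple of lines; the example probabilities below are arbitrary, chosen only to show that probabilities multiply while the corresponding bit values add:

```python
import math

def information_bits(p):
    # Information as defined in the comment above: -log2 of the probability.
    return -math.log2(p)

p1, p2 = 1 / 8, 1 / 4

print(information_bits(p1 * p2))                    # 5.0 bits for the joint event
print(information_bits(p1) + information_bits(p2))  # 5.0 bits, the same sum
```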
Does this relate to semiosis too? Deciphering the reading of the genetic code by near-cognate tRNA PDF OLV
bill cole: Perhaps the post-transcriptional modifications that lead to the mature mRNA make any reference to single genes seem a little vague or imprecise. Maybe I'm wrong, but I think gpuccio deals with BLASTing actual protein sequences. However, he can clarify this better. You may kindly ask your interlocutor to tell you what they think of this: What is a gene? What's the current status of the neo_darwinian theory? Please, note that the sources of the above links are not ID-friendly. OLV
gpuccio Here is a comment from Rumraket to Mung at TSZ. Would love to hear your thoughts.
If I give you the DNA sequence for a protein coding gene that is known to be functional in some organism, will you do us a favor and calculate how much functional information that gene constitutes? Then we can proceed to analyze whether mutations that affect that function has an effect on the amount of information in the gene. Deal?
bill cole
KF, Excellent review. Thanks. OLV
OLV, Let's look at the opening para of that NCSE propaganda piece -- and yes, this is a known advocacy group fully meriting that description:
The origin of biological complexity is not yet fully explained, but several plausible naturalistic scenarios have been advanced to account for this complexity. “Intelligent design” (ID) advocates, however, contend that only the actions of an “intelligent agent” can generate the information content and complexity observed in biological systems.
Let's take it in stages, pointwise: >>The origin of biological complexity is not yet fully explained,>> 1: Confession that they do not have an actual, viable, empirically, observationally justified account of how FSCO/I in living systems came about by demonstrated result of blind chance and/or mechanical necessity. 2: Had they had such, they would trumpet it, and there would be no biological ID case or movement. As, the design inference explanatory filter would be broken. 3: So, the way this begins gives away the game: they intend to impose methodological naturalism and ideologically lock out the only known, empirically grounded and search challenge plausible causal origin of FSCO/I. 4: Namely, design, or intelligently directed configuration; for which there is a trillion member observational base. 5: Note, this includes alphanumeric string based codes and associated communication and cybernetic systems, which are at the heart of cell based life. >> but several plausible naturalistic scenarios>> 6: Which are just-so stories without empirical warrant, or they would have been triumphantly announced as demonstrated fact rather than "plausible naturalISTIC -- a clue on ideological imposition -- scenarios." 7: Plausible, once the actually empirically founded source of FSCO/I has been locked out. >> have been advanced to account for this complexity.>> 8: Scenarios imposed in the teeth of empirical evidence. >>"Intelligent design" (ID) advocates, however, contend that only the actions of an "intelligent agent" can generate the information content and complexity>> 9: Notice, scare quotes and dismissal as advocates rather than qualified scientists and scholars in their own right who are backed by empirical evidence on the origin of FSCO/I and analysis on the needle in haystack search challenge. 10: Notice, the loaded "can," where the trillion member observation base shows that the only, frequently, observed cause of FSCO/I is intelligently directed configuration, the act of intelligent agents. As NCSE exemplifies in its text. >> observed in biological systems.>> 11: So, from D/RNA and associated cellular execution machinery on up, FSCO/I is observed in living cell based life forms. 12: No actually empirically warranted case of blind chance and necessity creating it is on the table; while, a trillion member base and linked analysis shows that intelligent design can and does create FSCO/I. 13: In the case of D/RNA and linked execution systems, these had to be in place BEFORE you could have protein synthesising, self-replicating cells. 14: This is the province of physical and information sciences, including especially statistical thermodynamics, physics and chemistry. 15: These clearly point to one empirically warranted, needle in haystack plausible cause: intelligently directed configuration acting at the origin of cell based life as we know it. 16: But of course, that is ideologically locked out. KF kairosfocus
Nonlin.org: Let's see if we can find some common ground. I will do my best to clarify my position better: a) We seem to agree that science "is not about knowing for sure". That's good. But I would like to be sure that you agree with me that science is valid and important and useful, even if it is not about knowing for sure, indeed even more so for that reason. b) I have never said that reality is "completely deterministic". You misunderstand me. I agree that quantum reality has a probabilistic component, and I believe that such a component is intrinsically probabilistic. Moreover, I believe that consciousness and free will are independent components of reality, and that they interact with physical reality, probably by consciously harnessing the probability component of a specific quantum interface. That certainly happens in humans, and it's my favourite model to explain design in the biological reality. So, my model of reality is not "completely deterministic". It allows a well defined space for intrinsic probability (quantum events) and for conscious and free interventions, which are neither deterministic nor probabilistic. c) That said, I think that almost all the events that science studies at the macroscopic level, those that are well described by classical physics, are deterministic, in the sense that they can be very well described in terms of necessity (the laws of classical physics), or in other cases in terms of non-intrinsic probability (that kind of probabilistic description that is only a way to describe deterministic systems with many independent and unknown variables, like in the case of dice or of polygenic traits). d) In all those cases, the scientific approach has no need to consider quantum effects, because they are irrelevant. The biochemical level is, in most cases, a deterministic scenario that has nothing to gain from considering quantum effects. But there are probably important exceptions. Photosynthesis is one of them, and there could be other biochemical scenarios where quantum effects are important. Certainly, they could be important at the level of neurons and synapses, where the interface with consciousness is reasonably to be found. e) In general, events at the level of subatomic particles are dictated by quantum mechanics, and macroscopic events are best described in terms of classical physics. As already said, there are exceptions, but exceptions do not change the simple fact that in most cases the scientific approach must be appropriate for the scenario we are describing. Again, science is precious because it is not about knowing for sure. But it is about knowing, definitely. f) That said, my arguments for design remain absolutely valid. Design is not determinism. It is the demonstration of the action of consciousness upon reality, and consciousness is neither deterministic nor probabilistic. Design is free will acting at the cognitive level, infusing matter with meaningful and intentional configurations. My main objection to your arguments is that I don't accept your confusion of design with determinism. They are two completely different things. g) Inferring design is done by recognizing complex functional information in objects. That allows us to detect design, but only if and when it is detectable. Functional information is a key concept, and it relies on detecting target spaces that are contingent, functional and highly unlikely in a system where only deterministic and/or probabilistic processes (either non-intrinsic or intrinsic) are acting.
Contingent configurations (IOWs configurations that cannot be explained by deterministic laws) that are functional and complex (IOWs completely unlikely as a result of random effects due to many independent hidden variables, or even to the intrinsic probability of quantum mechanics) are safe markers of design: the meaningful, intentional intervention of consciousness on matter. OK, that's a summary of my position, as clear as I can make it. Whatever your comments, please make your position equally clear. gpuccio
This seems old and the author unknown, but it's interesting how it argues against ID: Biological Complexity OLV
UB [attn UA], electron orbitals are matters of natural law -- and at the cosmological level, fine tuning may be a relevant issue. Such is of course exactly what functionally specific complex organisation and/or associated information is about. Chance-driven stochastic processes may be strictly deterministic but sensitive to a host of uncontrolled factors giving rise to random patterns; I think here of tossing a die that tumbles and settles. Such may also be random in principle, as seems to happen with various quantum-linked phenomena like Zener noise or sky noise. Chance processes are distributed in config spaces, which statistical thermodynamics tells us will be dominated by relative statistical weights of clusters of microstates. Under such conditions, though high contingency is involved, for complex systems on the scales discussed, the practical observability of FSCO/I on blind chance and/or mechanical necessity is effectively nil. This is due to relative statistical weights of clusters and the predominant group phenomenon behind, for instance, equilibrium and the statistical form of the second law of thermodynamics, etc. As comments in this thread show, FSCO/I as coded textual information (here a linguistic phenomenon) is readily produced by intelligently directed configuration. The observation base is beyond a trillion. That is, there is a highly reliable inference from FSCO/I to design as key causal factor. KF kairosfocus
OLV@257 and Upright BiPed@254 I hope we're all here to learn from each other. Atlantic OP was already acknowledged. gpuccio@252 / 253 The fact that the double slit waveform is not the normal distribution should tell you this is different than your deterministic system. Nonlin.org
gpuccio@252 / 253 No doubt science is not about knowing for sure. But your “completely deterministic” claim is extreme and not adequately supported in my opinion. Here is wisdom from a guy that uttered stupidities most of the time: “Extraordinary claims require extraordinary evidence”. We don’t have to agree on this. The double slit experiment shows determinism to fail. Think about it: you set up a perfectly deterministic configuration and do the experiment once with particle A ending up at Position A. Then you repeat the experiment with particle B ending up at Position B. Nothing changed in your 100% deterministic setup yet every time you repeat the experiment you don’t know (except statistically) where your particle will end up even if you calibrate your setup to the n-th degree. Double slit is totally different than your normal distribution of outputs in a manufacturing plant (your dice model) where tightening the inputs / set-up results in a tighter output distribution with the theoretical conclusion that perfect inputs / set-up will result in perfect outputs (determinism). The probabilistic aspect of these systems should theoretically come from hidden quantum effects, but for real life setups they come from inputs / set-up variability (chaos theory). Not quite what I was looking for: https://www.schneier.com/blog/archives/2009/08/non-randomness.html and https://www.insidescience.org/news/dice-rolls-are-not-completely-random . And I am pretty sure that’s why we end up with normal distribution all the time: because you have big contributors and small contributors to variability. Nonlin.org
Against my better judgement I’ll make another comment on this thread.
Electron orbit has CSI because it has to be placed in precise energy levels in order to avoid falling into the nucleus. The protons have to be of specific numbers in order to form an element. The protons also have to be bound by precise strong nuclear forces to ensure protons don’t repel and disintegrate the nucleus.
You think an atom "has CSI" because it has to have precise energies and constituents in order to be what it is? That would surely help to explain your confusion on the subject. You might want to keep in mind that when you make statements about a "500 bit threshold" you are talking about the measurement of a description (a specification) encoded in a medium of information. The 'energies and constituents' of an atom is not a description of the energies and constituents of an atom. You are making an (anthropocentric) category error; conflating form with information. It seems clear to me that you intend to dig in your heels and fight to keep the error. Nothing can be done about that. Upright BiPed
uncommon_avles at #267: I think you are right on this point. Good. :) gpuccio
UA, surely, you recognise rounding issues? I have given rounded values. Calling back up my HP50, the direct exponent and log calc give, to 4 places, 2^500 = 3.2734*10^150. Fifteen digits are available in principle if you want them, but the point should be clear. I add: the log of a number greater than 1 will be positive, and I reported the actual rounded scope of a config space for 500 bits. Also, for 1,000 bits, 1.07*10^301 possibilities is rounded. KF kairosfocus
KF@265 AND GP @ 266 Heh. I will go through your other replies; meanwhile, please make up your mind - is the bits threshold +500 or -500?!! -Log2[3.27*10^150]= -499.99. If you try to make it positive with -Log2[3.27*10^-150]= -Log2[3.27/10^150], you get 496.58. The correct answer is what I posted above @261: -Log2[3.05×10^-151]=500. uncommon_avles
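The sign and rounding dispute reduces to a few one-liners; a minimal Python check of the values being thrown around (outputs rounded as noted in the comments):

```python
import math

print(2 ** 500)               # a 151-digit integer, approximately 3.2734 * 10^150
print(math.log2(3.2734e150))  # ~500.0  (the log of a number > 1 is positive)
print(-math.log2(1e-150))     # ~498.29
print(-math.log2(3.05e-151))  # ~500.0
```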
KF: You are right, of course! :) gpuccio
GP, pardon but my check is 2^500 = 3.27*10^150. KF kairosfocus
uncommon_avles at #261: OK, let's spend our time this way. I hope that someone can benefit from the clarifications. Not you, probably, given your attitude.
It seems ID is restricted only to protein sequences and not to any other events showing CSI:-)
Not at all. I discuss protein sequences, because functional information is easier to measure in them. If you want to compute functional information for other types of structures, be my guest. ID can be applied to any object exhibiting functional information, but of course the difficulties in measuring it are different according to the type of object and context.
Which is precisely what I am challenging. You cannot objectively compute the CSI because the probability of an event/ process (say protein folding) cannot be described as a ratio at all. You need the probability density function with shape, scale and location parameters. Eg if the pdf of an event is presumed to be “Generalized Gamma”, you need to know not just that it is general gamma but also the shape parameters (k, alpha), the scale parameter (beta) and the location parameter ( gamma).
Again, nonsense. Protein folding has nothing to do with the reasoning here. Nor are we computing the probability of the function. What we are computing is the ratio between the target space and the search space, IOWs the probability of finding a sequence that can implement the function by a random walk in the search space of possible sequences. It is an entirely different thing. The target space is the set of the sequences of a certain length that can implement the function as defined. The search space is the total number of sequences of that length that can potentially be reached in the system. The function is only used to generate a binary partition in the search space: sequences that can implement it, and sequences that cannot implement it. The search is simply a random walk in the search space of sequences. It has nothing to do with the function, because it is a blind random walk. All unrelated sequences have essentially the same probability of being reached, and therefore a uniform distribution can be assumed. Even if the distribution is not perfectly uniform, the important point is that the distribution has nothing to do with the function, because it is the distribution of the results of a random walk in a sequence space. IOWs, there exists no distribution that can favor some specifically functional sequence, because the search space has absolutely no information about the function. This is already obvious for the protein sequence space, but it becomes absolutely obvious, beyond any possible doubt, if you consider that the real space where the random walk takes place is the space of nucleotide sequences, which of course can never have any information about protein functionality, because it is only symbolically related to protein sequences, and of course the random walk of random mutations at the level of DNA has absolutely no information about that. So, your rambling about probability distributions has no meaning at all.
10^150 is the universal probability bound, that is where you get the 500 bit threshold from. -Log2[10^-150]=498.29. More precisely, it is -Log2[3.05x10^-151]=500.
I know very well what the UPB is. But you had said, literally: "It doesn't matter what you call it. It is just the ratio of presumed probability of an event to 10^150." which has no meaning at all. The ratio of the target space to the search space is the probability, which becomes the FI if expressed as -log2. 10^150 is a threshold that we can use to categorize FI as a binary variable (complex: yes/no). Using a threshold to categorize a numerical variable is not a ratio. Your statement was simply wrong and meaningless, and I have corrected it.
Unless you are restricting the CSI to protein sequence and related events alone, probability has to be presumed because processes and events in cell are stochastic.
This is really beyond any understanding. What do you mean? I suspect that you really understand nothing of ID theory. Indeed, your arrogance has the distinct flavour of ignorance. I apply functional information to protein sequences, as explained. It can potentially be applied to any object, biological or not. Probability must always be assessed, not presumed. It is usually impossible to measure probability exactly, but it can often be measured indirectly, by approximation. That is the case for protein sequences, where conservation through long evolutionary times allows us to estimate functional constraints. The probability distribution in the sequence space can also be estimated realistically (see the considerations above). The only thing that cannot be understood realistically seems to be your statement. gpuccio
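A toy version of the calculation described above: a function is defined, used only to partition the sequence space into a target set and its complement, and the functional information is -log2 of the ratio. The four-letter alphabet, the short length and the motif-based "function" below are deliberately tiny illustrative assumptions, not a biological claim:

```python
import math
from itertools import product

ALPHABET = "ACDE"   # toy 4-letter alphabet (stand-in for the 20 amino acids)
LENGTH = 8          # toy sequence length

def implements_function(seq):
    # Toy binary "function": the sequence must start with the motif "ACE".
    # In the real case this would be an explicitly defined, observable function.
    return seq.startswith("ACE")

search_space = [''.join(s) for s in product(ALPHABET, repeat=LENGTH)]
target_space = [s for s in search_space if implements_function(s)]

# Functional information for this (toy) function:
fi_bits = -math.log2(len(target_space) / len(search_space))
print(len(search_space))  # 65536
print(len(target_space))  # 1024
print(fi_bits)            # 6.0  (three fixed positions * log2(4) = 6 bits)
```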
UA, the D/RNA based info-comms system and linked protein synthesis sit at the root of cell based life, and are the heart of the system of the cell, as proteins are its workhorse molecules and key technology. This system uses alphanumeric, framed codes -- already, this is language antecedent to and a key causal factor in cell based life -- with start/stop, regulation, interwoven codes, splicing systems and more. In addition, codes for proteins come in deeply isolated clusters in AA sequence space (much less the wider space of C-chemistry!), leading directly to the needle in haystack, islands of function phenomenon. Thus, deep search challenge. For, in many cases, we can readily show that the complexity involved exceeds 500 to 1,000 bits. That threshold is key, as at the two ends we can readily show that the other known source of highly contingent outcomes apart from intelligent action, chance, is impotent to search enough of a configuration space of that scope to be more than an appeal to statistical miracle in the teeth of a readily demonstrated alternative: intelligent, purposefully directed configuration. As we see from the text of your and my comments in this thread. As has already been outlined in-thread, fast organic rxn rates make 10^12 to 10^14/s a maximum plausible observation rate. 10^17 s is of the order of time since the singularity on the usual timeline. The sol sys is ~ 10^57 atoms, mostly H, but we can ignore that point of generosity. Likewise, the observed cosmos is ~ 10^80 atoms, mostly H then He. Give that many sol system atoms each a tray of 500 coins, flipped every 10^-14 s, and use that as a search model. Likewise for the cosmos, use 1000 coins each. Or, if you want something more "scientific," try that many atoms of a paramagnetic substance in a weak B field with parallel and antiparallel states. This is a simple model giving state spaces of 3.27*10^150 to 1.07*10^301 possibilities. Add the indices and you see: [a] 57 + 14 + 17 => 10^88 possible observations, a factor of 10^-62 of the space for 500 bits. For the observed cosmos, 80 + 14 + 17 => 10^111 possible observations, a factor of 10^-190 of the space for 1,000 bits. Islands of function are thus patently empirically unobservable on blind chance processes, as search possibility rounds down to effectively no search in both cases. You may suggest that there are laws that write in cell based life in terrestrial planets in habitable zones. Fine, you just added a huge quantum of fine tuning to the already formidable cosmological design inference. Or, you may wish to posit a quasi-infinite multiverse. Fine, that then runs into Leslie's deeply isolated fly swatted by a bullet -- LOCAL fine tuning is just as wondrous as global, pointing to a sharpshooter with a tack-driver of a fine tuned rifle. It matters not that some zones on the wall are positively carpeted with flies; what we observe on the logical structure and quantity of physics is not plausible on a blind multiverse hyp. (We SHOULD be seeing a Boltzmann brain world or the like.) The design inference on C Chem, aqueous medium, code using cell based life in a fine tuning world is quite robust, thank you. Regardless of ideologically loaded dismissive rhetoric. And, it puts design in both the world of life from the root up and in the cosmos from the root of reality up. That is what advocates of self-referential, self-falsifying evolutionary materialistic scientism and fellow travellers (panpsychism being the latest to pop up here at UD) face.
With those sorts of alternatives on the table, the design inference is a no-brainer, no sweat choice. KF kairosfocus
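The orders of magnitude kairosfocus quotes can be reproduced directly; the figures below are the ones in his comment (atom counts, observation rate and time), and Python's big-integer arithmetic keeps the ratios honest:

```python
# Solar-system-scale 'search': ~10^57 atoms, ~10^14 observations/s, ~10^17 s.
sol_observations = 10 ** (57 + 14 + 17)       # 10^88 possible observations
space_500_bits = 2 ** 500                     # ~3.27 * 10^150 configurations
print(sol_observations / space_500_bits)      # ~3.1e-63, i.e. the ~10^-62 factor quoted

# Observed-cosmos-scale 'search': ~10^80 atoms and a 1000-coin (2^1000) space.
cosmos_observations = 10 ** (80 + 14 + 17)    # 10^111 possible observations
space_1000_bits = 2 ** 1000                   # ~1.07 * 10^301 configurations
print(cosmos_observations / space_1000_bits)  # ~9.3e-191, i.e. the ~10^-190 factor quoted
```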
uncommon alves:
It seems ID is restricted only to protein sequences and not to any other events showing CSI
Then you are ignorant of ID. That means you need to go and educate yourself and then come back to discuss it. And there isn't any evidence that all processes and events in cells are stochastic. If there were we wouldn't be talking about ID. ET
gp @ 260
The point, as even you should have understood at this point, is not being generically “complex”, but exhibiting complex functional information. Is that so difficult to realize? Protein sequences are functionally complex, because they can implement a specific function by their specific sequence.
It seems ID is restricted only to protein sequences and not to any other events showing CSI:-)
Nonsense! ID is based on the objective computation of the functional complexity of an objectively observed function.
Which is precisely what I am challenging. You cannot objectively compute the CSI because the probability of an event/ process (say protein folding) cannot be described as a ratio at all. You need the probability density function with shape, scale and location parameters. Eg if the pdf of an event is presumed to be "Generalized Gamma", you need to know not just that it is general gamma but also the shape parameters (k, alpha), the scale parameter (beta) and the location parameter ( gamma).
????? What do you mean? Do you even understand what you are saying?
10^150 is the universal probability bound, that is where you get the 500 bit threshold from. -Log2[10^-150]=498.29. More precisely, it is -Log2[3.05x10^-151]=500. Unless you are restricting the CSI to protein sequence and related events alone, probability has to be presumed because processes and events in cell are stochastic. uncommon_avles
uncommon_avles at #255: Of course I have no problems at all in assuming, indeed in firmly believing, that QM explains atomic orbits. It explains them perfectly well, and with an extreme level of precision. I hope we agree at least on that. That said, you ask:
why would you assume biological processes are far more ‘complex’ at all?
The point, as even you should have understood at this point, is not being generically "complex", but exhibiting complex functional information. Is that so difficult to realize? Protein sequences are functionally complex, because they can implement a specific function by their specific sequence. The sequence of a protein is not dictated by any biochemical law in the biological world: it is dictated by the sequence of nucleotides in the protein coding gene. The sequence of nucleotides, again, is not dictated by any biochemical law: the four nucleotides can exist in any order in DNA. IOWs, sequences (both of nucleotides in a protein coding gene and of AAs in a protein) are fully contingent. Each AA position or nucleotide position is a configurable switch. IOWs, each position can assume any of the 20 (for AAs) or 4 (for nucleotides) values that are available in the biological context. There is no biochemical law that can dictate the sequence. The sequence is merely informational. Therefore, if the sequence we observe is functional (as it is), we can compute a target space and a search space, and compute the specific functional information for that function. Can you see the difference with atomic orbits? Atomic orbits can only be those that the laws of QM dictate. They are as they are: they are quantum wave functions. Math describes them perfectly well. Protein sequences are contingent, and their functionality points to a design inference, exactly like the meaning of my words in this post, or the functionality of bits in software code. Then you say this strange thing:
ID is based on an individual’s assumption of CSI in a process/ structure as is clear from the atom’s example.
Nonsense! ID is based on the objective computation of the functional complexity of an objectively observed function. There are no assumptions there. And there is no functional complexity at all in the atom example, as explained. Even you should be able to understand that.
If you think some structure has CSI, you concoct some bits higher than 500, if not, you show bits below 500.
This is simply a lie. And a very silly one. Then you say (to UB) this even stranger thing:
It doesn’t matter what you call it. It is just the ratio of presumed probability of an event to 10^150. The presumption is what is under dispute.
????? What do you mean? Do you even understand what you are saying? "the ratio of presumed probability of an event to 10^150"? The only ratio in ID is the ratio of the target space to the search space. That ratio is the probability of finding a sequence in the target space. There is no presumption. And 10^150 is simply an appropriate complexity threshold for the inverse of that ratio, corresponding to 500 bits when the ratio is expressed as -log2. It is not part of the ratio itself. But I suppose that, at this point, you have lost any reasonable credibility. gpuccio
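A short sketch of the "configurable switch" point made earlier in this exchange: since each position of a sequence can take any of 20 amino acids (or 4 nucleotides), the size of the search space, and hence the maximum contingency a sequence of that length can carry, follows directly. The example lengths are arbitrary:

```python
import math

def sequence_space_bits(length, alphabet_size):
    # Each position is a configurable switch with `alphabet_size` possible states,
    # so the search space holds alphabet_size**length configurations,
    # i.e. length * log2(alphabet_size) bits of contingency.
    return length * math.log2(alphabet_size)

print(sequence_space_bits(150, 20))  # ~648 bits for a 150-AA protein
print(sequence_space_bits(450, 4))   # 900 bits for the 450-nucleotide gene coding it
```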
gpuccio, Thanks. OLV
Upright BiPed (254):
Nonlin, allow me to offer you some advice. Here is how you do this: “GP, thanks for the conversation. You’ve given me some things to think about. Take care”
That seems humble and prudent. OLV
Nonlin.org (244):
OLV@235 Look up “Stanford marshmallow experiment”
gpuccio (246):
The Stanford marshmallow experiment, and similar, are just experiments that measure personality traits. They are not measuring free will, but rather those already existing personality traits that constrain our free choices. There is no way for science to investigate free will.
Nonlin.org (244):
OLV@236 That’s the title of the OP from the Atlantic – see the link.
gpuccio (247):
Excuse my intrusion, but OLV is right. The title from The Atlantic is: “How a Quarter of Cow DNA Came From Reptiles”, which is correct. Your title is: “Did a Quarter of Cow DNA Came From Reptiles?” which is wrong. Therefore, OLV’s friendly suggestion: “shouldn’t it be “come” instead of “came”?” is perfectly correct.
OLV
Mathematics won't excuse you of the category error planted in the middle of your argument. Upright BiPed
GP @242 Interesting! I thought ID considers the atom to be something supra-natural? Quoted from this OP:
The stability—indeed, the very existence—of the atom suggests something supra-natural. But since the materialistic worldview does not allow for that, its adherents were challenged to discover a mechanism by which atomic stability could be maintained. However, instead of making a discovery, they settled for coming up with a term, “quantum confinement,” which is a scientific label describing, rather than explaining, the phenomenon.
If you have no problem in assuming QM explains atomic orbits, why would you assume biological processes are far more 'complex' at all? Electron orbit has CSI because it has to be placed in precise energy levels in order to avoid falling into the nucleus. The protons have to be of specific numbers in order to form an element. The protons also have to be bound by precise strong nuclear forces to ensure protons don't repel and disintegrate the nucleus. Let us not even go into quarks. ID is based on an individual's assumption of CSI in a process/ structure as is clear from the atom's example. If you think some structure has CSI, you concoct some bits higher than 500, if not, you show bits below 500. UB @ 245
Your point is on top of your head. A “bit” is a binary digit; a unit of storage in a medium of digital information. If you start there, you may figure out why your “argument” falls apart the moment you make it.
It doesn't matter what you call it. It is just the ratio of presumed probability of an event to 10^150. The presumption is what is under dispute. uncommon_avles
Nonlin, allow me to offer you some advice. Here is how you do this: "GP, thanks for the conversation. You've given me some things to think about. Take care" Upright BiPed
Nonlin.org: And you have not answered about the dice model. I have described a system that is deterministic, and generates a probability distribution. My point is that the probabilistic aspect of the system derives obviously from the deterministic rules that govern its behaviour, and from the great number of independent variables. Are you denying that? Do you really believe that the probabilistic aspect of the system derives from hidden quantum effects? Please, answer that. It's a simple question. gpuccio
Nonlin.org: I have discussed my ideas about free will here at UD many times. I don't know if it's worth starting a vast discussion about that here with you. Your position about science is really unacceptable. Of course empirical science is an approximation. Whoever knows a little about philosophy of science is well aware of that. Science is not about knowing for sure. Therefore, your criticism that we don't know for sure is completely irrelevant. Of course we don't know for sure. Indeed, all human cognition is not about knowing for sure, including your personal ideas. But for science that is particularly true. And so? Science is extremely useful and powerful, even if it "does not know for sure". Knowing for sure is not a real requirement, unless we have personality problems. Science is about the best explanation, and best explanations are a really useful, precious thing. I stick to them, and I have never had reasons to complain. Complete accuracy is, of course, irrelevant. It is a myth; it cannot exist in the real world. Science is based on measurements, because only measurements allow us to make quantitative theories. But, of course, no measurement is ever completely accurate. Error is implicit in measurement. But that is not a problem, because error can be measured too, and that makes measurements reliable in their appropriate context, if the error is small enough. Chaos theory is of no help to your position. From Wikipedia: "Chaos theory is a branch of mathematics focusing on the behavior of dynamical systems that are highly sensitive to initial conditions." Chaotic systems are, of course, completely deterministic. It's their special features, and the math that describes them, that makes them "sensitive" to initial conditions: it's because they are deterministic that small errors in the measurement of initial conditions make the outcome very different. They are practically unpredictable, but only because small errors in the initial measurements make the prediction very different, and we know very well, from the deterministic laws themselves, why that is so. They are the demonstration of determinism. The appropriate math perfectly understands and describes them. And you still misunderstand the double slit experiment, and all quantum mechanics. When a photon is absorbed, be it at the screen or on the retina of your rats, it is absorbed as an individual particle, not as a wave. IOWs, it has position and other properties of an observed particle. It is no longer a quantum wave function. Therefore, what's the problem with your scenario? Rats of course react to the photon if it is there, on their retina, and don't react if it is not there. This is classical physics. We are dealing with collapsed wave functions. There is no longer any probability, of any kind. The response of the rats is fully deterministic. However, if you have no new arguments, we can stop it here. As said, I hate to repeat the same things many times. Please, let me know if you are interested in discussing my model of free will. gpuccio
gpuccio@239, You’re just not getting the point on “height”, and it’s an irrelevant side argument anyway, so I’ll stop here. We study will power – see “Stanford marshmallow experiment”. How is that predictable behavior? You already replied but I don't agree. How do you separate free will from personality traits? And how can you prove personality traits constrain free will? On determinism: 1. Newtonian mechanics is an approximation – it doesn’t give you certainty and it doesn’t take into account the quantum effects. Yes, most of those quantum effects cancel each other for large objects, but your statements are 100% categorical and that’s not right as you just don’t know FOR SURE. 2. We can send out space vehicles because they autocorrect their trajectory (negative feedback) and they require finite precision. Also look-up positive feedback. For purely positive feedback systems the output will always be at one unpredictable extreme or another regardless of how precisely the input is controlled. 3. Look up chaos theory: “The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to COMPLETE ACCURACY.” 4. You dismissed my thought experiment without a proper explanation: “Say you have a double slit experiment and on the other side a number of scared rats that can see one photon (can they? Humans can) and take off in fear in different directions knocking down one domino set or another. That’s your quantum impact on macroscopic events.” You should review the double slit experiment. If you send a large number of photons, you will see the wave function on the screen. But if you send only a couple, you don’t know where they land on that wave function. And if you take a decision based on where these photons landed, you have your non-determinism. @236, @247 Right. I corrected the title. Thanks. Nonlin.org
Truth Will Set You Free: Thank you, you are too kind! :) I just try to keep my ideas clear about ID theory. All the merits belong to the theory itself! :) gpuccio
Your point is on top of your head.
Question-begging. ;) ET
gpuccio @ 243: That was good. A living legend you are. Truth Will Set You Free
Nonlin.org (and OLV): Excuse my intrusion, but OLV is right. The title from The Atlantic is: "How a Quarter of Cow DNA Came From Reptiles", which is correct. Your title is: "Did a Quarter of Cow DNA Came From Reptiles?" which is wrong. Therefore, OLV's friendly suggestion: "shouldn’t it be “come” instead of “came”?" is perfectly correct. gpuccio
Nonlin.org: The Stanford marshmallow experiment, and similar, are just experiments that measure personality traits. They are not measuring free will, but rather those already existing personality traits that constrain our free choices. There is no way for science to investigate free will. gpuccio
UA,
My point is quite simple
Your point is on top of your head. A "bit" is a binary digit; a unit of storage in a medium of digital information. If you start there, you may figure out why your "argument" falls apart the moment you make it. Upright BiPed
OLV@235 Look up "Stanford marshmallow experiment" OLV@236 That's the title of the OP from the Atlantic - see the link. Origenes@238 You stopped making any sense a long time ago and I am tired of your nonsense. Maybe some other time. Nonlin.org
Truth Will Set You Free: uncommon_avles has no point, and cannot defeat anything. See my comment #242. gpuccio
uncommon_avles at #232:
So isn’t this 500 bit threshold just a farce ?
No. Your "argument" is a farce. Of course, there is absolutely no functional information in electron clouds. They are simply determined by the laws of physics, in particular quantum mechanics. If you understood ID theory, you would know that all configurations that can be explained by necessity (known laws) are not valid specifications. That is quite clear in Dembski's explanatory filter. See also my OP about functional information: Functional information defined https://uncommondescent.com/intelligent-design/functional-information-defined/ The issue is debated in the discussion of that thread: #48, #57, #68, #135. Functional information is about the bits of specific information that are necessary to implement a function, and that are introduced into the object by setting "configurable switches", IOWs configurations that are possible in the search space, but not constrained by known laws (see also Abel). Electron clouds are not configurable switches: they are determined by laws, and they cannot implement any functional information. gpuccio
uncommon alves @ 232: Your point might defeat the 500-bit threshold argument but it wouldn't defeat the general argument for ID. Correct? Truth Will Set You Free
uncommon alves:
At atomic level magnification since everything has complex atomic structure and “impossible” rotation of electrons in different orbitals (probability cloud) around nucleus, “impossible” existence of leptons and gluons, obviously everything on earth will be above the ID’s 500 bit threshold.
So everything is an artifact? All deaths are murders? Really? ET
Nonlin.org: I don't want to repeat my arguments. You can think as you like. You are accumulating a series of meaningless statements: "Hopefully, no one would draw any medical conclusions based on height alone." Yes, and so? "The point was that it's a convention to measure the way we do." A convention? We measure human height according to its definition. "We may infer the same (or better) statistical deviation from say "extended arms height", total volume, weight (which is being measured) etc." Those are different variables. Weight is different from height. Deviations from expected weight have a different medical meaning. These are all issues well analyzed in Auxology, a scientific discipline that you seem to ignore. "I don't know that this is true either: "all science studies deterministic effects", since we do study free will." There is no way to study free will scientifically. It is a merely philosophical issue, well beyond the boundaries of science. Because it is connected to the transcendental nature of the conscious "I". Even human sciences, like psychology or sociology, cannot study free will. They can study human behaviour, but they really deal with those parts of human behaviour that are predictable, and therefore are not a model of free will. OK, I will make a last attempt at clarifying the issue of determinism and probability. I will give a very simple example, but please answer my questions precisely, because your position has remained vague and undefined, up to now. Let's go again to rolling dice. Just to avoid any distraction, let's avoid any intervention of conscious agents. So, we have an automated system, which can toss a die repeatedly. The system has many uncontrolled variables: the die falls from above, in variable positions, and then a spring tosses it in the air, where its trajectory is practically unpredictable. It could even bounce on the walls of the system, always practically unpredictable. The system makes 10000 tosses, and the results are recorded. In the end, the six possible outcomes are distributed very much in accord with the expected uniform distribution, confirming that such a distribution, with a probability of 0.1666... (1/6) for each independent outcome, describes the system very appropriately. Now, my point about determinism is very simple. The trajectory of each single toss is completely deterministic. Why do I say that? Because, of course, we know from Newtonian mechanics that the trajectories of physical objects can be accurately described by the laws of mechanics, with an extremely high degree of precision, considering the forces that act on the object, its initial position, the mass and shape of the object, the gravitational field, friction, and so on. If those deterministic laws were not so precise, we could not send our space vehicles anywhere. Those same laws that determine the trajectory of a space vehicle equally determine the different trajectories of our dice. Those trajectories are completely deterministic, each of them. And yes, 10000 different trajectories, determined by the same laws but with different values of the involved variables, generate a very correct random distribution of the outcomes: each outcome remains practically unpredictable, but the set of outcomes obeys precise probabilistic laws. So, what are you denying in the above reasoning? Are you denying that the movement of a physical object, and its trajectory, are determined by the laws of classical mechanics? Are you invoking quantum effects?
Are you saying that scientific laws are meaningless? Please, be clear and precise, as I have tried to be. Otherwise, we can stop our discussion here. gpuccio
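The automated dice-tossing system described above can be mimicked in a few lines. Here the "many uncontrolled variables" are stood in for by Python's pseudo-random generator, itself a deterministic algorithm, which is the very point under discussion; 10000 simulated tosses land close to the expected uniform 1/6 per face:

```python
import random
from collections import Counter

random.seed(2024)  # a fixed seed: the whole run is deterministic and reproducible
tosses = 10_000

counts = Counter(random.randint(1, 6) for _ in range(tosses))
for face in range(1, 7):
    print(face, counts[face], round(counts[face] / tosses, 3))
# Each relative frequency comes out near 1/6 ~ 0.167, matching the expected
# uniform distribution for a fair die, even though nothing here is non-deterministic.
```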
Nonlin @234
…“could be random” – … is downright dumb… especially in statistics. Sure, anything “could be random”.
Wrong again. Not anything can be random. Spaceships, jet airplanes, nuclear power plants, libraries full of science texts and novels and supercomputers running partial differential equation solving software do not come about by random processes. You do not understand the design inference. In fact, you are continually missing the whole point.
If “a process is not random”, that tells you something about “randomness”?!? Wow!
Sure, it tells us that the randomness of the process borders zero. Why are these simple matters so difficult for you to understand?
What the heck can “lawful regularities ‘in’ multiple observations” mean? Total nonsense.
It means that one grasps the regularities by comparing multiple observations. By doing so one can 'see' the effects of the law. This is pretty basic stuff Nonlin …
We will never know ‘gravity’ means that unless God tells us “this is gravity”, we can never be 100% sure.
How would you know? Reference please.
“non-random can be design or law”. What the heck is “law” and how is it different than ‘design’? Presumably not something coming from the politicians.
You are making less and less sense. Read #208 & this article by Paul Davies. Origenes
Off topic: The DNA Data Deluge Biology's Big Problem: There's Too Much Data to Handle Big biological impacts from big data Big Data: Astronomical or Genomical? OLV
Nonlin.org, I took a quick look at your interesting website. In the title of this article: http://nonlin.org/cow-reptiles/ shouldn't it be "come" instead of "came"? Thanks. OLV
Nonlin.org (233):
I don’t know that this is true either: “all science studies deterministic effects”, since we do study free will.
"we do study free will" 1. "we"? Who? 2. Does "we" = "science"? Thanks. OLV
Origenes@230 It's not "could be" but "could be random" - which is downright dumb... especially in statistics. Sure, anything "could be random". If "a process is not random", that tells you something about "randomness"?!? Wow! What the heck can "lawful regularities ‘in’ multiple observations" mean? Total nonsense. We will never know ‘gravity’ means that unless God tells us "this is gravity", we can never be 100% sure. Not that you can understand... "mixing up descriptions and/or observations with the thing itself" - no, you're mixing stuff you can confirm with stuff you just imagine you understand. "non-random can be design or law". What the heck is "law" and how is it different than 'design'? Presumably not something coming from the politicians. Nonlin.org
gpuccio@229 Hopefully, no one would draw any medical conclusions based on height alone. The point was that it's a convention to measure the way we do. We may infer the same (or better) statistical deviation from say "extended arms height", total volume, weight (which is being measured) etc. We're looking for statistical deviation, not for height specifically. You said: "all phenomena that we describe as random are completely deterministic". And your answers to "how would you know?" were inadequate. At a minimum you should be wary of such categorical claims. I don't know that this is true either: "all science studies deterministic effects", since we do study free will. None of your quotes negates my statement: "Last I checked, Randomness that comes from deterministic systems is called pseudo-random". You keep insisting but have no way to prove: "When we roll dice, the result is fully determined by the laws of mechanics and of classical physics. The same can be said for lottery drawings" …and "But those systems are deterministic". The wave function is just the mathematical probability function, so it cannot be deterministic any more than, say, a circle is "deterministic". But individual events are unpredictable. Nonlin.org
gpuccio @ 223
All my examples here are at the protein sequence level. If I understood your “point” about electrons, I would certainly answer.
My point is quite simple: ID's 500 bit threshold to determine if something is made by an agency depends on the magnification you use to examine a process/object. At lower magnification (when "complex" cell mechanisms were not known by scientists), a process like the combining of cells would not be above ID's 500 bit threshold. At atomic level magnification since everything has complex atomic structure and "impossible" rotation of electrons in different orbitals (probability cloud) around nucleus, "impossible" existence of leptons and gluons, obviously everything on earth will be above the ID's 500 bit threshold. So isn't this 500 bit threshold just a farce ? uncommon_avles
gpuccio (225):
All transcription regulations, indeed all forms of regulation, are most probably complex, and can in principle be analyzed by ID theory.
That's clear. Thanks. OLV
Nonlin @219, 227
For “could be” to have any value, you must attach a probability.
Surely not. For science “could be” is important information on its own.
It could still be random with almost zero probability. We generally take that as “not random”.
Indeed. So, an outcome can tell us, with a probability bordering on certainty, that a process is not random. This renders your claim that the outcome tells us “nothing” about randomness bunk.
Not descriptions of “laws” but descriptions of ‘observations’.
Wrong again. Descriptions of observations do not amount to a description of laws. One has to ‘see’ lawful regularities ‘in’ multiple observations, in order to describe the laws.
That’s a huge difference you keep missing. And yes, we call these descriptions of ‘observations’, “laws”. Get it?
No, because nonsense is incomprehensible. Descriptions of observations do not amount to a description of laws.
We will never know ‘gravity’ …
What does that even mean? Do you mean that we will never come up with an accurate description of gravity? How would you know?
… and one day a black swan shows up and we call that “black matter”. But maybe there is no “black matter” and in fact “the law” needs to change.
There you go again … mixing up descriptions and/or observations with the thing itself. We need to change our description of the law; we cannot change the law itself.
While non-random (design) can easily look random ….
FYI non-random does not equal design — non-random can be design or law. - - - - -
Nonlin (to GPuccio): You’re being exposed to ideas you’ve not seen before, so your negative reaction is totally understandable.
ROFL Origenes
Nonlin.org: I am rather accustomed to "ideas I’ve not seen before". Believe me. But they must gain my interest for their merits. I find merit in some of the things you say. But not in many others.
a) Inert objects also have heights.
They certainly have dimensions. Height, in my context, was obviously used for humans. It's not clear what is your point about objects having heights. I think there is no point at all.
b) To your examples, the information comes from statistical deviation, not from “height”. If your subject is a midget, pygmy, of different age, or a turkey, you won’t draw any conclusions from “height”.
Nonsense. The information comes from how much height deviates from a reference population. Of course the reference population must be appropriate. First of all, height is usually expressed for age groups. For children, you have very exact percentiles for age. If you are a "midget", whatever you mean, you could be affected by a specific disease. If you are a turkey, or a pygmy, you should use reference charts for turkeys or pygmies. That's how science is done. The pertinent field is called Auxology.
How can you say “everything is deterministic” AND “Free Will interferes with determinism”?
I have never said that everything is deterministic. What I have said is: "I insist so much on determinism because, of course, all science studies deterministic effects. Either directly, or in probabilistic form. Deterministic they are, just the same." Of course free will is not deterministic. But the systems studied by science, either by strict necessity or by probability distributions, are deterministic.
Last I checked, Randomness that comes from deterministic systems is called pseudo-random: https://www.random.org/.
You don't even understand the pages you quote. From that page:
In reality, most random numbers used in computer programs are pseudo-random, which means they are generated in a predictable fashion using a mathematical formula.
IOWs, most simple programs that generate random numbers in a computer do that through rather simple algorithms, and the result is predictable, even if it has some properties of a probabilistic distribution. That's why they are called "pseudo-random": not because the system is deterministic, but because the system is a rather simple algorithm, and it cannot really imitate the huge number of variables in a true natural deterministic system that generates a probabilistic distribution. Again from the page:
This is fine for many purposes, but it may not be random in the way you expect if you're used to dice rolls and lottery drawings. RANDOM.ORG offers true random numbers to anyone on the Internet. The randomness comes from atmospheric noise, which for many purposes is better than the pseudo-random number algorithms typically used in computer programs.
IOWs, they are using a more sophisticated way of generating random numbers, a way that is more similar to natural contexts. As clearly stated, dice rolls and lottery drawings are still the best models of random distributions. And, of course, they are fully deterministic systems. When we roll dice, the result is fully determined by the laws of mechanics and of classical physics. The same can be said for lottery drawings. The random effect is simply due to the fact that we cannot predict the result, or control it. Simply because there are too many variables. Like in human height. But those systems are deterministic. There is no effect of free will in them.
Also, “Wave function collapse” is only one interpretation in quantum mechanics.
I know, and so? If you are a fan of hidden variables, that is even worse for your position. However, nobody really doubts that the wave function is fully deterministic, which was my point. gpuccio
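[As a concrete side note on the pseudo-random point a few lines above, here is a minimal Python sketch of my own, not part of gpuccio's comment: a linear congruential generator is nothing but a deterministic formula, yet its output is routinely described by a uniform probability distribution, and the same seed reproduces exactly the same "random" sequence.]

# Minimal sketch (illustrative only): a deterministic pseudo-random generator.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random numbers in [0, 1) from a purely deterministic recurrence."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m          # same seed, same sequence, every time
        out.append(x / m)
    return out

print(lcg(42, 5) == lcg(42, 5))              # True: fully deterministic
print(sum(lcg(42, 100000)) / 100000)         # close to 0.5, as a uniform distribution predicts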
gpuccio@215, You’re being exposed to ideas you’ve not seen before, so your negative reaction is totally understandable. This thread is not about height so, to be brief: a) Inert objects also have heights. b) To your examples, the information comes from statistical deviation, not from “height”. If your subject is a midget, pygmy, of different age, or a turkey, you won’t draw any conclusions from “height”. We’re not reaching any conclusions on determinism. How can you say “everything is deterministic” AND “Free Will interferes with determinism”? I am not obsessed with anything – just trying to understand your point and what makes you so sure. Last I checked, Randomness that comes from deterministic systems is called pseudo-random: https://www.random.org/. Also, "Wave function collapse" is only one interpretation in quantum mechanics. Nonlin.org
Origens@213 Following my comment @219, what trips you is the asymmetry between random and non-random. While non-random (design) can easily look random, it's almost impossible for random to look non-random for anything larger than a few bits. Data communication systems do their best to output random-like data for protection and for communication efficiency. On the other hand, 'infinite monkey' experiments have and will always fail: https://en.wikipedia.org/wiki/Infinite_monkey_theorem kairosfocus@222 Did you mean to reply to someone else? While I might agree with your argument, I find it cumbersome, hence not persuasive. As I mentioned before, probabilities get extreme very fast so you don't need to distill the ocean (or, in this case, the universe). See my comment 219 and the whole discussion with Origens. I am hopeful he'll get it this time :) Nonlin.org
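[As a rough quantitative footnote to the infinite monkey point above, with toy numbers of my own rather than anything claimed in the comment: the per-attempt probability of reproducing even a short phrase by uniform random typing shrinks exponentially with its length.]

import math

# Minimal sketch: probability that random keystrokes from a 27-character
# alphabet (a-z plus space) reproduce a given phrase in one attempt.
def random_hit_probability(phrase, alphabet_size=27):
    return (1.0 / alphabet_size) ** len(phrase)

for phrase in ["cat", "to be or not to be"]:
    p = random_hit_probability(phrase)
    print(f"{phrase!r}: p = {p:.2e} (~{-math.log2(p):.0f} bits)")
# Even 18 characters are already below 1 in 10^25 per attempt.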
KF: Yes, thank you for #220 and #222: very clear, as usual. :) gpuccio
OLV: "Could those states be associated with ID theory too, even if they are not straightforwardly quantifiable (at least at this moment)?" Yes, of course. All transcription regulations, indeed all forms of regulation, are most probably complex, and can in principle be analyzed by ID theory. Of course, there are certainly problems in quantifying the functional information, and that's why I stick to protein sequences. gpuccio
kairosfocus (220,222): Very interesting comments. Thanks. OLV
uncommon_avles: All my examples here are at the protein sequence level. If I understood your "point" about electrons, I would certainly answer. gpuccio
Nonlin, actually, probability, plausibility, needle in haystack search challenge and linked themes take on importance long before we get to scales and values on probability models. For instance, we can readily show that 3-d functional organisation can be reduced to description languages, e.g. AutoCAD, etc., and in the end structured Y/N chains. We can then ponder a von Neumann replicator with a constructor that reads and effects the codes. From this, we can see that a coherent functional entity can be identified and we can play with the config space for components and for assembly. It is not hard to see that function comes in deeply isolated islands in the space of possibilities. A 500- to 1,000-bit string has 3.27*10^150 to 1.07*10^301 possibilities. It is easy to see that 10^57 atoms changing at 10^13 to 10^14 states/sec or 10^80 at similar rates (fast for organic type reactions) will only be able to sample very small fractions of such config spaces in 10^17 s, about the timeline since the big bang. To appeal to blind chance and/or mechanical necessity is then a futile strategy to explain FSCO/I -- an appeal to a long chain of statistical miracles. And already, just on D/RNA and protein synthesis where we have six basic bits per three-base codon and 4.32 bits per AA in a protein, we are utterly beyond the relevant threshold. There is just one empirically warranted, analytically plausible explanation for FSCO/I-rich systems: design. And the rhetorical gymnastics exerted to duck that only inadvertently underscore the strength of that design inference to best explanation on tested, reliable sign. Where, no, this is not an appeal to incredulity, it is inference to best explanation anchored on massively evident empirical facts and linked analysis as outlined. The selective hyperskepticism and turnabout projection so many objectors resort to in order to dodge an inference supported by a trillion-member observational base speak volumes. KF kairosfocus
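[For anyone who wants to check KF's arithmetic, here is a minimal Python sketch of my own, using only the figures quoted in the comment above: the size of 500- and 1,000-bit configuration spaces, and the fraction of the 500-bit space that 10^57 atoms could sample at 10^14 states per second over 10^17 seconds.]

from decimal import Decimal

# Minimal sketch reproducing the numbers quoted above.
space_500 = Decimal(2) ** 500      # ~3.27 * 10^150 configurations
space_1000 = Decimal(2) ** 1000    # ~1.07 * 10^301 configurations
max_samples = Decimal(10) ** (57 + 14 + 17)   # atoms * states/sec * seconds = 10^88

print(f"2^500  ~ {space_500:.3e}")
print(f"2^1000 ~ {space_1000:.3e}")
print(f"fraction of the 500-bit space sampled ~ {max_samples / space_500:.1e}")   # ~3e-63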
gpuccio (217):
DNA and chromatin states are certainly a major component of transcription regulation, and probably still the least understood.
Very interesting statement. Could those states be associated with ID theory too, even if they are not straightforwardly quantifiable (at least at this moment)? Thanks. OLV
UA, have you ever taken apart a fairly simple mechanical contrivance such as a fishing reel? Notice how it is made of arranged, coupled parts that work together to achieve function? Where, parts use materials, and so forth? Now, apply to the body plan and associated structures. The same obtains, and a logical first answer is to parts, wholes and to the assembly-coupling process. That is a commonplace, not hard to see; and yes there is fuzziness around the edges of concepts, scales, etc., but not enough to twist the point into the meaninglessness you seem to want to get to. Now, go to the cell, considered as a body plan in its own right. We now have organelles, molecules, membranes and so forth. Molecular nanotech parts. Much of this turns on AA sequence chains, folding and assembly, most famously with the flagellum. Parts, assemblies, wholes. Next ponder D/RNA and info coding, here we see parts, assembly, wholes that use framing techniques. Nobel Prize-level work identified codes and we have seen associated machinery that fits with the classic info system model, as, say, Yockey pointed out. All of this, despite fuzziness. KF kairosfocus
Origens@213
There is a process A which can be random or not — we do not know. Now the outcome of process A can tell us two things: 1. The outcome can be consistent with a process A being random, in which case it tells us that process A could be random. 2. The outcome can be inconsistent with a process A being random, in which case it tells us that process A cannot be random.
1. It could be or it may not be as shown. Or it could be a combination as in "only 1 to 6 outcomes w. uniform distribution - see dice". Therefore it doesn't tell you "it is". For "could be" to have any value, you must attach a probability. And you can't because any random sequence can also be generated non-randomly! There is no such thing as: "given this outcome, there's an X % probability the process is random". Check your statistics book! Get it? 2. It could still be random with almost zero probability. We generally take that as "not random".
As I explained to you already, Newton's laws are, in fact, descriptions of laws. They are not the laws 'an sich'. nonlin: … an example of overturned "laws". Overturned descriptions of laws. Gravity itself was not overturned.
Not descriptions of "laws" but descriptions of 'observations'. That's a huge difference you keep missing. And yes, we call these descriptions of 'observations', "laws". Get it? We will never know 'gravity' but we will have 'observations' consistent with 'gravity'... and one day a black swan shows up and we call that "black matter". But maybe there is no "black matter" and in fact "the law" needs to change. Nonlin.org
At what level of magnification does ID's 500-bit complexity test start and stop? By ID standards, everything is complex because at the atomic level obviously the electron can't exist in the probability cloud. It should have fallen into the nucleus, right? uncommon_avles
OLV at #214: Interesting. :) DNA and chromatin states are certainly a major component of transcription regulation, and probably still the least understood. gpuccio
To all: I have just posted a comment on the Ubiquitin thread. It is pertinent to the discussion here, too (see the part about E3 ligases in the OP). So, I paste it here too:
The fact that different E3 ligases can interact with the same substrate has been presented by our kind friends from the other side as evidence of their "promiscuity" and poor specificity. Of course, I have pointed to the simple fact, supported even by the authors of the paper they referred to, that different E3 ligases could bind the same substrate, but in different contexts. Therefore, that is a sign of extreme specificity, not of promiscuity. See comment #834 here. This is the relevant statement from the quoted paper:
Significant degrees of redundancy and multiplicity. Any particular substrate may be targeted by multiple E3 ligases at different sites, and a single E3 ligase may target multiple substrates under different conditions or in different cellular compartments. This drives a huge diversity in spatial and temporal control of ubiquitylation (reviewed by ref. [61]). Cellular context is an important consideration, as substrate–ligase pairs identified by biochemical methods may not be expressed or interact in the same sub-cellular compartment.
Well, here is a brand new paper that shows clearly how different E3 ligases target the same substrate at different steps of the cell cycle, and with different functional meaning. The "huge diversity in spatial and temporal control of ubiquitylation" is here clearly demonstrated. The HECT-type ubiquitin ligase Tom1 contributes to the turnover of Spo12, a component of the FEAR network, in G2/M phase. April 23, 2018 https://www.ncbi.nlm.nih.gov/pubmed/29683484
Abstract The ubiquitin-proteasome system plays a crucial role in cell cycle progression. A previous study suggested that Spo12, a component of the Cdc fourteen early anaphase release (FEAR) network, is targeted for degradation by the APC/CCdh1 complex in G1 phase. In the present study, we demonstrate that the Hect-type ubiquitin ligase Tom1 contributes to the turnover of Spo12 in G2/M phase. Co-immunoprecipitation analysis confirmed that Tom1 and Spo12 interact. Overexpression of Spo12 is cytotoxic in the absence of Tom1. Notably, Spo12 is degraded in S phase even in the absence of Tom1 and Cdh1, suggesting that an additional E3 ligase(s) also mediates Spo12 degradation. Together, we propose that several distinct degradation pathways control the level of Spo12 during the cell cycle.
So, we have:
a) One target: Spo12
b) Three different functional moments:
- G1 phase: control implemented by the APC/CCdh1 E3 ligase
- G2/M phase: control implemented by the Tom1 E3 ligase
- S phase: control probably implemented by additional E3 ligase(s)
One substrate, three different functional contexts, three different E3 ligases: this is specificity at its best! :)
gpuccio
Nonlin.org: Look, you are free to think as you like. But it is really difficult to discuss if your arguments are of the kind: "height is not even a proper biologic measure because height changes all the time, not just during development and because it is arbitrarily determined" !!! What do you mean? Height is not a proper biological measure??? We measure height in all kinds of populations (OK, in neonates we measure length! :) ). The values are gathered according to age, and means and percentiles and all kinds of statistical parameters can be derived from those values. If a child deviates strongly from the expected height curve, a disease can be suspected, and often demonstrated. Growth hormone deficiency is one of the most common cases. Isn't that deterministic? That height has a strong genetic component, of the polygenic type, is well known and well demonstrated. This is determinism, with an outcome that is influenced by many different variables (including those not genetic, which are as deterministic as the genetic ones), and therefore can best be described probabilistically. Which is my point. Or would you deny that serious nutritional deficiency can affect growth? Is that a quantum effect, in your opinion? I insist so much on determinism because, of course, all science studies deterministic effects. Either directly, or in probabilistic form. Deterministic they are, just the same. You seem obsessed with the strange idea that randomness is something different from determinism. That idea is completely wrong. Randomness is only a form of determinism, where we cannot analyze the variables in detail. Even if quantum probability were intrinsic, it is connected to determinism just the same: the probabilities of observed measures are dictated with extreme precision by the wave function, and the wave function is a completely deterministic reality. Your example of quantum effects on the macroscopic world is simply wrong. We all react to things that, in some way, derive from quantum effects that have "collapsed". A table is a repository of quantum effects that have collapsed, and therefore we perceive it as a solid and stable reality. Quantum reality is different from traditional physics at the level of the wave function, before we observe it or measure it. We, like the rats, can certainly react to quantum wave functions that have become measurable things, with specific directions and positions and so on. IOWs, they can be described by traditional, fully deterministic physics. Superconductivity, instead, is an example of a macroscopic system where quantum behaviours can be demonstrated. Again, you can believe as you like, but your ideas are, very simply, a denial of science and of all that we know. I will not follow you there. The only thing we seem to agree about is that: "Yes, consciousness (don't you mean Free Will?) would interfere with determinism for sure." That's absolutely true! :) gpuccio
Off topic: https://www.nature.com/articles/s41557-018-0046-3 OLV
nonlin: The outcome only tells you something about a random process if you already assume the process is random.
Of course not. What's wrong with you? There is a process A which can be random or not — we do not know. Now the outcome of process A can tell us two things: 1. The outcome can be consistent with a process A being random, in which case it tells us that process A could be random. 2. The outcome can be inconsistent with a process A being random, in which case it tells us that process A cannot be random. Either way, the outcome certainly tells us something about the randomness of process A — contrary to your false claim that it tells us nothing. ----
nonlin: Newton's laws of physics are wrong at the atomic level.
As I explained to you already, Newton's laws are, in fact, descriptions of laws. They are not the laws 'an sich'.
nonlin: ... an example of overturned "laws".
Overturned descriptions of laws. Gravity itself was not overturned. As GPuccio wrote:
Our laws are our way to describe those regularities. Let's say that they are human approximations of the "real law" that is acting in nature. The idea is that our human approximations can certainly change and be made more precise, but there is no reason to believe that "the real law" changes at all.
Origenes
gpuccio@209 Ok, so the Mendelian trait example is a classic. But height is not even a proper biologic measure because height changes all the time, not just during development and because it is arbitrarily determined. Just as well you can sort by vertical reach or eyes height (on or off tiptoes), etc. – these can be more important for survival than the standard measurement and will throw off your statistics. Also food/climate/parasites during development affect size at maturity. And when exactly is maturity? Again, you assume but do not prove (how could you?) that height is deterministic. Yes, it can be described statistically – I never claimed otherwise. So what? Sorry, I just don’t see the “determinism” claim being well supported. Why do you insist so much on determinism? How would you know “regarding macroscopic objects, the effects of quantum events are not detectable”? Say you have a double slit experiment and on the other side a number of scared rats that can see one photon (can they? Humans can) and take off in fear in different directions knocking down one domino set or another. That’s your quantum impact on macroscopic events. Yes, consciousness (don’t you mean Free Will?) would interfere with determinism for sure. Agree, we don't have to solve any of these today :) Nonlin.org
Origens@208 What are you talking about? Newton's laws of physics are wrong at the atomic level - an example of overturned "laws". And "Central dogma of molecular biology" has also been proven wrong. All "laws" are formulated by humans based on their limited knowledge at the time. Who knows what else we will discover next that will overturn "the current laws"? You know next to nothing about gravity, so how would you know there even is such thing as gravity? Your cosmologists might as well be astrologists. They're no good for anything other than making up ridiculous nonsensical stories for the uninformed. Here's an insider exposing their nonsense: https://backreaction.blogspot.com/ Nonlin.org
Origens@207 You did not "explain" anything as you are terribly confused. Per your example, no one will convict on the basis of: “cannot exclude the possibility of random production”. The outcome only tells you something about a random process if you already assume the process is random. And that is circular logic. This repeat conversation is getting boring. If you don't understand, so be it. I am done. Nonlin.org
Nonlin.org: Your fifth point: e) You say:
e) We’re assuming the designer is invisible and you can only determine design based on the output. Yes, something can be designed to look random, but I don’t think we’re concerned with that scenario. Fact is, we see a lot of patterns that are clearly nonrandom especially in biology. In fact I can’t think of anything in nature that can be attributed 100% to randomness. Even the atmospheric noise used as random generator has a deterministic component in its statistics and boundaries. Yes, I got your comment 112. You refer to chaos theory. That’s compatible with my “randomness is ONLY a theoretical concept”, so why do you disagree, and how would you prove it wrong? I don’t agree with “all phenomena that we describe as random are completely deterministic”. How would you know? And if quantum events are an exception, and of course all particles are subject to quantum events, then of course all systems are NOT deterministic.
I think that the answers to these points have already been given. You say: "I don't agree with "all phenomena that we describe as random are completely deterministic". How would you know?" We can observe all the time deterministic settings that produce outcomes that are well described by probability distributions, and cannot be described in any other way. Many deterministic variables that act independently generate probability distributions. Look, I will give you a simple example from genetics. If we have a recessive trait, such as beta thalassemia, and we have two parents who want to have a child and who are both heterozygous for the trait, and you are giving genetic counseling, you cannot tell them if their future child will be healthy, heterozygous for the trait, or affected by the disease. Nobody can know that before conception; indeed, a prenatal diagnosis will only be possible after some time. But you can tell the parents about probabilities. Their children, if they had a lot of them, would more or less be distributed according to a very simple probability distribution: 25% healthy, 50% heterozygous, 25% homozygous (with the disease). Because this is a Mendelian trait. Now, let's take a more complex trait: height. We know very well that there is a very verifiable relationship between the height of the parents and the height of the children. But this is not a Mendelian trait. It is a polygenic trait, one that is probably influenced by hundreds of different, independently transmitted genes. Moreover, non-genetic factors, like nutrition, or diseases, are also involved in the final outcome. All of these factors, the hundreds of independent genes, and non-genetic factors, all act deterministically to cause the height of each individual. But we cannot compute what the future height of an individual will be, because we don't know all those variables. Still, the influence of the parents' height can be factored in, and it gives some useful information. What happens when a variable like height is controlled by so many independent deterministic factors? It's interesting. What happens is that the variable, in a population, assumes a normal distribution. That's exactly what happens with height, and with many other similar biological variables. The normal distribution is just a mathematical object. And yet, it is the best tool that we have to describe, and analyze, this type of biological variable. These are just examples of how we know that deterministic systems can generate distributions of outcomes that are best described by probability distributions. Which is exactly my point. Your final argument is more interesting. You say: "And if quantum events are an exception, and of course all particles are subject to quantum events, then of course all systems are NOT deterministic." Not exactly. First of all, quantum realities are essentially deterministic, because the wave function, which is the essential component of quantum theory, is completely deterministic. But, of course, there is also a probabilistic component, what is usually called the "collapse of the wave function". These are all very controversial issues, as you probably know, at least in their interpretation. But my point is much simpler. It is true that everything that exists is, first of all, a quantum reality. But, when we describe macroscopic reality, quantum effects, in particular the probabilistic collapse of the wave function, can absolutely be ignored. Why? Because they are irrelevant, for all purposes.
At the level of classical physics, regarding macroscopic objects, the effects of quantum events are not detectable. The probabilistic effects become necessity laws, and those laws work with remarkable precision and efficiency. At the level of particles, instead, quantum effects are extremely important. There are a few exceptions to this rule: there are macroscopic systems where quantum effects are important, and perfectly detectable. See, for example, this Wikipedia page: Macroscopic quantum phenomena https://en.wikipedia.org/wiki/Macroscopic_quantum_phenomena But these are exceptions. The rule is that almost always quantum effects have no importance at the macroscopic level. And the behaviour of macroscopic objects is deterministic, for all practical purposes. The interventions of consciousness on matter are a possible, interesting exception. If, as I (and many others) believe, the interface between consciousness and matter is at the quantum level, that would allow the action of consciousness to modify matter without apparently interfering with gross determinism. That would also explain how design takes place. But that is another story! :) gpuccio
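[To make the genetics examples in the comment above more tangible, here is a minimal Python sketch of my own, with invented effect sizes: summing many small independent contributions yields an approximately normal "height", and a single recessive locus with two heterozygous parents yields the 25% / 50% / 25% ratios gpuccio mentions.]

import random
import statistics

random.seed(1)

# Polygenic sketch: many small additive contributions give a roughly normal trait.
def simulated_height(n_genes=200):
    genetic = sum(random.choice([-0.5, 0.5]) for _ in range(n_genes))
    environment = random.uniform(-3.0, 3.0)   # nutrition, disease, etc. (toy numbers)
    return 170.0 + genetic + environment      # centimetres, purely illustrative

heights = [simulated_height() for _ in range(10000)]
print(round(statistics.mean(heights), 1), round(statistics.stdev(heights), 1))

# Mendelian sketch: one recessive locus, both parents heterozygous.
def mutant_alleles_in_child():
    return sum(random.choice([0, 1]) for _ in range(2))   # one allele from each parent

counts = [0, 0, 0]
for _ in range(100000):
    counts[mutant_alleles_in_child()] += 1
print([round(c / 100000, 3) for c in counts])   # ~[0.25, 0.5, 0.25]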
Nonlin: Nature doesn’t come with laws – they are written by humans based on what we observe. In addition, we continue to rewrite “the laws” based on new observations.
I strongly disagree, nature does come with laws. Nonlin’s second sentence reveals that he simply fails to distinguish between the law ‘an sich’ and our description of it. Surely, we did not write gravity, but, obviously, we attempt to describe it. Moreover, there is no ‘bottom-up’ explanation of the laws, as theoretical physicist Paul Davies wrote:
Physical processes, however violent or complex, are thought to have absolutely no effect on the laws. There is thus a curious asymmetry: physical processes depend on laws but the laws do not depend on physical processes. Although this statement cannot be proved, it is widely accepted.
If A does not depend on B, then A cannot be explained by B. Put another way: if the laws are explained bottom-up by fermions and bosons, then we would expect the laws to be prone to change — different circumstances different laws. But this is not what we find.
Cosmologist Sean Carroll: There is a chain of explanations concerning things that happen in the universe, which ultimately reaches to the fundamental laws of nature and stops… at the end of the day the laws are what they are…
Translation: We have no explanation for the laws. They are truly ‘fundamental’. We don’t know where they come from, we don’t know where they are, we don’t know how they cause things to happen.
Cosmologist Joel Primack: What is it that makes the electrons continue to follow the laws?
Indeed, what power compels physical objects to follow the laws of nature?
Paul Davies: There has long been a tacit assumption that the laws of physics were somehow imprinted on the universe at the outset, and have remained immutable thereafter.
Origenes
nonlin: So you agree one “cannot say FOR SURE” (rephrased as “cannot exclude the possibility of random production”) but then claim I am wrong?
Yes, of course. Your claim that the outcome tells us NOTHING about the randomness of the production is 100% wrong. I have explained why this is so in #115, #137, #143 and #162. If solid evidence points to a murderer who is a female in her thirties with black hair and extraordinary surgical skills, but we “cannot say FOR SURE” who she is, then this is not the same, as you claim, as knowing NOTHING. Similarly, as explained, the outcome tells us a lot about the randomness of the production. Origenes
Nonlin.org: Your fourth point: d) You say:
d) See b) Also, “action of weather” is just an intermediate step, not the ultimate source of the patterns. And the point was: if you see a pattern you know for sure it’s not just random – there’s a regularity behind it which is indistinguishable from design. Even chaos theory patterns are the result of a designed system: https://en.wikipedia.org/wiki/Chaos_theory. I am pretty sure no one can explain why – based on the known laws of physics – dunes, hurricanes, galaxies, etc. have these precise patterns and not other patterns.
Chaotic systems are fully deterministic systems. From the Wikipedia page:
Small differences in initial conditions such as those due to rounding errors in numerical computation yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general.[2][3] This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved.[4] In other words, the deterministic nature of these systems does not make them predictable.[5][6] This behavior is known as deterministic chaos, or simply chaos.
IOWs, they are fully deterministic, and they are not designed, in the sense that I have discussed. Again, I am not discussing if the laws that determine their form, which are the same laws of nature that we know, are designed or not. But there can be no doubt that, once those laws are accepted as part of the system, the results are determined. The only peculiar property of chaotic systems is that they have some physical properties that make the math that describes them highly sensitive to small differences in initial conditions. That's why it is impossible to predict their behaviour, even if they are deterministic. And they cannot even be well described probabilistically. Special math is required to treat them satisfactorily. But they are, again, an example of necessity, not of design. Your last statement is rather vague. You say: "I am pretty sure no one can explain why – based on the known laws of physics – dunes, hurricanes, galaxies, etc. have these precise patterns and not other patterns." I am not as sure as you are. I think that the known laws of physics explain pretty well why dunes and hurricanes are formed. There is no reason to explain a specific contingent pattern, as there is no reason to explain some specific sequence of heads and tails that arises from one single run of coin tossing. Laws explain contingency in general, but of course we cannot "explain" each single contingent pattern, not because it is not possible, but because we do not have precise knowledge of all the values of the different variables. That's why we treat those deterministic systems probabilistically. As I have already explained to you at #112, also quoted at #167. I quote it again here: "Usual randomness just means that there is some system whose evolution is completely deterministic, but we can't really describe its evolution in terms of necessity, because there are too many variables, or we simply don't know everything that is implied. In some cases, such a system can be described with some success using an appropriate probability function. Probability functions are well-defined mathematical objects, which can be useful in describing some real systems. A probabilistic description is certainly less precise than a necessity description, but when the second is not available, the first is the best we can do. A lot of empirical science successfully uses probabilistic tools." Regarding galaxies, I would be cautious. Science certainly believes that they can be explained according to known laws, but as in all matters in astrophysics, nothing is really completely understood. There are models, of course, but models have a serious tendency to last only for a short time in astrophysics! :) So, I would be cautious. Anyway, if they can be explained by laws, we cannot infer design for them, even if we cannot explain each contingent form they assume. gpuccio
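[A standard textbook illustration of that "deterministic but unpredictable" behaviour, added here as my own sketch rather than anything from the comment, is the logistic map: the same fully deterministic rule, applied to two starting values that differ by one part in ten billion, produces trajectories that soon diverge completely.]

# Minimal sketch: the logistic map x -> r*x*(1-x) is fully deterministic,
# yet tiny differences in initial conditions are rapidly amplified.
def logistic_trajectory(x0, r=3.9, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)

for i in (0, 20, 40, 60):
    print(i, round(a[i], 6), round(b[i], 6), f"difference = {abs(a[i] - b[i]):.2e}")
# Same rule, no random element: the unpredictability comes only from
# sensitivity to initial conditions.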
OLV: Thank you! :) gpuccio
Nonlin.org: Your third point: c) You say:
The watch can be nonfunctional (as in watch sculptures) and will still show design. Again, the regularities of the shapes and materials is enough.
You are mentioning two different design inferences, for two different functions. Of course design can probably be inferred for regularities of shape and materials, and in that case all materials that exhibit some regularity in shape and material could be considered designed. But of course you have to define well what you mean by "regularity" here. But of course a watch is functional mainly because it measures time. Most of its functional complexity can be traced to measuring time: the specific choice of parts (there are many parts with regularities, but only some of them can be used to make a watch), and in particular their specific assemblage, and the tweaking of each part to be compatible with the others, and so on. There is no doubt that Paley intended this kind of functionality, when he chose a watch as his example of design inference. The inference for the watch based on its true function is much stronger than an inference for some well-formed part that could be used for some generic purpose. A gear is most certainly a designed object, but a watch has much greater functional information, if we define the correct function for it. gpuccio
gpuccio, excellent explanations! Serious textbook material. Thanks. OLV
So Allan Keith makes an ignorant claim in comment 177, gets called on it (178) and runs away. Typical but still pathetic ET
Nonlin.org: Your second point: b) You say:
See first paragraph above. Nature doesn’t come with laws – they are written by humans based on what we observe. In addition, we continue to rewrite “the laws” based on new observations. If you disagree with “design = regularity” you should explain how you differentiate between the two. Just because “once the laws exist, there is no need for any conscious intervention for them to operate”, doesn’t mean “the results of laws are not designed”. If I design and set up a widget making machine, you better believe those widgets have been designed by me – the creator of the machine that makes them under my laws.
I agree that we write the laws, but we write them to explain, with the best of our understanding, regularities that are really present in nature. So, maybe nature does not come with laws, but it certainly comes with regularities. Our laws are our way to describe those regularities. Let's say that they are human approximations of the "real law" that is acting in nature. The idea is that our human approximations can certainly change and be made more precise, but there is no reason to believe that "the real law" changes at all. Of course I disagree with "design = regularity". There is no necessary regularity in design. If I write a poem, it's not that I aim at regularity: I aim at meaning. How do I "differentiate between the two"? It's easy. If I see configurations that are fully explained by laws already existing in the system, or by an appropriate probability distribution which describes the system well, then I have no reasons to infer detectable design. But if I observe complex functional information, I infer design. As I have tried to explain, in ID (excluding the cosmological application) we are not asking if the laws that we know to operate in nature are designed. That is a cosmological issue. We are asking if we are observing an object that cannot be explained by those laws, or by any reasonable probabilistic result, and requires instead an explicit intervention by a conscious being in the system and in the allotted time window to emerge. If system S at time A already includes a computer that is running existing software, all that the software can compute will be a result that can be explained in that system without any design intervention after the initial state A is set. But if we observe that some configuration arises in the system that cannot be computed by the resources that are already part of the system, whether they are non-designed or designed resources (for example, the computer and the software), then we infer a design intervention in the system in the time window. For example, let's say that a Shakespeare sonnet emerges in the system during its transition from state A to state B, and that the computer included in the system at state A does not have the information to output that poem (it does not have it in memory anywhere, and of course it has no probability of deriving it from a computation). Then we infer design: someone had to introduce the FSI of the poem into the system, in the time window which goes from A to B. So, it's not relevant here if the laws of the universe are designed or not: if I observe in system S a functional result that cannot be explained by the laws of the universe, and whose probability in the system is infinitesimal, then I can infer design in the system. More in next post. gpuccio
Nonlin.org: The second part of your first point: 2) Complexity. We have seen that the FSI linked to a function is essentially the number of specific bits of information that are necessary to implement the explicitly defined function. This is, of course, a continuous variable, and it corresponds to -log2 of the target space/search space ratio. We can derive a binary variable from the continuous variable by a threshold, so that we have: complex functional information yes/no. What threshold? It's rather simple. The purpose of our reasoning is to ascertain that our functional configuration is so unlikely in the system that we can safely reject the null hypothesis that the observed effect (the function) can reasonably emerge in the system as a random result. Therefore, the threshold must be appropriate for the system. The property of the system that we have to consider is its probabilistic resources: IOWs, the number of attempts (configurations) that can be tried (reached) in the system, in the allotted time window. The binomial distribution is extremely useful to compute probabilities of success with repeated attempts. For example, if some result has a probability of 0.001 in a single attempt, the probability of observing at least one such result in 10 attempts will be slightly less than 1%, but the probability with 200 attempts is about 18%. Therefore, the probabilistic resources of the system are very important. For a biological system on our planet, I think that 200 bits is a very appropriate threshold. However, in a general discussion, I usually stick to 500 bits, because that threshold is good even if we consider the probabilistic resources of the whole universe throughout its entire existence (it's Dembski's UPB). More in next post. gpuccio
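[The arithmetic in that comment is easy to reproduce. Here is a minimal Python sketch, just restating gpuccio's own numbers, of the probability of at least one success in n independent attempts, P = 1 - (1 - p)^n.]

# Minimal sketch: probability of at least one success in n attempts, for p = 0.001.
def p_at_least_one(p, n):
    return 1.0 - (1.0 - p) ** n

for n in (10, 200):
    print(n, round(p_at_least_one(0.001, n), 4))
# 10 attempts  -> ~0.010 (slightly less than 1%)
# 200 attempts -> ~0.181 (about 18%)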
Nonlin.org: Now, your first point: a) You say:
Interesting, but your FSI definition seems dependent on a particular intelligent agent and a very specific function. And “complex” is just having FSI above the threshold? Hmm, what threshold, and what’s the point of all this? The answer is probably hidden somewhere in your many posts and comments, but that’s not very helpful.
OK, I will dig the answer up for you and offer it briefly here. Indeed, there are two different answers. 1) Yes, my definition of FSI does use "a particular intelligent agent and a very specific function". But it does not depend on them. Why? Because any observer can define any function, and FSI for that function can be measured objectively, once the function is objectively and explicitly defined. IOWs, I can measure FSI for any explicitly defined function that the object can implement. So, is there an objective FSI for the object? Of course not. But there is an objective FSI for each explicitly defined function that the object can implement. Now, please, consider the following point with great attention, because it is extremely important, and not so intuitive: If we can show even one single explicitly defined function that the object can implement as defined, and which is complex (see next point for that), we can safely infer that the object is designed. Excuse me if I insist: stop a moment and consider seriously the previous statement: it is, indeed, a very strong statement. And absolutely true. One single complex function implemented by an object is enough to infer design for it. Another way to say it is that non-designed objects cannot implement complex functions. Never. In next post, I will discuss the issue of complexity, and of the related threshold. gpuccio
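[Since the quantitative side of this is spelled out in the companion comment on complexity (FSI as -log2 of the target space/search space ratio), here is a minimal Python sketch with toy numbers of my own, not a real protein measurement.]

import math

# Minimal sketch: functional information for an explicitly defined function,
# FSI = -log2(target space / search space). Toy numbers, not a real measurement.
def fsi_bits(target_space, search_space):
    return -math.log2(target_space / search_space)

# Hypothetical example: a 100-AA sequence space (20^100 sequences) in which the
# defined function requires 35 specific residues, so the target space is 20^65.
search = 20 ** 100
target = 20 ** 65
print(round(fsi_bits(target, search), 1))   # ~151.3 bits for this toy function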
Nonlin.org: Thank you for your very reasonable comments at #197. I appreciate that you are really trying to understand my points. I am trying to do the same with yours. I think that we agree on many important things, and disagree on a few equally important things. But trying to understand each other's views is always a good thing. So, I will answer your points in detail, in more than one post, so that each point can be adequately discussed. In this first post, I will just offer two general, important premises. a) First premise. Our epistemologies are probably different, but not necessarily too different. I have had a look at your page about science, philosophy and religion, and I think that I would agree with most of the things you say there. So, I will just summarize here my views about that issue, and let you decide how different they are from yours. Science, philosophy and religion are three different modalities of human cognition. I fully agree that they are strongly connected, and that they are only different facets of our search for truth. But I think that they have important specificities that allow us to distinguish between them, and to recognize specific fields of application for each of them, even if of course partially overlapping. I strongly believe that the methods and procedures of each of the three types of cognition are linked to the specific field of application. When each of the three types of cognition correctly applies its procedures to its field, the results are of great value, and they help and support the other types of cognition. Instead, when one type of cognition tries to apply the procedures of another type of cognition to its specific field, the results are very bad, and they simply generate cognitive confusion. So, good science supports philosophy and religion, and good philosophy (or religion) does support science. But bad science creates problems for philosophy and religion, and bad philosophy (or religion) is a real problem for science. For example, philosophy of science and epistemology are extremely important for science, of course, but they are philosophical issues, not scientific issues. This "first premise", therefore, is a philosophical argument, not a scientific one. That said, I will point out here that ID theory is a scientific theory, not a philosophy. This is important for the discussion that will follow. b) Second premise. This point is fundamental for all the discussion, so I will try to be as clear as possible. ID theory, at least in its biological aspect, is not about design in general: it is about design detection. Therefore, ID is not really interested in design in general, it is only interested in detectable design. This is very important. The purpose of ID is not to detect all forms of design, or to exclude design. ID theory can do neither of those two things. It cannot detect all designs, because many designs are not detectable. It cannot ever exclude design, because we cannot exclude design that is undetectable. So, what is the purpose of ID, as applied for example to biological objects? It is to detect objects for which we can affirm a design origin, with reasonable empirical safety. I will exclude cosmological ID from the following discussion. Not because it is not a valid form of ID. It is, definitely. But because it is a reasoning about the design of the whole universe, and the things that I will say here do not apply to that scenario.
Cosmological ID is a very valid argument, and I do believe that it demonstrates, reasonably, that the whole universe is designed, especially in its very scientific form based on fine-tuning of the fundamental constants of the universe. But biological ID has very specific aspects that are different from those of cosmological ID. And it's those aspects that I am going to discuss. So, what do we mean by "design detection", if we exclude the cosmological problem? Design detection is a concept that applies to specific and well-defined physical systems. It is never a generic statement. We detect design in specific objects, in a specific and well-defined physical system. So, let's say that we define a physical system S, and we define two states of it, A and B, and the time window between A and B. So, we can say that the system S evolves from state A to state B in the time window t. Now, let's say that some configuration F arises in some object included in S during the time window t. IOWs, configuration F was not in A, and it is observed in B. Configuration F is not a generic configuration: it is a functional configuration. IOWs, the object with configuration F can implement a well-defined function. I will deal with these points in detail later. Now, the point is: if we assume that system S evolves by the laws of nature that we understand, and that its configurations obey some probability distribution that we can effectively use to describe the system and its evolution, is configuration F (and the associated functionality) likely enough in the appropriate probability distribution? Or is it an extremely unlikely result? IOWs, if we draw a binary partition in the space of all possible configurations that system S can reach according to known laws of nature, is the target space of all the configurations that implement F extremely small, let's say infinitesimal, if compared to the whole search space? What we are trying to assess here is not if F could be designed, or if it could be random. We are trying to assess if we can reasonably be sure that it is designed. IOWs, that some conscious intelligent and purposeful agent intervened in system S, during the time window t, to generate configuration F out of his conscious representations and by his intentional acts, changing the spontaneous evolution of system S according to known natural laws. To be more clear, let's say that our system S is a beach, with its connected events, like wind, rain and so on. From one day to the next, we observe that a small heap of sand appears on the beach, that was not there the day before. Now, let's say that we had placed a camera to observe the beach during the last 24 hours. Let's say that we see in the camera recording one of the two following things: a) The wind moves the sand, and at some point the heap is formed. b) At some point, a child comes to the beach, and builds the heap with his hands. Then he goes away. In this case, we have direct observation of the process, by the camera. We say that, in case a), the heap is not a designed object in that system: the wind is part of the system, and we have no reasons to believe that it is a conscious intelligent being. In case b), however, we are sure that the heap is designed, because a child is a conscious intelligent being. But, of course, in the cases where we apply ID theory we have no camera, and no direct observation of the process, or of the designer. So, if we just observe the heap, can we infer design?
Is the heap a configuration that exhibits detectable design? Of course not. The point is that the heap could have very reasonably arisen from the action of the wind, even if it was instead built by a child. Design, even if present, is not detectable. But let's say that the object we observe on day two is not a heap, but a Shakespeare poem written in the sand, by the shoreline. In this case, we do infer design, and correctly. Why? Because here we have a configuration F which has a very specific function (meaning), and is utterly unlikely as a result of waves or wind or any other component of system S. So, again, we are interested only in detectable design, not design in general. And functional complexity is the tool to detect design when it is detectable. More in next post. gpuccio
gpuccio@167 You can’t fight Darwinism while uncritically accepting their nonsensical myths. Since when is science separate from philosophy/religion?!? There’s a very good reason why Newton wrote “Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy)” and why current advanced degrees in science are called PhD. Furthermore, here is logical proof that this separation is artificial and nonsensical: http://nonlin.org/philosophy-religion-and-science/ Science = Observation + Assumptions, Facts Selection, Extrapolations, Interpretations… Assumptions, Facts Selection, Extrapolations, Interpretations… = Sum of Axiomatic Beliefs Sum of Axiomatic Beliefs = Religion …therefore, Science = Observation + Religion a) Interesting, but your FSI definition seems dependent on a particular intelligent agent and a very specific function. And “complex” is just having FSI above the threshold? Hmm, what threshold, and what’s the point of all this? The answer is probably hidden somewhere in your many posts and comments, but that’s not very helpful. See f) questions too. b) See first paragraph above. Nature doesn’t come with laws – they are written by humans based on what we observe. In addition, we continue to rewrite “the laws” based on new observations. If you disagree with “design = regularity” you should explain how you differentiate between the two. Just because “once the laws exist, there is no need for any conscious intervention for them to operate”, doesn’t mean “the results of laws are not designed”. If I design and set up a widget making machine, you better believe those widgets have been designed by me – the creator of the machine that makes them under my laws. c) The watch can be nonfunctional (as in watch sculptures) and will still show design. Again, the regularities of the shapes and materials is enough. d) See b) Also, “action of weather” is just an intermediate step, not the ultimate source of the patterns. And the point was: if you see a pattern you know for sure it’s not just random - there’s a regularity behind it which is indistinguishable from design. Even chaos theory patterns are the result of a designed system: https://en.wikipedia.org/wiki/Chaos_theory. I am pretty sure no one can explain why - based on the known laws of physics - dunes, hurricanes, galaxies, etc. have these precise patterns and not other patterns. e) We’re assuming the designer is invisible and you can only determine design based on the output. Yes, something can be designed to look random, but I don’t think we’re concerned with that scenario. Fact is, we see a lot of patterns that are clearly nonrandom especially in biology. In fact I can’t think of anything in nature that can be attributed 100% to randomness. Even the atmospheric noise used as random generator has a deterministic component in its statistics and boundaries. Yes, I got your comment 112. You refer to chaos theory. That’s compatible with my “randomness is ONLY a theoretical concept”, so why do you disagree, and how would you prove it wrong? I don’t agree with “all phenomena that we describe as random are completely deterministic”. How would you know? And if quantum events are an exception, and of course all particles are subject to quantum events, then of course all systems are NOT deterministic. f) Metal boats don’t sink, nor do the Gerridae insects, etc. Wood doesn’t sink either because of its designed structure. Shape matters! 
Your example fails not just because of these counterexamples but because it substitutes the intrinsic properties of materials for objects. But materials are not objects. You use "complex" and "functional" again, but (without reading all your posts) it's not clear what these words mean. "Functional" seems contingent on the needs of an agent and an arbitrary "search space" and arbitrary "good stones". Is "complex" = 500 bits = minus log (base 2) of (Target space/Search space)? Again, maybe you got something there, but you really need to do a better job clarifying and simplifying. g) Maybe after further clarifications for a) and f). My point was that if (as we both agree) P(random) =~ 0, then we generally stop talking in probabilistic terms. Example: we say "sunrise will be at 7am tomorrow" not "there's a 99.999(9)% chance the sunrise will be at 7am tomorrow" Nonlin.org
Origenes and DATCG: Here are a few papers about transcriptional regulation in prokaryotes. This is about the role of TFs (activators and repressors): An overview on transcriptional regulators in Streptomyces. https://www.ncbi.nlm.nih.gov/pubmed/26093238
Abstract: Streptomyces are Gram-positive microorganisms able to adapt and respond to different environmental conditions. It is the largest genus of Actinobacteria comprising over 900 species. During their lifetime, these microorganisms are able to differentiate, produce aerial mycelia and secondary metabolites. All of these processes are controlled by subtle and precise regulatory systems. Regulation at the transcriptional initiation level is probably the most common for metabolic adaptation in bacteria. In this mechanism, the major players are proteins named transcription factors (TFs), capable of binding DNA in order to repress or activate the transcription of specific genes. Some of the TFs exert their action just like activators or repressors, whereas others can function in both manners, depending on the target promoter. Generally, TFs achieve their effects by using one- or two-component systems, linking a specific type of environmental stimulus to a transcriptional response. After DNA sequencing, many streptomycetes have been found to have chromosomes ranging between 6 and 12Mb in size, with high GC content (around 70%). They encode for approximately 7000 to 10,000 genes, 50 to 100 pseudogenes and a large set (around 12% of the total chromosome) of regulatory genes, organized in networks, controlling gene expression in these bacteria. Among the sequenced streptomycetes reported up to now, the number of transcription factors ranges from 471 to 1101. Among these, 315 to 691 correspond to transcriptional regulators and 31 to 76 are sigma factors. The aim of this work is to give a state of the art overview on transcription factors in the genus Streptomyces.
This is extremely interesting, about the role of DNA loops: DNA Looping in Prokaryotes: Experimental and Theoretical Approaches https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3591992/
ABSTRACT: Transcriptional regulation is at the heart of biological functions such as adaptation to a changing environment or to new carbon sources. One of the mechanisms which has been found to modulate transcription, either positively (activation) or negatively (repression), involves the formation of DNA loops. A DNA loop occurs when a protein or a complex of proteins simultaneously binds to two different sites on DNA with looping out of the intervening DNA. This simple mechanism is central to the regulation of several operons in the genome of the bacterium Escherichia coli, like the lac operon, one of the paradigms of genetic regulation. The aim of this review is to gather and discuss concepts and ideas from experimental biology and theoretical physics concerning DNA looping in genetic regulation. We first describe experimental techniques designed to show the formation of a DNA loop. We then present the benefits that can or could be derived from a mechanism involving DNA looping. Some of these are already experimentally proven, but others are theoretical predictions and merit experimental investigation. Then, we try to identify other genetic systems that could be regulated by a DNA looping mechanism in the genome of Escherichia coli. We found many operons that, according to our set of criteria, have a good chance to be regulated with a DNA loop. Finally, we discuss the proposition recently made by both biologists and physicists that this mechanism could also act at the genomic scale and play a crucial role in the spatial organization of genomes.
And, finally, regulatory RNAs: When eukaryotes and prokaryotes look alike: the case of regulatory RNAs https://academic.oup.com/femsre/article-abstract/41/5/624/4080139?redirectedFrom=fulltext
Abstract: The discovery that all living entities express many RNAs beyond mRNAs, tRNAs and rRNAs has been a surprise in the past two decades. In fact, regulatory RNAs (regRNAs) are plentiful, and we report stunning parallels between their mechanisms and functions in prokaryotes and eukaryotes. For instance, prokaryotic CRISPR (clustered regularly interspaced short palindromic repeats) defense systems are functional analogs to eukaryotic RNA interference processes that preserve the cell against foreign nucleic acid elements. Regulatory RNAs shape the genome in many ways: by controlling mobile element transposition in both domains, via regulation of plasmid counts in prokaryotes, or by directing epigenetic modifications of DNA and associated proteins in eukaryotes. RegRNAs control gene expression extensively at transcriptional and post-transcriptional levels, with crucial roles in fine-tuning cell environmental responses, including intercellular interactions. Although the lengths, structures and outcomes of the regRNAs in all life kingdoms are disparate, they act through similar patterns: by guiding effectors to target molecules or by sequestering macromolecules to hamper their functions. In addition, their biogenesis processes have a lot in common. This unifying vision of regRNAs in all living cells from bacteria to humans points to the possibility of fruitful exchanges between fundamental and applied research in both domains.
Very interesting. :) gpuccio
Origenes: Yes, the title is probably misleading, but in general I would commend the article as a very good example of research. It asks the right questions, and gives the right answers. It is precise in the explanation of the problem, in the description of data and results, and in the discussion. As already said, I would have appreciated some more details about the procedures of population treatment and expansion, and maybe artificial selection, so that the number of total mutations and the mutation rate could be more explicitly considered. However, I will read the whole paper again with more time and attention to see if those data can be gathered from what they say. All considered, this is a very good paper. I think I will quote it often in my future discussions. As for TSZers, I think they get excited about the wrong things all the time! :) gpuccio
GPuccio DATCG @ “Random sequences rapidly evolve into de novo promoters”. This title is misleading, since, as it turns out, only a minor part of the random sequences (12 nucleotides; see #181) is being ‘evolved’. And I suspect that it is the title that got the participants at TSZ going. Perhaps it is important to point out a piece of folklore among scientists who are sympathetic to evolution, namely, choosing misleading titles. This long-standing tradition was set off by Darwin when he opted for the title “On the origin of species.” A small anecdote: more than a decade ago, when I was blissfully unaware of the existence of UD, I had convinced myself that there could not exist a step-by-step evolutionary explanation for snake fangs. First a venom gland and no delivery system, or vice versa? I was sure that this was impossible. Then, in 2008, I was hit with the following (misleading) titles:
Snake-Fang Evolution Mystery Solved -- "Major Surprise" (National Geographic)
&
Evolving Snake Fangs, by PZ Myers at Panda's Thumb. PZMyers: I keep saying this to everyone: if you want to understand the origin of novel morphological features in multicellular organisms, you have to look at their development.
Both articles are based on the paper “Evolutionary origin and development of snake fangs”, by F.J. Vonk et al, 2008. And, yes, this paper also carries a misleading title, simply because the paper is not about the evolutionary origin of snake fangs! Nowhere in the paper is there an attempt to describe a step-by-step evolutionary process by which a snake fang could evolve. So what is the paper about? Livescience.com explains in an article with the misleading title “How Snakes Got Their Fangs”.
To figure out how both types of snake fangs evolved from non-fanged species [<<< extremely misleading!], Vonk and his colleagues looked at fang development in 96 embryos from eight living snake species. The team's analyses showed that the front and rear fangs develop from a separate teeth-forming tissue at the back of the upper jaw. "The uncoupled rear part of the teeth-forming tissue evolved in close association with the venom gland, thereafter forming the fang-gland complex," Vonk said. "The uncoupling allowed this to happen, because the rear part of the teeth-forming tissue did not have constraints anymore from the front part."
Aha! That’s all folks. No step-by-step explanation of the snake venom system at all. Zero. Zip. The ‘explanation’ by Vonk is that “it was allowed” … - - - - - Ironically in the same article the writer of the paper got completely carried away:
"The snake venom system is one of the most advanced bioweapon systems in the natural world," said lead researcher Freek Vonk of Leiden University in the Netherlands. "There is not a comparable structure as advanced, as sophisticated, as for example a rattlesnake fang and venom gland."
Origenes
Gpuccio @191, Thanks for follow-up. Evolvability: The last sentence in your quote, referenced in the paper is interesting as I've always thought regulatory controls need to be in place, not after the fact catching up...
Furthermore, such stripped down promoters can serve as an evolutionary stepping-stone until regulation evolves, perhaps also by stepwise point mutations.
noting: "perhaps" ;-) I have to check back in later. Great OP as usual. I think your eplanations on TSS are clear, and shooting down the Deck of Cards fallacy, excellent in #859 of Ubiquitin OP. I'm not sure there's much more you can say to someone on that subject, if they cannot comprehend your well written explanation. But evidently it has to be continually shot down. . DATCG
And to point out, promoters upstream of the gene - intergenic regions - were once known as "JUNK" DNA and are still referenced as such. I think these intergenic regions will continue to turn up functional control elements. So, expressions can be turned on/off rapidly by a single element = functionally designed conditional control elements. These types of simple, conditional control elements are utilized constantly for programming variable designed outcomes based upon variable inputs. A single byte in a conditional table can kick off a different "expression" or subroutine and outcome. DATCG
DATCG and Origenes: I think the tradeoff they are referring to is between: a) Specificity and b) Evolvability. Keeping a low specificity at the level of the promoter makes it easier to evolve de novo promoters at new sites by RV, but at the same time makes the function less specific, and therefore allows for an easier generation of "undesired targets" that have to be eliminated:
The rapid rate at which new adaptive traits appear in nature is not always anticipated, and the mechanisms underlying this rapid pace are not always clear. As part of the effort to reveal such mechanisms [59], our study suggests that the transcription machinery was tuned to be “probably approximately correct” [60] as means to rapidly evolve de novo promoters. Setting a low threshold for functionality, on one hand, while eliminating the undesired off-target instances on the other hand, makes a system where new beneficial traits are highly accessible without enduring the low-specificity tradeoffs. Further work will be necessary to determine whether and how similar principles affect the regulatory network and protein–protein interaction network in bacteria as well as in higher organisms.
Moreover, the authors are not suggesting in any way that transcription regulation is not complex in bacteria. They are just saying that the gross role of the promoter as a site for binding of RNA polymerase and transcription initiation is rather simple. Regulation of transcription takes place at many other complex levels: TFs, enhancers, chromatin states, and so on. Moreover, they also recognize that the wildtype promoter probably has a more complex role in regulation, one not present in the randomly evolved form:
Despite generating expression levels similar to the WT lac promoter, the promoters evolved in our library are of very low complexity, as most of the activating mutations involved no additional factors but the two basic promoter motifs. Although the evolved promoters likely have no regulation, we hypothesize that such crude promoters might play an important role in the evolution of the transcriptional network, as newly activated genes do not necessarily require the regulated/induced expression in order to confer significant advantage. Furthermore, such stripped down promoters can serve as an evolutionary stepping-stone until regulation evolves, perhaps also by stepwise point mutations.
Emphasis mine. So, the complexity of the wildtype is certainly higher, because it has added regulatory functions. The crude promoters that evolved here can only implement the binding site for RNA polymerase. And please, note the "perhaps" in the last sentence. The authors are certainly not fools! :) gpuccio
Origenes, interesting paper, thanks for the link. I think the "trade-off" they speak of is flexibility, in that if it were too specific, gene expression might be too limited for bacteria? And then they state:
Further work will be necessary to determine whether this flexibility in transcription is also present in higher-organisms and in other recognition processes.
That might be an interesting look. My initial thought is that "flexibility in transcription" for gene expression is conditional and context dependent in higher organisms. More regulatory control limitations than in bacteria. I was thinking of color for polar bears. But a quick search turned up "different fur pigment" for rabbits. Himalayan rabbits! :) ha! Along with other examples of promoters and transcription... http://ib.bioninja.com.au/higher-level/topic-7-nucleic-acids/72-transcription-and-gene/gene-expression.html
Control Elements: The DNA sequences that regulatory proteins bind to are called control elements. Some control elements are located close to the promoter (proximal elements) while others are more distant (distal elements). Regulatory proteins typically bind to distal control elements, whereas transcription factors usually bind to proximal elements. Most genes have multiple control elements and hence gene expression is a tightly controlled and coordinated process.
The environment of a cell and of an organism has an impact on gene expression. Changes in the external or internal environment can result in changes to gene expression patterns. Chemical signals within the cell can trigger changes in levels of regulatory proteins or transcription factors in response to stimuli. This allows gene expression to change in response to alterations in intracellular and extracellular conditions.
A prescriptive adaptability: flexible when needed, but still under control.
There are a number of examples of organisms changing their gene expression patterns in response to environmental changes: Hydrangeas change colour depending on the pH of the soil (acidic soil = blue flower; alkaline soil = pink flower). The Himalayan rabbit produces a different fur pigment depending on the temperature (>35ºC = white fur; <30ºC = black fur). Humans produce different amounts of melanin (skin pigment) depending on light exposure. Certain species of fish, reptile and amphibian can even change gender in response to social cues (e.g. mate availability).
Maybe Gpuccio can add to this if I'm going in the wrong direction. Obviously, there's a difference from bacteria to flowers to a rabbit's survival in the cold. But I'm thinking that in eukaryotes, the regulatory system is more tightly controlled than in bacteria? DATCG
Gpuccio @167, You are to be commended for your patience in explanation once again. DATCG
OLV @ 153. Yep, saw that citing, thanks :) DATCG
Origenes@162 Nonlin: “It’s consistent” means absolutely nothing. Fact is, you cannot say FOR SURE.
This is where you go wrong. Given a large enough set, if some result is consistent with a random production, then this obviously MEANS (yes it does mean something) that we cannot exclude the possibility of random production — even though it does not provide a basis to be sure. On the other hand, given a large enough set, if a result is not consistent with a random production, then that also tells us something. Question to Nonlin: what can that be?
So you agree one "cannot say FOR SURE" (rephrased as "cannot exclude the possibility of random production") but then claim I am wrong? Where's your Logic, amigo? Your problem is that Darwinistas illogically claim "randomness" left and right when in fact one "cannot say FOR SURE" is the most you should claim. And when you see a pattern such as all biological patterns, you can calculate the probability of that pattern if it were random. Guess what? Those probabilities are almost always zero indicating non-randomness (that's why this OP discussing probabilities makes zero sense). Is anyone surprised that DNA / kidney / flowers / etc. shape is non-random? How can they "arise" from "random" mutations? Total nonsense. Truth Will Set You FreeApril @ 166 You can't be taken seriously with unsubstantiated claims. Nonlin.org
Origenes:
This was most helpful. As you have often argued in your OP’s, this is well within the reach of RV & NS.
Yes, it is! As I have argued at #185, it's much easier than penicillin resistance. Functions linked to nucleotide sequences are in base four. Therefore the combinatorics is less extreme, as related to sequence length. 12 nucleotides is a search space of 24 bits. 12 AAs is a search space of 52 bits. That's a huge difference! By the way, have you noticed that the paper is about computing, by experiment and math, the probabilities of generating one specific functional target, even if a simple one? Is the paper fatally flawed as an example of TSS fallacy? Didn't the reviewers understand that? :) gpuccio
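A minimal sketch of that search-space arithmetic, purely as an illustration (it simply restates the log2 calculation above, assuming a 4-letter nucleotide alphabet and a 20-letter amino acid alphabet):

import math

def search_space_bits(alphabet_size, length):
    # The search space holds alphabet_size^length sequences;
    # in bits, that is length * log2(alphabet_size).
    return length * math.log2(alphabet_size)

print(search_space_bits(4, 12))    # 12 nucleotides: 24.0 bits
print(search_space_bits(20, 12))   # 12 amino acids: ~51.9 bits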
OLV:
Is the promoter paper about what is called microevolution?
Yes, definitely. A functional transition of 1 nucleotide + 1 nucleotide optimization is much simpler than, say, penicillin resistance, where you need 1 AA + a few AA optimization. One AA is 4.3 bits of information, while one nucleotide is only 2 bits. So, this is really a simple transition.
Can we say that any random sequence contains certain amount of the so-called Shannon information?
Yes, of course. That's why I say that they are implicitly speaking of functional information: the set of functional sequences that will implement the function of providing a promoter. This is pure ID theory. gpuccio
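As a rough illustration of the distinction (not taken from the paper; the target count of 1 in 4^12 is a deliberately simplified assumption): Shannon information counts the bits needed to specify any particular sequence of a given length, while functional information is the negative log2 of the fraction of sequences that implement the defined function.

import math

def shannon_bits(length, alphabet=4):
    # Bits needed to specify one particular sequence of this length:
    # every random sequence "carries" this much Shannon information.
    return length * math.log2(alphabet)

def functional_bits(target_count, search_space):
    # Functional information: -log2 of the ratio of sequences that
    # implement the defined function to all possible sequences.
    return -math.log2(target_count / search_space)

print(shannon_bits(103))              # ~206 bits for a 103-base random sequence
print(functional_bits(1, 4 ** 12))    # 24 bits if only 1 of 4^12 consensus states works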
GPuccio @173 @181 Thank you for your comments on the paper Random sequences rapidly evolve into de novo promoters, by A.H. Yona et al.
GP: The only point that could be misleading is the frequent reference to “a sequence space of ~100 bases”. This is technically correct, because they used sequences 103 nucleotides long, but it is misleading, because the functionally relevant sequences are much shorter, corresponding to the consensus sequence, essentially 6 + 6 nucleotides, and maybe a few more at other positions. So, the real sequence space is essentially the sequence space of 12 nucleotides, 4^12, 16.8 million states, 24 bits.
This was most helpful. As you have often argued in your OP's, this is well within the reach of RV & NS. Origenes
gpuccio, Regarding the last piece of text you quoted: “...they represent the non-functional sequence space, without biases, as they contain no information.” Can we say that any random sequence contains a certain amount of the so-called Shannon information? Does the expression “the non-functional” in the quoted text serve as an implicit qualifier to the last word, “information”, in the same sentence? Thank you. Oscar Luis OLV
gpuccio, Is the promoter paper about what is called microevolution? Thank you. Óscar Luis OLV
Origenes: I would like to point at a few passages from the promoter paper, to show how the authors are very correctly using and applying the main concepts of ID theory.
To systematically study the evolution of de novo promoters, one should start from non-functional sequences.
For such genomes, random sequences can serve as a null model when testing for functionality without introducing biases or confounding factors due to deviating from the natural GC content of the studied genome.
The number of mutations needed in order to change a random sequence into a functional promoter is not clear. Especially in experimental and quantitative terms, the question is how many mutations does one need in order to make a functional promoter, starting from a random sequence of a specific length? This question can be addressed directly by experimental evolution.
Substantial promoter activity can typically be achieved by a single mutation in a 100-base sequence, and can be further increased in a stepwise manner by additional mutations that improve similarity to canonical promoter elements. We therefore find a remarkable flexibility in the transcription network on the one hand, and a tradeoff of low specificity on the other hand, with interesting implications for the design principles of genome evolution.
Emphasis mine. The emphasis on the "low specificity" is important. This is a low specificity result, and it certainly gives some flexibility. But flexibility requires control. It is, as correctly stated, a "tradeoff":
Tuning the promoter recognition machinery to such a low specificity so that one mutation is often sufficient to induce substantial expression is crucial for the ability to evolve de novo promoters. If two or more mutations were needed in order to create a promoter, cells would face a much greater fitness-landscape barrier that would drastically reduce their ability to evolve the promoters de novo.
And:
Setting a low threshold for functionality, on one hand, while eliminating the undesired off-target instances on the other hand, makes a system where new beneficial traits are highly accessible without enduring the low-specificity tradeoffs.
Emphasis mine.
To broadly represent the non-functional sequence space, we used random sequences (generated by a computer) with equal probabilities for all four bases
This experimental observation was therefore consistent with the expectation that a random sequence is unlikely to be a functional promoter.
This is correct. The evolution of the promoter required at least one specific mutation, and that required many passages, and therefore some probabilistic resources (which, unfortunately, cannot be exactly computed from the data in the paper: this is the only minor flaw in it, IMO). IOWs, they had to test quite a number of states before finding the specific functional one-nucleotide mutations. As said before, it is a result with some functional complexity, but not very much at all. Perfectly in the easy range of a bacterial system. The basic function has a 2-bit complexity, but even that simple result requires some probabilistic resources. We must remember that the rate of mutations is about 10^-9 per replication per site.
Each mutation was inserted back into its relevant ancestral strain, thus confirming that the evolved ability to utilize lactose is due to the observed mutations.
A very important control, that is rarely found in similar experiments. Very good! :)
Next, we aimed to determine the mechanism by which these mutations induced de novo expression from a random sequence.
Correct. Understanding the mechanisms is fundamental! :)
The lab evolution results from RandSeq1, 2, and 3 indicate that de novo promoters are highly accessible evolutionarily, as a single mutation created a promoter motif that enabled growth on lactose, suggesting that a sequence space of ~100 bases might be sufficient for evolution to find an active promoter with one mutational step.
This is perfect ID logic. This is the way to test hypotheses about functional information and functional landscapes. The only point that could be misleading is the frequent reference to "a sequence space of ~100 bases". This is technically correct, because they used sequences 103 nucleotides long, but it is misleading, because the functionally relevant sequences are much shorter, corresponding to the consensus sequence, essentially 6 + 6 nucleotides, and maybe a few more at other positions. So, the real sequence space is essentially the sequence space of 12 nucleotides, 4^12, 16.8 million states, 24 bits. We must remember that the essential function of this consensus sequence is to allow the binding of the RNA polymerase.
These random sequences (generated in Matlab) were used as starting sequences for promoter evolution because they represent the non-functional sequence space, without biases, as they contain no information.
Emphasis mine. This is very interesting. Here, they are using the concept of functional information, without even specifying it! Of course they are speaking of functional information, when they say: "they contain no information". :) gpuccio
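As a back-of-the-envelope illustration of the probabilistic resources mentioned above (a deliberate simplification that uses only the 10^-9 per site per replication figure; the population size of 10^9 cells is a hypothetical value, not a number from the paper):

# Expected waiting for one specific single-nucleotide substitution,
# under a crude model where each replication mutates a given site
# with probability 1e-9 and a specific substitution is 1/3 of those.
per_site_rate = 1e-9
specific_rate = per_site_rate / 3          # one of the three possible changes

population = 1e9                           # hypothetical bacterial population
expected_replications = 1 / specific_rate  # ~3e9 replications on average
expected_generations = expected_replications / population

print(expected_replications)   # ~3.0e9
print(expected_generations)    # ~3 generations for a 1e9-cell population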
AK @ 177: I assumed we were speaking of IC structures, because that's the mode of the thread. If that assumption wasn't shared, please excuse me. When producing IC structures out of already functioning components, you run out of those single steps. Obviously, if the whole of the system can be parted neatly into independently useful components, it's not IC. Every bit of the structure that can't operate independently must then be produced and/or the components must be modified to interface and operate within the greater system, as well as configured - positioned and sequenced in construction order, etc. within a single step. LocalMinimum
The only reason probability arguments are used is because there isn't anything else. Meaning there aren't any experiments to call on. There isn't even a methodology to test the claims. What I don't understand is why evos don't think that is a problem. ET
Allan:
For example, the difference between an injectisome and a flagellum is not that great.
They are both IC. And they both require different command and control. How many specific mutations would it take to evolve a flagellum for your injectisome? Do you have any idea if such a transformation can be had via genetic changes? ET
LocalMinimum,
When producing a structure out of functional, selected for substructures, you have to modify the substructures from independent functionality to properly networked dependent functionality, as well as develop the structure of the intersection. Naturally, this has to be done in a single step, otherwise it will be selected against by the loss of the selected for functions (critically if the loss of function is fatal).
Not if the individual steps are equally or more fit than the original structures. For example, the difference between an injectisome and a flagellum is not that great. The change would require the loss of function as an injectisome but the function of a flagellum may more than offset this loss of function. Allan Keith
AK @ 160:
But the arguments often used by ID are to look at an extant structure and calculate the probability of it arising randomly, as if it arose in one step as a fully formed structure.
When producing a structure out of functional, selected for substructures, you have to modify the substructures from independent functionality to properly networked dependent functionality, as well as develop the structure of the intersection. Naturally, this has to be done in a single step, otherwise it will be selected against by the loss of the selected for functions (critically if the loss of function is fatal). So, you essentially need to not only make a new set of structures out of old, you have to make the previously unselected for "glue" networking structure as well, which itself is going to be even more complex if you're making all the pre-existing structures "plug-n-play" biology. And make them all land in the right places, right orientations, etc. (configuring your bag of parts isn't free, either) Relying on previously non-functional, unselected for components of the composite system lying around is no better than expecting it to arise all at once (if you don't constrain the range of the random mutation function, which evolutionists don't, because it helps their case not to and could even constrain them out of a job). You're still expecting to have the right n number of bits worth of structure on hand just because. Also, the chance for continuity of a self-replicating mechanism is calculated the same as the chance for the discontinuity of that self-replicating mechanism in the direction of increasing functionality? Well, multiplied with some coefficient if you're just assuming upward evolution happens, and at a rate you can draw a line through. But when upward evolution happening is at issue, your argument is circular. LocalMinimum
Allan Keith at #160:
But they are still “targets”. As such, they are pre-defined whether you admit it or not.
They are "pre-existing", not "pre-defined". And I have not only admitted that idea: I have definitely defended it! The existence of complex configurations that allow the existence of ATP synthase is a consequence of biochemical laws. In principle, such a mcachine couls simply be impossible. But that is not the case. It can be built. But of course you need a lot of specific crafting to get it. Not all machines are possible. We can conceive of a machine that allows us to go back in time. Maybe it is possible, maybe it isn't. But if we observe one, working, then we know that it is possible. We know that it is a real target. And if we see that it needs complex hardware to work, we know that it is a real target that is functionally complex. You say:
Scenario two is obviously a fallacy. Scenario one is not. But the arguments often used by ID are to look at an extant structure and calculate the probability of it arising randomly, as if it arose in one step as a fully formed structure. Which no biologist is suggesting.
And neither is any IDist suggesting that! Again the same error. It is not important at all if it arose in one step or in 1000 successive steps. The point is that, if the function is complex, it will not work until its specific bits are all there. They can arrive there in steps or not, however you like. The simple point is that, until they are there, the function is not there. And therefore, it cannot be selected. You cannot select something that does not exist. You say:
Looking at the flagellum and trying to calculate the probability of it arising through known evolutionary mechanisms from some ancient starting point is improper use of probability.
Only if you do that improperly. In principle, it's perfectly feasible. And however, the flagellum is about IC, and the computation is more difficult. Let's stick to the alpha and beta chains of ATP synthase, OK? You say:
A proper use of probability would be to start from the same starting point and calculate the probability of any structure of equal complexity evolving through known evolutionary mechanisms.
First, you always forget: any structure of equal complexity that implements a naturally selectable function. I have discussed this objection both in the OP and in the thread. Please, see KF's comment at #89, and my comment at #90 (first part), that I quote here for your convenience:
You are perfectly right: the important point is not the absence of other needles (that in principle cannot be excluded, and in many cases can be proved), but the fact that they are still needles in a haystack. IOWs, the existence of alternative complex solutions does not have any relevant effect on the computation of the improbability of one individual needle. It’s the functional specificity of each individual needle that counts. That’s what I have tried to argue with my discourse about time measuring devices. Evoking only ridiculous answers from DNA_Jock, who probably really believes that the existence of water clocks and candle clocks makes the design inference for a watch a TSS fallacy! Indeed, he seems so certain that we are “painting” the function of measuring time around the random object that is our watch! Any solution that is highly specific is designed. We have absolutely no counter-examples in the whole known universe.
IOWs, to infer design for a watch, there is no need at all to consider all other possible machines of similar complexity that could exist, not even those that could in principle measure time: we just need to measure the complexity of the watch, and recognize that it is simply too big to be compatible with any random origin. We are not discussing small improbabilities here. We are discussing extreme improbabilities, beyond any possible doubt. You say:
Frankly, I have no idea how this probability could be calculated, but that is what would have to be done to conclude that something we see today is too improbable to have happened in an ‘undirected’ fashion.
Wrong. See above. You don't understand that we are discussing an empirical inference here, an inference to the best explanation, which is completely warranted in this case. There is no need to compute the exact probability. We just have to realize that the design explanation is the best explanation, and that the idea that an unimaginable number of complex solutions exist, against any reasonable or empirical support for the idea itself, is just ad hoc reasoning motivated by faith and ideology. See also my comment #122 here. You say:
An analogy would be to start at the starting point of your ancient Roman ancestor (I am assuming that you are Italian).
I am. You say:
From that point, what is the probability that you, with your unique DNA sequence, would exist on April 20, 2018? Given all of the things that would have had to happen over the thousands of years for this to occur, the probability would be astronomically small. Yet, here you are. A proper use of probability, more akin to what happens with evolution, would be to start from the same starting point and estimate the probability that your ancestor would have a living descendant on April 20, 2018. This probability, obviously, is much higher. Not 1, but close to it.
No! Not the infamous deck of cards fallacy again! See #35, #52, and especially #859 in the Ubiquitin thread, in answer to you! This is not "a proper use of probability". It's a silly use of probability, and the "argument" is a silly fallacy. If you are sincere, please consider carefully my arguments in my previous answer to you about that in the Ubiquitin thread. If you are only joking, do as you like. gpuccio
uncommon_avles: "Then how does it happen? I mean how would an external entity do it?" In the same way tha we design our artifacts: the conscious intelligent designer inputs specific configurations into the object. The main possible mechanism is guided variation. Transposons, IMO, are a good candidate as design tools. gpuccio
Origenes at #161: It is a very good paper. If you read it carefully, you will see that it uses all the ID concepts, and it uses them correctly. You say: "Of course, ID-proponents, like Gpuccio and Behe, have pointed out, often, that such is within the reach of natural selection. So, the paper may not be relevant to ID." It is relevant, because it shows that ID concepts are correct, and that they can be applied correctly in experiments. Of course, the results are in perfect accord with ID theory. Simple results, those in the range of RV + NS, can definitely be achieved by RV + NS. This is a very important point. Another very good point is that the authors reach well-described results in their experiment, and then they compare those results to a good computational analysis of the search space and target space. And the two kinds of results are perfectly compatible. I like this procedure very much. Of course, the result here is very simple, from the point of view of functional complexity. The "new" function (again, a function retrieval, the retrieval of the promoter) has a complexity of 2 bits (a single nucleotide substitution). And the optimized function is reached by one additional 2-bit mutation. You say: "What baffles me is that “10% of random sequences can serve as active promoters” and for many others (60%) this function can “typically be achieved by a single mutation”. How can this be? Are promoters so simple that any ol’ sequence will do?" Yes, they are, according to these results. However, it was already known that promoters are rather simple, even if this is probably the first accurate measure of how simple they are. However, as explained very well in the paper, the important functional element is the correct balance between useful and deleterious promoters, the "trade-off" well discussed in the paper. You say: "Well, not according to the same paper: “The Escherichia coli promoter represents a complex sequence feature as it consists of different elements that act together to transcribe a gene. The RNA polymerase requires particular sequence elements for binding, and additional features, such as transcription factors and small ligands can further affect its activity.” Functional stuff, therefore, but how do we square this with a “short mutational distance” from any random sequence of 103 bases long?" It's not so difficult. If you read the initial description of the promoter sequence, you will see that the functional nucleotides are only a few. It's "complex", but not so much. Moreover, we are discussing nucleotides here, not AAs. The alphabet is base four. Each position is 2 bits. Moreover, even those functional elements are not extremely specific, as shown by the results. Therefore, the whole functional complexity of a single promoter of this type is probably very low, maybe about 20 bits or less. That is completely in the range of RV, and the following optimization by 1 additional mutation is of course completely in the range of NS. An important question here is: how do these results relate to the often-quoted paper "Waiting for two mutations", which was about a similar problem? Even if I have not done the math in detail, I think they are in perfect accord. The main difference is that the "Waiting for two mutations" paper is about what should happen in a natural setting. It models not only the probability for the mutations, but also the probabilities of fixation. That is very important. Please, see also my comment #144, the final part. In this paper, fixation is not considered.
They only look at the appearance, and further optimization, of the function. Which is perfectly fine, given the purposes of the paper. But it also explains the differences with the other paper. gpuccio
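A crude toy comparison of the two situations discussed above (illustrative only: it ignores fixation, selection and standing variation, which the "Waiting for two mutations" paper models explicitly, and it assumes the double mutant must arise in a single replication event):

mu = 1e-9      # per-site mutation rate per replication
N = 1e9        # hypothetical population size per generation

# One specific single-nucleotide change: roughly one expected
# occurrence per 1/(N*mu) generations under this crude model.
one_mutation_generations = 1 / (N * mu)

# Two specific changes appearing together in one replication event:
# probability mu^2 per replication, so roughly 1/(N*mu^2) generations.
two_mutations_generations = 1 / (N * mu ** 2)

print(one_mutation_generations)   # ~1 generation
print(two_mutations_generations)  # ~1e9 generations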
uncommon alves:
A Slime Mold doesn’t have brain or nervous system thus it is entirely controlled by environmental factors.
That doesn't follow. A slime mold is made up of organisms- each an intelligent agency in their own right. They sense their environment and act accordingly. ET
uncommon alves:
A complex cell structure would have shown signs of complex organisms million of years ago !
Cuz you say so?
You need to get over this idea of ID agent creating everything by frontloading data and processes, if you want to understand science.
You don't understand science and you don't understand front-loading
Please read the Nature’s link which was given earlier to understand how the slime mold worked.
Evolutionism cannot account for the existence of slime molds. ET
ET @ 159 A complex cell structure would have shown signs of complex organisms millions of years ago! We would be the dumbest organism if devolution happened. You need to get over this idea of an ID agent creating everything by frontloading data and processes, if you want to understand science.
Where did you get the slime mold from- Walmart?... Cuz you say so? Really?
No, because REAL scientists carried out experiments, instead of just speculating about agents scurrying around and hurrying up 'complex processes'. Please read the Nature link which was given earlier to understand how the slime mold worked. gpuccio @ 155 No offence but this is exactly what I was referring to when I said 'pseudo science'. A slime mold doesn't have a brain or nervous system, thus it is entirely controlled by environmental factors. The biological processes are dependent on the environment. The metrics presented by you don't make it any more 'intelligent' than it is. By putting up a 'bits' metric you are just trying to project a series of purely physical processes as something which needs intelligence. gpuccio @ 156
To change a sequence (functional or not) into the specific functional sequence of the beta chain of ATP synthase, or into the specific functional sequence of Prp8, is empirically impossible.
Then how does it happen? I mean how would an external entity do it? uncommon_avles
Allan- I don't care about statisticians. They cannot help evolutionism. ET
LarTanner- I don't care about probability arguments for the simple reason that evolutionism doesn't deserve a seat at that table. Evolutionists can't figure out how to test their claims and that is more than enough to understand they have nothing. No one knows how to test the claim that undirected processes produced any bacterial flagellum. And given the paper "Waiting for two mutations" it is clear that there isn't enough time in the universe for undirected processes to do such a thing. You lose ET
Nonlin.org at #139 and #157: a) I always use a very explicit and clear definition of functional information. See here: Functional information defined https://uncommondescent.com/intelligent-design/functional-information-defined/ Functional information is complex if it is beyond some appropriate threshold for the system. For a general system, 500 bits is appropriate as a universal threshold (as in Dembski). b) Again, you equate design with law. That is not correct, certainly not with our use of those terms. You say:
Yes, “The content of design in unpredictable, because it depends on the desires and cognitive abilities of the designer”, but the only way you can label something “designed” is to see that it is non-random, i.e. it follows certain rules – those imposed by the designer. In other words, design = regularity.
No. The results of natural laws are regularities, but those results are not designed. The laws could be designed, but this is a cosmological argument. The biological argument of ID detects design inside the universe, not design of the universe. Inside the universe, the results of laws are not designed, because, once the laws exist, there is no need for any conscious intervention for them to operate. c) You conflate and confound different levels and kinds of functional information. You say:
Example: Paley’s watch will have regular shapes and uniform materials that look different than a random pile of matter.
But that is not the reason why we infer design for the watch. We can, at most, infer design for the parts, from that reasoning. We infer design for the watch for the specific configuration of parts that implement the function of measuring time. The individual parts, even if regular, would not allow any measure of time for the simple fact that they are regular. The function derives from the specific configuration of parts that implements the working machine, and that is not a regularity, but a functional specificity. d) You confound random configurations with designed objects. You say:
You look at a sand dune or a sand garden – close-up it’s just “random” grains of sand, but wide-angle you see patterns that beg for an explanation.
And the explanation is simple: those are patterns that are well explained by the action of weather and similar laws. They require no conscious intelligent design. Only the operation of existing laws on an existing system. No design here. e) You confound non detectable design with absence of design. You say:
Can someone design a sand garden to look like a naturally occurring sand dune? Sure, and they’re indistinguishable (because they’re both designed if you ask me)!
The definition of design is any process where conscious representations are the source for the form outputted to matter by the designer. A naturally occurring sand dune is not designed by any conscious designer, unless you argue that everything that exists is designed, exactly as it is, by God. But again, that's a philosophical argument, true or false as it may be. It is not a scientific argument. From a scientific point of view, we have objects that have been designed, because a conscious agent gave them the form they have, in time and space, starting from his conscious representations, and objects that are not designed, because that process never happened (at least in time and space). A designed thing can be indistinguishable from a non-designed thing. If I design dunes so that they appear like natural dunes, and if I am good at it, nobody will be able to detect design from the result. But if somebody sees the process, design can still be affirmed. A lot of designed objects are such that we cannot detect design in them. The usual reason is that they are too simple, even if designed. We cannot infer design for simple configurations, even if the objects are really designed. These are the false negatives of the design inference, and there are a lot of them. Another possible reason is that the designed object, even if complex, is intentionally designed to appear similar to a non-designed object. That's the case of the dune garden. Even if designed, design is not detectable.
You say: “We must distinguish between usual randomness and quantum randomness.” – but this doesn’t make sense to me because “randomness” is ONLY a theoretical concept (like line, circle and point) – we can never determine something to be “random” – again, see: http://nonlin.org/random-abuse/ . Also, what we call “random” is never completely undetermined – all such phenomena have a deterministic element – at a minimum their statistical distribution and boundaries (no six face die will ever come up seven).
Again, you use randomness as though it were a property of objects, to affirm that it is not (which is true), and then say that it is "only a concept" because we cannot determine if something is random (IOWs, again a property of objects). Have you read my comment #112? "Usual randomness just means that there is some system whose evolution is completely deterministic, but we can’t really describe its evolution in terms of necessity, because there are too many variables, or we simply don’t know everything that is implied. In some cases, such a system can be described with some success using an appropriate probability function. Probability functions are well-defined mathematical objects, which can be useful in describing some real systems. A probabilistic description is certainly less precise than a necessity description, but when the second is not available, the first is the best we can do. A lot of empirical science successfully uses probabilistic tools." IOWs, randomness is simply our way to describe a deterministic system by a probability function. Therefore, all your reasonings about it being or not being a property of the objects are wrong. It is a property of our type of scientific description. In the described systems, everything is deterministic, but our description of the configurations is probabilistic. You say: "what we call “random” is never completely undetermined – all such phenomena have a deterministic element" But that makes no sense. All phenomena that we describe as random are completely deterministic. They don't have "a deterministic element". They are completely deterministic (except for quantum events). Of course we choose the probabilistic model so that it correctly models the system. Of course if a die has six possible configurations, we choose a distribution with six levels. For a coin, we choose a distribution with two levels. This is not "a deterministic element". It is only a good way of choosing models. The deterministic elements in tossing a die or a coin are the laws of mechanics, which determine exactly the result of each single event. But we cannot compute those results because we don't know all the variables. Therefore, we describe the configurations by a probability distribution. And we can get very good results in that way.
You say: “An outcome that is non random is not necessarily designed.” How so? Provide example. If you think the sand dune is determined by “natural forces” and the “laws of physics”, then how do you know that it’s not ultimately designed?
If you put objects of different density in water, some will float, some will go down, according to the density. This outcome is not random. And it is not designed. Again, I am not debating whether natural laws are designed or not. That is a different issue. But, given natural laws, no conscious intelligent agent is acting on those objects to make them float or go down. The outcome is not random, and it is not designed. It is deterministic, and it is simple enough so that we can describe it by necessity laws (the objects' density and water's density), without any need for a probabilistic description, which would not be equally precise. Regarding the dune, I will not infer design for it, because it has no complex functional information. If it was designed that way, I get a false negative. As explained, the design inference is made using extremely high thresholds of complexity (for example, 500 bits). The purpose of that is to have empirically no false positives, but the consequence is that we have a lot of false negatives. If you are familiar with the trade-off between sensitivity and specificity, you will understand that point. Here we need specificity, and we happily renounce sensitivity. Our purpose is to detect design correctly and safely in some objects, not to detect all designed objects.
Me: “Probabilities of randomness in biology are ridiculously low, therefore not even worth seriously discussing.” You: “Well, for me they are worth of a very serious discussion. Exactly because they “get extreme very quickly”” Your reply makes no sense whatsoever. Can you explain?
Yes, it makes very good sense. The probabilities of observing objects exhibiting functional complexity as a random result in some non-design system “get extreme very quickly”, indeed exponentially, with the increase in the observed functional information. That's what allows a safe design detection after some appropriate threshold is reached. Again, 500 bits for the general case. Therefore, those probabilities are "worth of a very serious discussion", because they allow us to detect design in biological objects. I hope this answers your points. gpuccio
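A short numerical illustration of why those probabilities "get extreme very quickly" (a sketch only; the figure of 10^150 is used here merely as a deliberately generous stand-in for the probabilistic resources of the whole universe):

import math

def hit_probability(bits):
    # A target specified by b bits of functional information has a
    # probability of 2^-b of being hit by one random attempt.
    return 2.0 ** -bits

resources_bits = math.log2(10 ** 150)   # ~498 bits of "attempts"

for fi in (100, 300, 500, 700):
    print(fi, hit_probability(fi),
          "beyond threshold" if fi > resources_bits else "below threshold")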
Nonlin @ 158: Actually, it is you who is completely lost. Not very impressive... even by a/mat standards. Truth Will Set You Free
#163-
That is your opinion. And seeing tat (sic) you are not an authority no one will listen.
And there go 95 percent of the OPs and comments on UD. LarTanner
ET,
That is your opinion.
And that of statisticians. Allan Keith
Allan:
Looking at the flagellum and trying to calculate the probability of it arising through known evolutionary mechanisms from some ancient starting point is improper use of probability.
That is your opinion. And seeing tat you are not an authority no one will listen. And your equivocation is also duly noted. No one knows how to test the claim that undirected processes produced any bacterial flagellum. And given the paper waiting for two mutations it is clear that there isn't enough time in the universe for undirected processes to do such a thing. ET
Nonlin @158
Nonlin: “It’s consistent” means absolutely nothing. Fact is, you cannot say FOR SURE.
This is where you go wrong. Given a large enough set, if some result is consistent with a random production, then this obviously MEANS (yes it does mean something) that we cannot exclude the possibility of random production — even though it does not provide a basis to be sure. On the other hand, given a large enough set, if a result is not consistent with a random production, then that also tells us something. Question to Nonlin: what can that be? Origenes
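A small sketch of what "consistent with a random production" can mean in practice, using the 10,000-trial example from this exchange and a standard normal approximation to the binomial (the numbers are illustrative only):

import math

def z_score(ones, trials, p=0.5):
    # Distance, in standard deviations, of the observed count of 1s
    # from the expectation under a fair random source.
    expected = trials * p
    sd = math.sqrt(trials * p * (1 - p))
    return (ones - expected) / sd

print(z_score(5000, 10000))   # 0.0: consistent with a fair random source
print(z_score(1000, 10000))   # -80.0: wildly inconsistent with it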
At TSZ, there is interest in the following paper: Random sequences rapidly evolve into de novo promoters, by A.H.Yona et al. The text contains optimistic passages: “These features make promoter evolution a promising avenue to consider how complex features can evolve.” and “Following these, the evolving populations highlighted that new promoters can often emerge directly by mutations, and not necessarily by genome rearrangements that copy an existing promoter. Substantial promoter activity can typically be achieved by a single mutation in a 100-base sequence, and can be further increased in a stepwise manner by additional mutations that improve similarity to canonical promoter elements.” The paper is about “short mutational distances” — it speaks of “only one mutation” and “substantial promoter activity can typically be achieved by a single mutation in a 100-base sequence …” Of course, ID-proponents, like Gpuccio and Behe, have pointed out, often, that such is within the reach of natural selection. So, the paper may not be relevant to ID. What baffles me is that “10% of random sequences can serve as active promoters” and for many others (60%) this function can “typically be achieved by a single mutation”. How can this be? Are promoters so simple that any ol’ sequence will do? Well, not according to the same paper: “The Escherichia coli promoter represents a complex sequence feature as it consists of different elements that act together to transcribe a gene. The RNA polymerase requires particular sequence elements for binding, and additional features, such as transcription factors and small ligands can further affect its activity.” Functional stuff, therefore, but how do we square this with a “short mutational distance” from any random sequence of 103 bases long? Origenes
Gpuccio,
One important point is that this reasoning is about identifying correctly the targets, and not about computing the probabilities. Once we confirm that our targets are real targets, valid targets, then we can compute the probabilities. And decide if we can infer design (or aiming).
But they are still "targets". As such, they are pre-defined whether you admit it or not. Scenario two is obviously a fallacy. Scenario one is not. But the arguments often used by ID are to look at an extant structure and calculate the probability of it arising randomly, as if it arose in one step as a fully formed structure. Which no biologist is suggesting. Looking at the flagellum and trying to calculate the probability of it arising through known evolutionary mechanisms from some ancient starting point is improper use of probability. A proper use of probability would be to start from the same starting point and calculate the probability of any structure of equal complexity evolving through known evolutionary mechanisms. Frankly, I have no idea how this probability could be calculated, but that is what would have to be done to conclude that something we see today is too improbable to have happened in an' undirected' fashion. An analogy would be to start at the starting point of your ancient Roman ancestor (I am assuming that you are Italian). From that point, what is the probability that you, with your unique DNA sequence, would exist on April 20, 2018? Given all of the things that would have had to happen over the thousands of years for this to occur, the probability would be astronomically small. Yet, here you are. A proper use of probability, more akin to what happens with evolution, would be to start from the same starting point and estimate the probability that your ancestor would have a living descendant on April 20, 2018. This probability, obviously, is much higher. Not 1, but close to it. Allan Keith
uncommon alves:
Well, the initial cells weren’t complex.
Evidence please.
When you look at the Slime Mold making ‘decisions’, it seems complex because you haven’t considered the chemo locomotion and pulses of plasmodium flow due to ‘target’ (oatmeal of different concentrations) concentrations.
Where did you get the slime mold from- Walmart?
These are entirely environment based process.
Cuz you say so? Really? ET
Origenes@143 Nonlin: You still don’t get it. I give you 10,000 trials as follows: 101010…10. Can you say it’s “random”?
1. Yes, of course. If, after 10,000 trials, we have 50% “1”, then this is consistent with 1 and 0 production being random. Why is this so difficult for you? 2. Question for Nonlin: if, after 10,000 trials, the outcome is 10% 1 (and 90% 0), what does that tell you about the “randomness” by which 1s and 0s are being produced? According to your claim, “nothing”. Do you now understand that this is wrong?
You're completely lost. 1. "It's consistent" means absolutely nothing. Fact is, you cannot say FOR SURE. As you know, I designed that sequence, so no, it's not random. I also designed a sequence that incorporates randomness: "flip coin and then reverse output for the other 9999 outputs". 2. Yes, "nothing" is the right answer - not from the outcome. Remember it's a black box - you don't know the stats - it might be a 10-face die with one 1 and nine 0s. It can also be a loaded coin or even a fair coin and your trial is just one of many trials (hand-picked or a freak outcome). Of course, if you already know the system, you don't learn anything new from one set of outputs, so the answer is still "nothing". Now you get it? Nonlin.org
gpuccio@112 Me: "Probabilities of randomness in biology are ridiculously low, therefore not even worth seriously discussing." You: "Well, for me they are worth of a very serious discussion. Exactly because they “get extreme very quickly”" Your reply makes no sense whatsoever. Can you explain? Also see Nonlin@139 Nonlin.org
uncommon_avles #154 (continued): c) You say: I thought the argument always was that it is difficult to change the sequence completely? So if even a 4,000-base sequence takes just ~35,485 mutations (4000·ln(4000) + γ·4000 + 1/2), wouldn’t you agree that changing a sequence completely is not difficult and you don’t need an ID agent? I am always amazed at how little our interlocutors understand ID. This is a good example. The argument never was that it is difficult to change the sequence completely. It is not difficult at all, as I have clearly explained. The argument, of course, is that it is extremely difficult, empirically impossible, to find complex functional sequences in the ocean of non-functional possible sequences, by random variation. To change a sequence (functional or not) into another non-functional sequence is extremely easy. To change a sequence (functional or not) into the specific functional sequence of the beta chain of ATP synthase, or into the specific functional sequence of Prp8, is empirically impossible. Do you understand? gpuccio
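The asymmetry can be put in rough numbers (illustrative only: the 500-AA length is a round stand-in for a chain of roughly that size, and a single exact target sequence is an upper bound, since real functional islands are wider than one sequence but still vanishingly small):

import math

# Changing a sequence "completely" costs essentially nothing: almost any
# random walk of enough mutations will do it. Landing on one specific
# functional sequence instead costs this many bits:
def bits_for_exact_target(length, alphabet=20):
    return length * math.log2(alphabet)

print(bits_for_exact_target(500))   # ~2161 bits for a 500-AA chain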
uncommon_avles #154: Your posts are very useful indeed, because they are a good repository of common errors of thought. I will answer those that are, in some way, new to this thread: a) You say:
Well, the initial cells weren’t complex.
??? What "initial cells"? Examples, please. Possibly not mere fairy tales. facts. b) You say:
(No idea how to calculated bits for slime mold . I am assuming it is above 500 bit based on gpuccio’s other examples. If not, let me know how you would calculate bits for this.)
Slime molds are a polyphyletic group. I will give here some information about the genome of Dictyostelium discoideum, the model organism for cellular slime molds. First of all, what is it? From Wikipedia: "Dictyostelium discoideum is a species of soil-living amoeba belonging to the phylum Amoebozoa, infraphylum Mycetozoa. Commonly referred to as slime mold, D. discoideum is a eukaryote that transitions from a collection of unicellular amoebae into a multicellular slug and then into a fruiting body within its lifetime. Its unique asexual lifecycle consists of four stages: vegetative, aggregation, migration, and culmination." So, it is a eukaryote. Not a simple organism at all. The genome: genome size: 34 Mb; chromosomes: 6; number of protein-coding genes: 12,257 (humans: 20,000); mean protein size: 580 AAs (humans: 561); number of genes with introns: 7,996. All data from: http://dictybase.org/Dicty_Info/genome_statistics.html An organism "without brain or nervous system"? OK, but: Using the social amoeba Dictyostelium to study the functions of proteins linked to neuronal ceroid lipofuscinosis https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5122030/
Abstract Neuronal ceroid lipofuscinosis (NCL), also known as Batten disease, is a debilitating neurological disorder that affects both children and adults. Thirteen genetically distinct genes have been identified that when mutated, result in abnormal lysosomal function and an excessive accumulation of ceroid lipofuscin in neurons, as well as other cell types outside of the central nervous system. The NCL family of proteins is comprised of lysosomal enzymes (PPT1/CLN1, TPP1/CLN2, CTSD/CLN10, CTSF/CLN13), proteins that peripherally associate with membranes (DNAJC5/CLN4, KCTD7/CLN14), a soluble lysosomal protein (CLN5), a protein present in the secretory pathway (PGRN/CLN11), and several proteins that display different subcellular localizations (CLN3, CLN6, MFSD8/CLN7, CLN8, ATP13A2/CLN12). Unfortunately, the precise functions of many of the NCL proteins are still unclear, which has made targeted therapy development challenging. The social amoeba Dictyostelium discoideum has emerged as an excellent model system for studying the normal functions of proteins linked to human neurological disorders. Intriguingly, the genome of this eukaryotic soil microbe encodes homologs of 11 of the 13 known genes linked to NCL. The genetic tractability of the organism, combined with its unique life cycle, makes Dictyostelium an attractive model system for studying the functions of NCL proteins. Moreover, the ability of human NCL proteins to rescue gene-deficiency phenotypes in Dictyostelium suggests that the biological pathways regulating NCL protein function are likely conserved from Dictyostelium to human. In this review, I will discuss each of the NCL homologs in Dictyostelium in turn and describe how future studies can exploit the advantages of the system by testing new hypotheses that may ultimately lead to effective therapy options for this devastating and currently untreatable neurological disorder. --- Dictyostelium as a model system for studying human neurological disorders The social amoeba Dictyostelium discoideum is a fascinating microbe that has emerged as a valuable model organism for biomedical and human disease research. This model eukaryote, which has historically been used to study basic cell function and multicellular development, undergoes a 24-h asexual life cycle comprised of both single-cell and multicellular phases [1] (Fig. 1). As a result, it is an excellent system for studying a variety of cellular and developmental processes, including lysosome function and intracellular trafficking and signalling [2, 3]. In nature, Dictyostelium feeds and grows as single cells (Fig. 1). When prompted by starvation, cells undergo chemotactic aggregation towards cAMP to form a multicellular aggregate (i.e., a mound), which then undergoes a series of morphological changes to form a motile multicellular pseudoplasmodium, also referred to as a slug (Fig. 1). Cells within the slug then terminally differentiate into either stalk or spore to form a fruiting body [4] (Fig. 1). Unlike immortalized mammalian cells that have been removed from their respective tissues, Dictyostelium represents a true organism in the cellular state that retains all of its dynamic physiological processes. Moreover, the cellular processes and signalling pathways that regulate the behaviour of Dictyostelium cells are remarkably similar to those observed in metazoan cells, indicating that findings from Dictyostelium are highly likely to be translatable to more complex eukaryotic systems [5]. 
Dictyostelium is recognized as an excellent model system for studying human neurological disorders, including epilepsy, lissencephaly, Parkinson’s disease, Alzheimer’s disease, and Huntington’s disease [6–10].
Emphasis mine. An example of functional complexity in this "simple" eukaryotic cell? Of course, there are thousands. Just one for all. If you have read my previous OP about the spliceosome: The spliceosome: a molecular machine that defies any non-design explanation. https://uncommondescent.com/intelligent-design/the-spliceosome-a-molecular-machine-that-defies-any-non-design-explanation/ you will find an entire section about a long, complex and extremely conserved (in eukaryotes) protein, a highly functional component of the spliceosome system: Prp8. Is that protein present in our "simple" slime mold? Of course it is. How much human-conserved functional information does it show in the slime mold? And the answer is: 3805 bits! (79% identity) I think that's enough. gpuccio
ET @124, bill cole @126, Well, the initial cells weren't complex. The structures became 'complex' over time by aggregation of processes – chemical concentrations, ion exchanges, structural agglomerations, etc. When you look at the slime mold making 'decisions', it seems complex because you haven't considered the chemo-locomotion and the pulses of plasmodium flow due to 'target' concentrations (oatmeal of different concentrations). These are entirely environment-based processes. There is no need of any ID agent despite '500' bit ID complexity. (No idea how to calculate bits for the slime mold. I am assuming it is above 500 bits based on gpuccio's other examples. If not, let me know how you would calculate bits for this.) Origenes @ 148
And, again, if we observe, post hoc of course, that only green bricks are hit, we have support for the idea that the shooter has seen those green bricks and has shot with aim.
..or the bullets were smart bullets seeking green color – akin to a process guided by chemical, ionic, structural or other physical processes, as in the slime mold example I gave earlier. gpuccio @ 150
As you can see, the expected number varies from 2 to 5 times the sequence length, for sequences approximately between 2 and 100. Even for a sequence of 1000, the expected number is about 6-7 times the sequence length.
I thought the argument always was that it is difficult to change the sequence completely? So if even a 4,000-site sequence takes just ~35,485 mutations (4000 x ln(4000) + Euler–Mascheroni constant x 4000 + 1/2, the coupon collector expectation), wouldn't you agree that changing the sequence completely is not difficult and that you don't need an ID agent? uncommon_avles
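As a quick check of that coupon collector figure, a minimal Python sketch (my own illustration; the function name and the chosen lengths are arbitrary) that evaluates the n x ln(n) + gamma x n + 1/2 approximation gives essentially the same numbers:

import math

EULER_MASCHERONI = 0.5772156649

def expected_mutations(n):
    # Coupon collector expectation: mean number of uniform random single-site
    # mutations needed before every one of the n sites has been hit at least once.
    return n * math.log(n) + EULER_MASCHERONI * n + 0.5

for n in (100, 1000, 4000):
    e = expected_mutations(n)
    print(f"length {n:>5}: ~{e:>8.0f} mutations (~{e / n:.1f} times the sequence length)")

# Typical output:
# length   100: ~     519 mutations (~5.2 times the sequence length)
# length  1000: ~    7485 mutations (~7.5 times the sequence length)
# length  4000: ~   35486 mutations (~8.9 times the sequence length)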
DATCG, That’s a very interesting paper. Thanks. Here’s a paper citing the one you quoted. https://doi.org/10.1080/15476286.2017.1403717 OLV
Gpuccio, Came across a paper several weeks ago. Hesitated to add it to your Ubiquitin, Semiosis OP. But hope it's OK to drop it here as I think it adds significance to your case on how life gets more difficult by the day for neo-darwinism, random mutations and natural selection... Case for the genetic code as a triplet of triplets Fabienne F. V. Chevance and Kelly T. Hughes PNAS April 17, 2017 http://www.pnas.org/content/early/2017/04/14/1614896114 Significance
The genetic code for life is a triplet base code. It is known that adjacent codons can influence translation of a given codon and that codon pair biases occur throughout nature. We show that mRNA translation at a given codon can be affected by the two previous codons. Data presented here support a model in which the evolutionary selection pressure on a single codon is over five successive codons, including synonymous codons. This work provides a foundation for the interpretation of how single DNA base changes might affect translation over multiple codons and should be considered in the characterization of the effects of DNA base changes on human disease.
Abstract
The efficiency of codon translation in vivo is controlled by many factors, including codon context. At a site early in the Salmonella flgM gene, the effects on translation of replacing codons Thr6 and Pro8 of flgM with synonymous alternates produced a 600-fold range in FlgM activity. Synonymous changes at Thr6 and Leu9 resulted in a twofold range in FlgM activity. The level of FlgM activity produced by any codon arrangement was directly proportional to the degree of in vivo ribosome stalling at synonymous codons. Synonymous codon suppressors that corrected the effect of a translation-defective synonymous flgM allele were restricted to two codons flanking the translation-defective codon. The various codon arrangements had no apparent effects on flgM mRNA stability or predicted mRNA secondary structures. Our data suggest that efficient mRNA translation is determined by a triplet-of-triplet genetic code. That is, the efficiency of translating a particular codon is influenced by the nature of the immediately adjacent flanking codons. A model explains these codon-context effects by suggesting that codon recognition by elongation factor-bound aminoacyl-tRNA is initiated by hydrogen bond interactions between the first two nucleotides of the codon and anticodon and then is stabilized by base-stacking energy over three successive codons.
Interesting....
Changing the codon on one side of the defective codon resulted in a 10-fold increase in FlgM protein activity. Changing the codon on the other side resulted in a 20-fold decrease. And the two changes together produced a 35-fold increase. “We realized that these two codons, although separated by a codon, were talking to each other,” Hughes says. “The effective code might be a triplet of triplets.”
Natural Selection, gets weaker...
The difficulty for natural selection would be in finding codon optimization for a given gene. If the speed through a codon is dependent on the 5′ and 3′ flanking codons, and the flanking codons are dependent on their 5′ and 3′ flanking codons, then selection pressure on a single codon is exerted over five successive codons, which represent 61^5, or 844,596,301, codon combinations.
To keep this in perspective, remember the Spliceosome and One Gene -> Many Proteins.
If modified tRNAs interact with bases in a codon context-dependent manner that differs among species depending on differences in tRNA modifications, ribosome sequences, and ribosomal and translation factor proteins, it is easy to understand why many genes are poorly expressed in heterologous expression systems in which codon use is the primary factor in the design of coding sequences for foreign protein expression. The potential impact of differences in tRNA modifications represents a significant challenge in designing genes for maximal expression whether by natural selection or in the laboratory.
Yep... more specificity matters....
The tRNA molecules of every organism are modified extensively, and the majority of modifications occur at the antiwobble position of the anticodon loop and at the base immediately 3′ to the anticodon (18). [Thirteen other base positions are modified to a lesser extent in tRNA species of E. coli and Salmonella enterica (7).] The base adjacent to the 3′ anticodon position, the "cardinal nucleotide," also varies among species and is thought to affect codon recognition significantly (19). These modifications influence the stacking energy of the bases during codon–anticodon pairing (3). The translation proofreading steps catalyzed by EF-Tu and EF-G, which "sense" hydrogen bonding and stacking energy to determine if the correct codon–anticodon pairing has occurred, are influenced by the adjacent codons, possibly resulting in the codon-context effects we observe. Moreover, many tRNA-modifying proteins are present in only one of the three kingdoms of life (1). Thus, specific tRNA modifications that affect wobble base recognition and contribute to the base-stacking forces during translation can determine specific codon-context effects by adjacent synonymous codons on specific codon translation. Such effects of specific tRNA modifications on codon translation could account for the different codon pair biases observed in species that are evolutionarily distant (possessing different specific tRNA modifications) and also could account for the difficulty in expressing proteins in heterologous systems, i.e., expressing proteins from plant and mammalian systems in bacteria. The MiaA (i6A37) modification has recently been shown to affect mRNA translation in E. coli in a codon context-dependent manner, supporting our overall hypothesis (20). The translation of proline codons in the mgtL peptide transcript of Salmonella was recently shown to be affected by mutations defective in ribosomal proteins L27, L31, elongation factor EF-P, and TrmD, which catalyzes the m1G37 methylation of proline tRNA (21). Modification of tRNA species in E. coli also has been shown to vary with the growth phase of the cell (22). Specific codon-context effects could represent translation domains of life based on tRNA modifications.
This triplet of triplets puts more constraints on the system. Where does this leave neo-Darwinism? and... "higher order genetic codes..."
The tRNA modifications vary throughout the three kingdoms of life (3) and could affect codon–anticodon pairing. The differences in tRNA modifications could account for differences in synonymous codon biases and for the effects of codon context (the ability to translate specific triplet bases relative to specific neighboring codons) on translation among different species. Here, using in vivo genetic systems of Salmonella, we demonstrate that the translation of a specific codon depends on the nature of the codons flanking both the 5′ and 3′ sides of the translated codon, thus generating higher-order genetic codes for proteins that can include codon pairs and codon triplets. The effect of the flanking codons on the translation of a specific codon varies from insignificant to profound. It has been known for decades that highly expressed genes use highly biased codon pairs, which can vary from one species to the next. The speed of translation depends heavily on flanking codons (4).
So cool. Regulatory factors surrounding other regulatory factors, surrounding a higher code, above the code ;-) I came across this while searching another topic on genetic code. hat tip: https://evolutionnews.org/2017/04/genetic-code-complexity-just-tripled/ DATCG
Gpuccio, once again, great OP. Thanks for your efforts in providing answers and/or rebuttals to different questions and opposing points. Enjoyed reading the OP and comments. And thanks for reviewing/rebutting ye olde Deck of Cards Fallacy as well. Are neo-Darwinists using the Deck of Cards fallacy because they miss what Organized Specificity is: The Three Subsets of Functional Sequence Complexity (FSC)? Or are they intentionally misleading others who may not understand the sleight of hand? I realize some might think it's a valid argument. Are they not familiar with FSC? Or do they make the mistake of equating Random Sequence Complexity (RSC) with FSC? It seems they're forced into making a case for the absurd through a random series of card tricks. But randomness (RSC) is not equal to function (FSC). I think neo-darwinism has been, and is, on a fast track to nowhere. DATCG
bill cole: I have checked with simulations that the number of attempts expected to change all the sites in a sequence by random mutations, if all the sites have the same probability of mutation, is well described by the Coupon collector's problem: https://en.wikipedia.org/wiki/Coupon_collector%27s_problem See also the graph in the Wikipedia page. As you can see, the expected number varies from 2 to 5 times the sequence length, for sequences approximately between 2 and 100. Even for a sequence of 1000, the expected number is about 6-7 times the sequence length. So, in all cases, as already said: "It’s an infinitesimal chunk" gpuccio
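For those who want to check this kind of simulation directly, here is a small Python sketch along the same lines (a toy version of my own; the trial count and the helper name are arbitrary):

import random

def mutations_until_all_sites_hit(length, trials=500):
    # Average number of uniform random single-site mutations needed before
    # every position of a sequence of the given length has been mutated at
    # least once (the coupon collector process referred to above).
    total = 0
    for _ in range(trials):
        hit = set()
        count = 0
        while len(hit) < length:
            hit.add(random.randrange(length))
            count += 1
        total += count
    return total / trials

for length in (10, 100, 1000):
    avg = mutations_until_all_sites_hit(length)
    print(f"length {length:>4}: ~{avg:.0f} mutations (~{avg / length:.1f} times the length)")

# Typical results: roughly 29 (~2.9x), 520 (~5.2x) and 7500 (~7.5x),
# in line with the "a few times the sequence length" point above.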
Origenes: Oh, yes! I did not realize that Allan Keith was referring to the shooter. Of course we don't know anything about what the shooter sees or does; we must infer that from the outcome. Thank you for clarifying that! :) gpuccio
GPuccio @147 There is a first time for everything :) — we are talking past each other.
GP: Only in the first can we look at the wall before the shooting. That's how we know that the targets are already on the wall: it's a case of pre-specification. In the other two scenarios, we cannot look at the wall before the shooting: both are cases of post-specification.
My concern is not about the "we" in your story. My point is that we do not make a priori assumptions about what the shooter sees or does not see. Whether we look before or after the shooting is irrelevant to what the shooter sees. And, again, if we observe, post hoc of course, that only green bricks are hit, we have support for the idea that the shooter has seen those green bricks and has shot with aim. Origenes
Origenes: Of course, the outcome and its probability in the system are the basis for the design inference. However, in my OP I have described three different scenarios. Only in the first can we look at the wall before the shooting. That's how we know that the targets are already on the wall: it's a case of pre-specification. In the other two scenarios, we cannot look at the wall before the shooting: both are cases of post-specification. However, the difference is that in the second scenario the targets are painted on the wall after the shooting: that violates both the rules for a correct post-specification. The targets are not objectively part of the wall, and to paint them we need to use the contingent information in the outcome (we have to know where each bullet is, to paint the target around it). In the third scenario, we just acknowledge that some specific targets that are objectively part of the wall have been hit. We did not know that before observing the outcome. We understand the target by observing the outcome, but both the rules for a correct post-specification are satisfied: the targets are an objective part of the wall, and we are not using the contingent information about the shots to paint anything. The purpose of the reasoning is to show that post-specifications, if they respect those two rules, are as valid as pre-specifications. One important point is that this reasoning is about identifying the targets correctly, and not about computing the probabilities. Once we confirm that our targets are real targets, valid targets, then we can compute the probabilities. And decide if we can infer design (or aiming). For example, in the second scenario we could still compute probabilities for our painted targets, and infer design, but we would be wrong, not because our computation is wrong, but because our targets are not real targets. But in the third scenario, the targets are good, and the computation is as valid as in the first scenario. Again, the example is not about the validity of the computation, but about the validity of the targets. My second point, instead ("The objection of the different possible levels of function definition"), is more connected to the computation itself, and shows that it must consider the upper tail for the observed effect. gpuccio
Allan Keith @133, GPuccio @142
Allan Keith: If I am reading this correctly (no guarantee), you are still looking at the green bricks from a post-hoc perspective. i.e., the green bricks represent specific targets, even though the shooter didn’t see them in advance.
I urge GPuccio to correct me if I am wrong, but I think Allan gets the analogy wrong here. The whole idea of the green brick analogy, as I understand it, is that there are two distinct explanations for the outcome: ‘random shooting’ and ‘aimed shooting.’ IOWs we do not make a priori assumptions about what the shooter sees or does not see, as Allan does. In fact, if we see, on post hoc inspection, that the bullets have only hit green bricks, then we have support for the explanation ‘aimed shooting’ — which is, of course, the whole point. Origenes
bill cole: "Entropy agreed that the study of genetic information is an important academic endeavor and thats what you are doing so I am ok with the progress so far." OK, let me know how it evolves! :) And, please, if there are news from Joe Felsestein about that old issue. You know, the thief... :) gpuccio
bill cole: "I understand your argument and my initial take of Jock's argument was the same. Yes, with random change more variables are required but again a tiny fraction of the sequence space."

In my example the change was random just the same, but I applied it at one site at a time, for simplicity. I did not want to run a simulation, just to show that a pathway from one sequence to another is a really small set of sequences, while the search space is combinatorial, and therefore it increases hugely with the length of the sequence. Entropy and dazz seemed to be under the strange illusion that the simple fact that all sites underwent mutations in some evolutionary time demonstrated that the search space had been traversed. That is a completely senseless idea, and I have made a simple example to make them realize that. That's all.

In a bacterial system, all positions in the genome undergo mutations in a relatively short time. The general mutation rate is about 10^-9 per replication per site, especially in microorganisms. In the Lenski experiment, if I am not wrong, all possible single substitutions must have been reached easily enough. What Entropy and dazz seem not to understand is that one thing is to change each single position in a sequence (something that can be easily attained with a relatively small number of mutations) and quite another thing (really quite another thing) is to reach all the possible combinatorial states of a long sequence. For a 150 AAs sequence, those states are 20^150, 1.427248e+195, 648 bits. That's more than all the states of elementary particles in the whole universe in 15 billion years! How can anyone think that "an enormous chunk of that sequence space has been explored"? That's why I said, correctly, that both Entropy and dazz do not understand combinatorics. And if they don't understand combinatorics, how can they understand ID, least of all discuss it?

I think that DNA_Jock understands combinatorics, even if his statement about the "e^-n" solution remains rather obscure to me. I would be happy to understand it, if he explained it. In case, please let me know. But again, the simple fact that he does not acknowledge the blatant error in the statements made by his fellows, and just tries to find fault with what I have said, is sad evidence of his attitude in this discussion.

Another point that I would like to clarify, because it could be a source of confusion, is that we must distinguish between two different counts of mutations in the biological system:

1) The mutations that take place in all living organisms. As said, a generally accepted mutation rate is 10^-9 per replication per site. Assuming appropriate population sizes, replication times, mean genome length and evolutionary times available, it is perfectly possible to compute higher thresholds for the total number of mutations that could take place on our planet in its lifetime. That's what I have done, with extreme generosity, in my table in the OP about the limits of RV. That's what my number of 139 bits means. The total number of possible states tested on earth (higher threshold, by far). The probabilistic resources of our global system.

2) Another concept is the mutations that we observe in the proteome of organisms. Those are the mutations that have been fixed, and they are of course a tiny subset of all the mutations that have happened. Neutral mutations, in particular, are fixed by drift, a completely random process. Of course, not all neutral mutations are fixed.
And it takes time for a mutation to be fixed by drift. gpuccio
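To put the combinatorial point above in concrete numbers, a short Python sketch (using only the figures quoted in the comment; the variable names are mine) contrasts the size of the 150-AA sequence space with the 2^139 upper bound on testable states:

import math

aa_alphabet = 20        # amino acid alphabet size
length = 150            # protein length used in the example above

search_space = aa_alphabet ** length       # all possible 150-AA sequences
bits = length * math.log2(aa_alphabet)     # the same quantity expressed in bits

print(f"20^150 = {float(search_space):.3e} sequences (~{bits:.0f} bits)")

# The (very generous) upper bound on states testable by RV quoted in the thread
attempts = 2 ** 139
print(f"2^139  = {float(attempts):.3e} attempted states")
print(f"fraction of the space that could even be touched: {attempts / search_space:.1e}")

# Roughly: 1.427e+195 sequences (~648 bits), 6.97e+41 attempts, a fraction of ~4.9e-154.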
Nonlin @
Nonlin: You still don't get it. I give you 10,000 trials as follows: 101010…10. Can you say it's "random"?
Yes, of course. If, after 10,000 trials, we have 50% "1", then this is consistent with 1 and 0 production being random. Why is this so difficult for you? Question for Nonlin: if, after 10,000 trials, the outcome is 10% 1 (and 90% 0), what does that tell you about the "randomness" by which 1s and 0s are being produced? According to your claim, "nothing". Do you now understand that this is wrong? Origenes
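The point is easy to illustrate with a few lines of Python (a toy simulation of my own; the parameters are arbitrary): with 10,000 trials a fair 0/1 process essentially never wanders far from 50% ones, so an outcome like 10% ones is strong evidence that the process is not a fair random one.

import random

def proportion_of_ones(n=10_000, p=0.5):
    # Simulate n independent 0/1 trials with P(1) = p and return the observed
    # proportion of ones.
    return sum(random.random() < p for _ in range(n)) / n

fair = [proportion_of_ones() for _ in range(5)]
print("fair process (p = 0.5):  ", " ".join(f"{x:.3f}" for x in fair))

rigged = [proportion_of_ones(p=0.1) for _ in range(5)]
print("rigged process (p = 0.1):", " ".join(f"{x:.3f}" for x in rigged))

# With n = 10,000 the fair runs cluster tightly around 0.500 (the standard
# deviation of the proportion is only about 0.005), while the rigged runs sit near 0.100.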
Allan Keith at #133:
But first I want to commend you on a well written and thought out OP.
Thank you.
If I am reading this correctly (no guarantee), you are still looking at the green bricks from a post-hoc perspective. i.e., the green bricks represent specific targets, even though the shooter didn’t see them in advance. From this perspective, I agree that hitting these three green bricks by taking three blind shots would be highly improbable. However, let’s add an additional one hundred green bricks (as per your description). Blindly taking three shots has a much higher probability of hitting any three green bricks than blindly hitting three pre-specified green bricks. This is a better analogy to how evolution is purported to proceed.
In the OP I have computed two different probabilities: for 100 green bricks (all of them hit), and for 50 (half hit). I don't think I have ever discussed 3 hits, but I admit that in the image there were only three, so that has probably confused you. No problem. Of course, the probability can be computed for any number of hits on any number of targets, and of course a crucial factor is the total number of bricks (the search space). The wall analogy had only one purpose, as told many times: to show that the TSS fallacy does not apply to good post-hoc specifications.
We can look at these green bricks as all of the possible “targets” that could provide some advantage.
IOWs, all naturally selectable targets. The advantage must be a reproductive advantage. But not all naturally selectable advantages are the same, of course. Antibiotic resistance gives reproductive advantage, in some conditions. But it is a simple trait. On the wall, it would be a huge green sector. It's very easy to hit it. Not so ATP synthase, for example. Or most functional proteins, some more than others. They are like microscopic green points on the wall. Impossible to shoot them from a distance. As I have said many times, if we observe a complex target in a huge search space, it's practically impossible that it has been shot by chance. That's the essence of ID.
Once hit, it is more likely to be preserved in the next generation of wall. And with every generation, three more blind shots are taken, and so on. How many generations will it take before all of the green bricks have been hit at some time throughout the generations? What I am trying to say is that three bricks do not have to be hit all at the same time. The odds improve even more if the wall has multiple offspring per generation
This is a very common error. Nobody says that the bricks have to be hit "at the same time". As said, the analogy of the bricks is not a general model of the biological system. However, a green brick, if we want to make a generic connection between the two scenarios, could represent a complex protein. Shooting a functional island is almost impossible, because it is really extremely small as compared to the search space and to the probabilistic resources (the number of shots). However, I would not insist on the analogies between green bricks and proteins. As said, the only purpose was to state that they show the same kind of correct post-specification. You can find my treatment of the protein problem elsewhere, both in the OP and in the discussion, and in previous OPs, many times quoted. gpuccio
Origenes@137 You still don't get it. I give you 10,000 trials as follows: 101010...10. Can you say it's "random"? Also, in real life you're looking at biological black boxes. You have no prior idea what the stats should be. See? That's what happens when you don't read. Nonlin.org
gpuccio
All this is irrelevant and ridiculous. The simple truth is that the number of mutations necessary to cancel homology is, as said, about 1-3 per site, and that can be achieved with a number of total mutations a few times the length of the sequence.
I understand your argument and my initial take of Jock's argument was the same. Yes, with random change more variables are required, but again a tiny fraction of the sequence space.
Now, I will ask a few very simple questions that even Entropy and Dazz should be able to answer:
This is what "tweaked" these guys. Entropy agreed that the study of genetic information is an important academic endeavor, and that's what you are doing, so I am OK with the progress so far. bill cole
gpuccio@112 Thanks for replying - I thought you just didn't see my comment@85. Many misunderstandings are due to poorly defined words that likely mean different things to different people. Like what is "complex"? Yes, "The content of design is unpredictable, because it depends on the desires and cognitive abilities of the designer", but the only way you can label something "designed" is to see that it is non-random, i.e. it follows certain rules - those imposed by the designer. In other words, design = regularity. Example: Paley's watch will have regular shapes and uniform materials that look different from a random pile of matter. You look at a sand dune or a sand garden – close-up it's just "random" grains of sand, but wide-angle you see patterns that beg for an explanation. Can someone design a sand garden to look like a naturally occurring sand dune? Sure, and they're indistinguishable (because they're both designed if you ask me)! You say: "We must distinguish between usual randomness and quantum randomness." – but this doesn't make sense to me because "randomness" is ONLY a theoretical concept (like line, circle and point) – we can never determine something to be "random" – again, see: http://nonlin.org/random-abuse/ . Also, what we call "random" is never completely undetermined - all such phenomena have a deterministic element – at a minimum their statistical distribution and boundaries (no six-face die will ever come up seven). You say: "An outcome that is non random is not necessarily designed." How so? Provide an example. If you think the sand dune is determined by "natural forces" and the "laws of physics", then how do you know that it's not ultimately designed? Nonlin.org
bill cole: I think he refers to my example with the 15 figures sequence. OK, I expected something like that, given the intellectual and moral level of the discussion there. I had said, rather clearly I believe: "For simplicity, I will make 15 substitutions in the 15 different sites. This is not a requisite, of course, but it makes the explanation easier. I will also go in order, always for the sake of clarity." The purpose was (and is) of course to show clearly that going from one sequence to another, completely different one, does not at all require traversing a great portion of the search space. Which should be evident to all. Of course, if we admit mutations with repetition (which would be the case with random mutations), some more are needed. DNA_Jock, who is apparently more interested in finding fault with me than in making some serious argument, says that the number is e^-n. I don't understand that, and I think he should explain better and give references. I think instead that the number of expected mutations needed to change all the sites can be computed by the Coupon collector's problem formula. See also here: https://en.wikipedia.org/wiki/Coupon_collector%27s_problem That would give 35 tries for my 15 figures sequence. For a 100 figures sequence it would give about 500 tries. So, for my example, the exploration of the search space would be: 35:10^15 = 3.5e-14 A big difference indeed. All this is irrelevant and ridiculous. The simple truth is that the number of mutations necessary to cancel homology is, as said, about 1-3 per site, and that can be achieved with a number of total mutations a few times the length of the sequence. Not "an enormous chunk of sequence space explored". In all cases, it's an infinitesimal chunk. gpuccio
Nonlin @134
Nonlin.org: Very funny. You even quote my passage, but missed or misunderstood: “the outcome does not tell us anything about the Randomness of this process”. Do you get it now?
Well, your claim doesn't hold up due to the law of large numbers. It seems that I have to spell it out for ya. Okay, let me quote Scordova again:
Scordova: As we examine sets of coins that are very large (say 10,000 coins), the outcome will tend to converge so close to 50% heads so frequently that we can say from a practical standpoint, the proportion will be 50% or close to 50% with every shaking of the set.
IOWs, Nonlin, given a large enough set, the outcome, contrary to your claim, does tell us something about the randomness of the process. Bottom line: if after 10,000 trials we do not have a result close to 50% heads, then we know that something is rigged — that heads and tails is not random. Origenes
Allan
Gpuccio, I am late to the party so I apologize if this has already been brought up. But first I want to commend you on a well written and thought out OP.
The analogy was nothing more than addressing the TSS fallacy. It was never meant to be an analogy for evolution. If you want to try to make it an evolutionary analogy, you can start with a wall of 20^20000 bricks. Best of luck to you :-) bill cole
gpuccio
Gpuccio was simulating AA substitutions and Jock changes the argument to nucleic acid (substitutions). A simple straw-man fallacy. Thoughts?
Jock answered me and he is claiming that there is no difference; it is just straight combinatorial statistics. So it will take a long time to make the final substitution based on random mutation. In the end this is just a diversionary tactic on his part, as you could easily define saturation as 99% changed. bill cole
Origenes@115 Why would you assume I am "not familiar with the law of large numbers (LLN)" when you didn't bother to understand the point I am making or to open the link provided to make sure you didn't misread? Very funny. You even quote my passage, but missed or misunderstood: "the outcome does not tell us anything about the Randomness of this process". Do you get it now? Nonlin.org
Gpuccio, I am late to the party so I apologize if this has already been brought up. But first I want to commend you on a well written and thought out OP. I want to read this in greater detail before I venture too far, but I would like to comment on the TSS argument, specifically the green bricks. If I am reading this correctly (no guarantee), you are still looking at the green bricks from a post-hoc perspective. i.e., the green bricks represent specific targets, even though the shooter didn't see them in advance. From this perspective, I agree that hitting these three green bricks by taking three blind shots would be highly improbable. However, let's add an additional one hundred green bricks (as per your description). Blindly taking three shots has a much higher probability of hitting any three green bricks than blindly hitting three pre-specified green bricks. This is a better analogy to how evolution is purported to proceed. We can look at these green bricks as all of the possible "targets" that could provide some advantage. Once hit, it is more likely to be preserved in the next generation of wall. And with every generation, three more blind shots are taken, and so on. How many generations will it take before all of the green bricks have been hit at some time throughout the generations? What I am trying to say is that three bricks do not have to be hit all at the same time. The odds improve even more if the wall has multiple offspring per generation. Obviously this is absurdly over-simplified. For example, as the conditions change (environmental, competition, etc.) the number and location of these green bricks on the wall will change. But it has had the benefit of getting the image of copulating brick walls into your heads. :) Allan Keith
gpuccio origenes
Okay, given what he’s trying to explain, that’s a truly horrendous error. Here’s a tip, folks. At one mutation per site, 36% of the sites will be unchanged. At three mutations per site, 5% of the sites will be unchanged. Ironically, this was another thing I tried to explain to him in 2014. It’s e^-n. And he’s lecturing us on combinatorics. ROFL
This is Jock trying to create unnecessary confusion and to gain the intellectual high ground. Another logical fallacy. What's amazing is that he thinks he can now get away with it. Gpuccio was simulating AA substitutions and Jock changes the argument to nucleic acid substitutions. A simple straw-man fallacy. Thoughts? bill cole
gpuccio
I appreciate your patience. But if he doesn’t understand the basics of combinatorics, why is he even discussing these issues? Isn’t that mere arrogance?
All these guys assume that ID people are stupid, as the smear campaign has tried to paint this image in order to slow the momentum. IDiots is their slogan for ID guys. He has not thought through the problem, as he did not believe it existed. If your perspective is philosophical atheism, you assume the simple-to-complex model of evolution to be true and don't take counterarguments seriously. The interesting question is whether enough evidence that contradicts the philosophy can change the philosophy. bill cole
bill cole: "Entropy is a smart guy but he has not really thought through this very difficult mathematical problem. It looks like he is trying to understand it and that is the first step." I appreciate your patience. But if he doesn't understand the basics of combinatorics, why is he even discussing these issues? Isn't that mere arrogance? gpuccio
Origenes: Of course, they don't know what they are talking about. Saturation at synonymous sites means that each site has been exposed to mutations so that no homology can any longer be detected between the sequences at synonymous sites. At that point, we cannot any more distinguish between a divergence of 400 million years and a divergence of 2 billion years. Of course each site in a protein sequence is exposed to mutation after, say, 400 million years. The rate of mutations per site is something between 1 and 3, in most cases. The only reason that functional sites do not change is that they are functionally constrained, and therefore negative selection preserves them by eliminating variation. IOWs, mutations do happen at those sites too, but they are eliminated, and are practically never fixed. What is really sad is that both Entropy and dazz demonstrate, once more, that they do not understand the basics of the issue. Entropy says: "So, if synonymous substitutions have reached saturation, that means that an enormous chunk of sequence space was “explored.”" And dazz immediately echoes: "If that was true, wouldn’t that mean the “probabilistic resources” would be enough to traverse the entire search space and produce tons of “bits of information.”" I am almost reluctant to point out their blatant error, so obvious is it! However, it seems that I have to do exactly that. Let's try with a simple example:

372944389420147

This is a 15 figures sequence in base 10. Now, let's say that the sequence is completely neutral, without any functional constraint. And let's say that I can operate one substitution per site per minute. For simplicity, I will make 15 substitutions in the 15 different sites. This is not a requisite, of course, but it makes the explanation easier. I will also go in order, always for the sake of clarity. So we get, in 15 minutes, the following results (the mutated site is changed at each step):

1) 572944389420147
2) 522944389420147
3) 528944389420147
4) 528644389420147
5) 528674389420147
6) 528672389420147
7) 528672989420147
8) 528672959420147
9) 528672950420147
10) 528672950820147
11) 528672950840147
12) 528672950845147
13) 528672950845847
14) 528672950845817
15) 528672950845812

The original sequence and the final sequence have completely diverged. No homology is any more detectable. We have saturation:

372944389420147
528672950845812

Now, I will ask a few very simple questions that even Entropy and dazz should be able to answer:

a) How many different states have we reached? (Answer: 15)
b) How many different states exist in the search space? (Answer: 10^15)
c) What "enormous chunk of sequence space" have we explored? Answer: 15:10^15 = 1.5e-14

I think that no further comments are needed. gpuccio
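The walkthrough above can be replayed in a few lines of Python (a sketch of my own; the variable names are arbitrary), confirming that the 15 single-site substitutions erase all homology while visiting only 15 of the 10^15 possible states:

start = "372944389420147"
final = "528672950845812"   # the end sequence reached after the 15 substitutions

seq = list(start)
states_visited = []
for i, new_digit in enumerate(final):
    seq[i] = new_digit               # one substitution per site, left to right
    states_visited.append("".join(seq))

matching_positions = sum(a == b for a, b in zip(start, final))
print(f"positions still matching the original: {matching_positions}")   # 0
print(f"states visited: {len(states_visited)} out of 10^15")            # 15
print(f"fraction of the search space explored: {len(states_visited) / 10**15:.1e}")  # 1.5e-14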
Origenes
Dazz: If that was true, wouldn’t that mean the “probabilistic resources” would be enough to traverse the entire search space and produce tons of “bits of information.”
This is a very faulty assumption that Dazz and Entropy are making. Your arguments were very solid and I am impressed. If you read gpuccio's last post he takes you through the arguments. Evolution cannot traverse even a minuscule fraction of the search space of a single protein. As GP mentioned in the above argument, there are maybe 10^50 searches. The search space of a single ATP synthase protein is greater than 10^500. My experience here is that even very smart people have trouble seeing how large this number really is, and that is the point of gpuccio's last post. We have to be very patient here as it will take time to work through this. You are doing a very good job of arguing at TSZ and keeping the emotions under control. Entropy is a smart guy but he has not really thought through this very difficult mathematical problem. It looks like he is trying to understand it and that is the first step. bill cole
uncommon_avles:
There are no predefined targets in evolution. There is no plan to “shoot the green brick”
That just exacerbates the problem. It doesn't help. You expect us to believe - without evidence or a means to test the claim - that irreducibly complex functioning protein complexes just happened due to some differential accumulations of genetic accidents? Really? ET
uncommon_avles
Dear all who have responded to me, I find that the entire argument for ID is based on your personal incredulity
Do you understand the biochemical mechanisms of slime mold? Where did the genetic information come from for slime mold to navigate the maze? Is your answer random variation plus natural selection? A human fertilized egg can divide from a single cell and eventually become a human being. This is maybe more impressive than slime mold but, like slime mold, requires genetic information that we can count in bits. Like slime mold, it does not start out with a brain but builds one from cell division alone. The only cause of information we know of is design. Can you come up with another cause other than "s-happens"? What side do you really think is pseudoscience? bill cole
GPuccio Can you help me out here? At TSZ I have written a post about your reasoning concerning functionality of certain proteins. I quote you saying:
GP: The reason why I stick usually to the vertebrate transition is very simple: it is much older. There, we have 400+ million years. With mammals, much less. Maybe 100 – 130 million years. Which is not a short time, certainly. But 400 is better. 400 million years guarantees complete and full exposure to neutral variation. That can be easily seen when Ka/Ks ratios are computed. The Ks ratio reaches what is called “saturation” after 400 million years: IOWs, any initial homology between synonymous sites is completely undetectable after that time. That means that what is conserved after that time is certainly conserved because of functional constraint. While 100 million years are certainly a lot of time for neutral variation to occur, still it is likely that part of the homology we observe can be attributed to passive conservation. IOWs, let’s say that we have 95% identity between humans and mouse, for some protein. Maybe some of that homology is simply due to the fact that the split was 80 million years ago: IOWs, some AA positions could be neutral, but still be the same only because there was not enough time to change them. Of course, the bulk of conserved information will still be functionally constrained, but probably not all of it.
The response I got from Entropy and Dazz is rather surprising. Both are extraordinarily impressed by the saturation that stems from all those mutations and what it tells us about the power of evolution …
Entropy: In order to guarantee full exposure to neutral variation, there must be an enormous amount of mutations. Remember that mutations are mostly random. Thus, in order to touch every site, we need a number of mutations well above the length of the sequences. This is where saturation of synonymous sites comes into play, they’d show that there’s been quite a number of mutations. So, if synonymous substitutions have reached saturation, that means that an enormous chunk of sequence space was “explored.” If we were to accept gpuccio’s assumptions, the conclusion would be that evolution performs more-than-enough exploration of sequence space to explain “jumps” in “functional” information (or whatever wording gpuccio might like today).
I did push back a little, but to no avail …
Dazz: If that was true, wouldn’t that mean the “probabilistic resources” would be enough to traverse the entire search space and produce tons of “bits of information.”
Entropy (on Dazz): Somebody seem to be getting it!
Origenes
uncommon_avles, if what you say had any merit you should be able to easily refute our claims. Just how do slime molds help the case for blind and mindless processes? ET
Dear all who have responded to me, I find that the entire argument for ID is based on your personal incredulity. It isn't difficult to attain complex system 'bits'. Just look at how slime mold with no brains can take its own 'decisions', navigate complex mazes, etc. Nature Slime Mold I think a lot of pseudoscience has confused ID supporters. uncommon_avles
To all: Now, I would like to make a few mathematical considerations about the argument, raised many times by the other side, that the observation (however rare) of more than one independent complex solution for a function (or, maybe, for related functions) is an argument in favour of neo-darwinism. I will try to show that the opposite is true. Those who don't love numbers, or who have problems with exponential measures, should probably avoid reading this comment, because it's absolutely about numbers, and big numbers.

The important premise is: there are two key factors in evaluating the probability of a specific functional outcome (like an observed functional protein) in a system (like the biological system) by RV.

1) The first factor, as we know very well, is the functional information in the sequence. That measure already describes two of the important concepts: the size of the functional island and the size of the search space. Indeed, it corresponds to the ratio between the two.

2) The second factor, often underemphasized, is the probabilistic resources of the system. They can be simply defined as the number of different states that can be tested by the system, by RV, in the allotted time. Neo-darwinists have tried to convince us, for decades, that the probabilistic resources of our biological planet are almost infinite, given the long times available and so on. But that's not true. The probabilistic resources of our biological planet are not small, but they are certainly not huge, least of all almost infinite. They are, indeed, very finite, and well computable, at least approximately.

In my OP: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondescent.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ I have offered, in the first table, such a computation. The numbers there are not realistic at all: they have been computed as a higher threshold, making all possible assumptions in favour of the neo-darwinian scenario. They certainly overestimate the real resources, and by far. However, I will stick to those numbers, which I have offered myself.

Now, while the functional information in the protein is certainly the key factor, the second important point is the ratio between that functional information and the probabilistic resources of the system. The bigger the difference between the functional information and the probabilistic resources, the more the improbability of what we observe increases. Exponentially.

So, let's reason with some numbers. I will consider some protein A, whose functional information is 500 bits. Our usual generic threshold for complexity in functional information. That means that, whatever the size of the target space (the functional island) or of the search space (the ocean), their ratio is 2^-500. OK, now let's go to the probabilistic resources. We will consider the most favorable scenario for neo-darwinism: the bacterial system. My (extremely generous) estimate for the number of states that can be reached by the bacterial system by RV in the whole life of our planet is 2^139 (139 bits). It's 138.6 in my table, but I will round it. There is no need here to add the 5 sigma additional bits, because we will explicitly compute probabilities in our example.
So, our scenario is:

Functional information in the protein: 500 bits (3e-151)
Probabilistic resources: 139 bits (7e+41)
Independent complex solutions observed for that function: 1
Probability of finding the observed solution in one attempt: 3e-151
Probability of finding the observed solution using all the available attempts (7e+41): 2.1e-109

This probability is computed using the binomial distribution: it is the probability of getting at least one success with that probability of success and that number of attempts. OK, that's not good for neo-darwinism, of course. One argument frequently raised by them, as discussed in the OP, is that there could be many independent solutions in the protein space for that function. And that is probably true, in many cases. They should, however, be of comparable complexity, because if they were much simpler, we would expect to find the simpler solutions in the proteome, and not the complex ones. But what if we observe two independent complex solutions for the same function? Our interlocutors argue that this is an argument in favor of neo-darwinism, because it demonstrates that it is not so difficult to find complex solutions. I say, instead, that observing two, or more, independent complex solutions for the same function is a stringent argument in favour of design. Let's see why.

Now our scenario is similar to the previous one. The only difference is that we observe not one, but two independent complex solutions. Let's say each one of them has 500 bits of FI, but they are different. For the moment, let's assume that those two solutions are the only ones that exist in the search space. Then the scenario now is:

Functional information in each protein: 500 bits (3e-151)
Functional information in the two islands summed: 499 bits (6e-151)
Probabilistic resources: 139 bits (7e+41)
Independent complex solutions observed for that function: 2
Probability of finding at least two solutions using all the available attempts (7e+41): 8.82e-218

So, we had a probability of 2.1e-109 for one observed solution, and we have now a probability of 8.82e-218 for two observed solutions! A probability that is more than 100 orders of magnitude smaller! But, of course, neo-darwinists will say that the independent complex solutions are many more than two. OK, always using the binomial distribution, how many independent complex solutions of 500 bits of FI each do we need so that, with two observed solutions, we get the same probability as we had for one observed solution, assuming that it was the only one? The answer is: about 10^54 independent complex solutions! Not to have a good probability: just to have the same probability computed for one observed solution if it were the only one: about 2.1e-109! That's all, for the moment. Enough numbers for everyone! :) gpuccio
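The figures above can be reproduced with a few lines of Python, using the Poisson approximation to the binomial, which is extremely accurate here because the per-attempt probability is so tiny (this is just a sketch with the rounded numbers from the comment; the variable names are mine):

import math

p_island = 3e-151     # per-attempt probability of hitting one 500-bit functional island
attempts = 7e41       # probabilistic resources: about 2^139 testable states

lam1 = attempts * p_island                   # expected number of hits, one island
print(f"P(at least 1 hit | one island):   {lam1:.2e}")           # ~2.1e-109

lam2 = attempts * 2 * p_island               # two 500-bit islands (~499 bits combined)
print(f"P(at least 2 hits | two islands): {lam2 ** 2 / 2:.2e}")  # ~8.8e-218, leading term of 1 - e^-lam(1 + lam)

# How many independent 500-bit islands k would be needed so that the probability
# of two observed hits equals the probability computed above for one hit with a
# single island? Solve (k * lam1)^2 / 2 = lam1, i.e. k = sqrt(2 / lam1).
k = math.sqrt(2 / lam1)
print(f"independent islands needed: ~{k:.1e}")                    # ~3e+54, i.e. about 10^54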
BA: Good point! :) gpuccio
Question: If a 12-year-old kid solving three Rubik's Cubes while juggling is to certainly be considered an impressive feat of 'hitting a predetermined target' (i.e. of Intelligent Design),,,
12-year-old kid solves three Rubik's Cubes while juggling - video https://www.liveleak.com/view?t=kfCSB_1523555110
,,, then why is simultaneously solving hundreds (if not thousands) of 'protein folding Rubik's cubes' in each of the trillions of cells of each of our bodies not also to certainly be considered an impressive feat of 'hitting a predetermined target' (i.e. of Intelligent Design)???
Rubik's Cube Is a Hand-Sized Illustration of Intelligent Design – Dec. 2, 2014 Excerpt: The world record (for solving a Rubik's cube) is now 4.904 seconds,,, You need a search algorithm (for solving a Rubik's cube).,,, (Randomly) Trying all 43 x 10^18 (43 quintillion) combinations (of a Rubik's cube) at 1 per second would take 1.3 trillion years. The robot would have a 50-50 chance of getting the solution in half that time, but it would already vastly exceed the time available (about forty times the age of the universe).,,, How fast can an intelligent cause solve it? 4.904 seconds. That's the power of intelligent causes over unguided causes.,,, The Rubik's cube is simple compared to a protein. Imagine solving a cube with 20 colors and 100 sides. Then imagine solving hundreds of different such cubes, each with its own solution, simultaneously in the same place at the same time (in nanoseconds). (That is exactly what is happening in each of the trillions of cells of your body as you read this right now). http://www.evolutionnews.org/2015/12/rubiks_cube_is101311.html The Humpty-Dumpty Effect: A Revolutionary Paper with Far-Reaching Implications - Paul Nelson - October 23, 2012 Excerpt: Put simply, the Levinthal paradox states that when one calculates the number of possible topological (rotational) configurations for the amino acids in even a small (say, 100 residue) unfolded protein, random search could never find the final folded conformation of that same protein during the lifetime of the physical universe. http://www.evolutionnews.org/2012/10/a_revolutionary065521.html Physicists Discover Quantum Law of Protein Folding – February 22, 2011 Quantum mechanics finally explains why protein folding depends on temperature in such a strange way. Excerpt: First, a little background on protein folding. Proteins are long chains of amino acids that become biologically active only when they fold into specific, highly complex shapes. The puzzle is how proteins do this so quickly when they have so many possible configurations to choose from. To put this in perspective, a relatively small protein of only 100 amino acids can take some 10^100 different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.,,, http://www.technologyreview.com/view/423087/physicists-discover-quantum-law-of-protein/
bornagain77
bill cole: "Just thinking out loud here. When we assign probability it is to assign the chance that the cause identified is the actual cause". Just to be precise, that's not exactly how hypothesis testing works. We observe some effect that deserves an explanation. So, we express a theory about the possible explanation of that effect. We call that theory H1. But we ask ourselves if the effect could just be the result of random noise in the data. We call this hypothesis the null hypothesis, H0. We try to model the random noise in the most appropriate way, so that we can compute: the probability of observing that effect, or a stronger one (IOWs the upper tail), if we assume that H0 is true. That probability is the p value for our hypothesis test. If it is really low, we reject the null hypothesis. Does that mean that our H1 hypothesis is correct? Not necessarily. There could be some other explanation, let's say H2, for the observed effect. However, the null hypothesis (a random cause for the observed effect) is rejected anyway. The choice between H1 and H2 is made considering their explanatory merits, but it is not merely probabilistic. So, in our case, the observed effect is the function. We reject the null hypothesis that the explanation for the function is RV. And our H1 is design, because design has the correct explanatory power. Neo-darwinism proposes an algorithm based on RV + NS. But once RV has been rejected as a possible cause for any complex function, NS is powerless, for the reasons debated many times. NS can only optimize, in a very limited measure, an already existing function. It has no role in finding complex functions, nor can it optimize them if they have not been found by RV. Therefore, NS is easily falsifiable as H2. Design is and remains the best explanation, indeed the only one available. gpuccio
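As a toy illustration of that H0 / p-value logic (my own example, with made-up numbers and a normal approximation rather than the exact binomial): suppose we observe 5,100 heads in 10,000 tosses of a supposedly fair coin, and we ask how probable an effect at least that strong is under the null hypothesis of pure random noise.

import math

def upper_tail_p_value(observed, n, p0):
    # Approximate P(X >= observed) for X ~ Binomial(n, p0) via the normal
    # approximation: this is the p-value against the null hypothesis H0 that
    # the observed effect is just random noise at rate p0.
    mean = n * p0
    sd = math.sqrt(n * p0 * (1 - p0))
    z = (observed - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

p_value = upper_tail_p_value(5_100, 10_000, 0.5)
print(f"p-value under H0 (fair coin): {p_value:.4f}")   # ~0.023, so H0 looks doubtful

# The design inference follows the same pattern: the null model is random
# variation, the observed effect is the function, and the upper-tail
# probability (on the order of 1e-109 in the earlier comment) leads to rejecting H0.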
bill cole: The issue is very simple. The wildtype solution is 2000 times more efficient than the one that was found in the experiment. The solution found in the experiment was of course easy to find. It was a big hole, and that explains why it is easy to find, even with a small starting library. The wildtype is 2000 times more efficient and hugely smaller (more specific). According to the authors, 10^70 starting sequences would be necessary to find it. Of course NS has no special targets, but if it found the wildtype instead of the easy, gross solution, it was certainly very lucky. I have already made this point: if NS has no targets, how is it that so many sophisticated targets were found? Indeed, almost exclusively sophisticated and finely crafted targets. Where are the easy solutions, the sequences that can do things with just a few AAs of specificity? How is it that we are surrounded almost exclusively by proteins with specificities in the range of hundreds and thousands of bits? Hundreds of specific and conserved AAs? He says: "I say there's plenty of solutions" Well, there are certainly those that we observe. And others, probably. "and we just happen to know of the ones that prevailed" or just those that were found or designed "and that we have sequenced" well, we have sequenced quite a lot of them, now. A very good sample, certainly representative of the general picture. But of course there is still much to do. gpuccio
gpuccio Here is a discussion with Entropy. I would like your thoughts. colewd: This statement means that random change did not find the wild type and, in addition, the sequence was very different. This is very significant for his position and does not require your straw-man to make it valid.
You're making my point without realizing it, which means that you're not understanding what I wrote. The only way in which that part would be very significant for his position is if he thinks that only finding the wild type sequence will do, which is precisely the problem I mentioned and you called a straw-man. He thinks that the only solution is the one that has been sequenced from wild type phage. I say there's plenty of solutions and we just happen to know of the ones that prevailed and that we have sequenced.
bill cole
Bill Cole @114
Bill Cole: When we see a 500 bit functional sequence outside biology the chance it is designed is 100%. So why an exception for biology?
Because they desperately want there to be no God.
Thomas Nagel: I speak from experience, being strongly subject to this fear myself: I want atheism to be true and am made uneasy by the fact that some of the most intelligent and well-informed people I know are religious believers. It isn’t just that I don’t believe in God and, naturally, hope that I’m right in my belief. It’s that I hope there is no God! I don’t want there to be a God; I don’t want the universe to be like that.
Origenes
Nonlin @
Nonlin: You toss a coin and it always comes up Heads. Does that mean the coin is loaded? What does any other sequence of Heads and Tails tell us? When can we be certain that an outcome is random? In fact, we can never tell from the results whether an outcome is random or not because any particular sequence of outcomes has an equal probability of occurrence. If a coin is fair, 10 Heads in a row has a probability of about 1 in 1,000, but so does HTHTHTHTHT or HHHHHTTTTT or any other sequence of 10 tosses. We can get suspicious and investigate by other means whether the coin is loaded or not, but absent those other findings, the outcome does not tell us anything about the Randomness of this process. — Source: http://nonlin.org/random-abuse/
Nonlin, I take it that you are not familiar with the law of large numbers (LLN). Allow me to give you some pointers:
Scordova: It is the law that tells us systems will tend toward disorganization rather than organization. It is the law of math that makes the 2nd law of thermodynamics a law of physics. Few notions in math are accorded the status of law. We have the fundamental theorem of calculus, the fundamental theorem of algebra, and the fundamental theorem of arithmetic — but the law of large numbers is not just a theorem, it is promoted to the status of law, almost as if to emphasize its fundamental importance to reality.
Wikipedia: the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
Scordova: As we examine sets of coins that are very large (say 10,000 coins), the outcome will tend to converge so close to 50% heads so frequently that we can say from a practical standpoint, the proportion will be 50% or close to 50% with every shaking of the set.
More reading here; see also the short coin-toss simulation sketched below. Origenes
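A quick simulation makes the point concrete (a minimal sketch; the seed and trial counts are arbitrary): the running proportion of heads converges toward 0.5 as trials accumulate, even though any one specific 10-toss sequence has the same tiny probability of about 1 in 1,000.

import random

random.seed(1)                                   # arbitrary seed, for reproducibility
for n in (10, 1_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)                          # proportion of heads approaches 0.5

print(0.5 ** 10)                                 # probability of any single specific 10-toss sequence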
Origenes gpuccio
Did no one tell them that intelligent design is not a random, but, instead, teleological process? There is no way to calculate the probability of intelligent design. Here is a meaningless question: what is the probability of Leonardo Da Vinci painting the Mona Lisa? That is, it is a meaningless question unless one proposes a random mechanism which creates paintings. What is the probability that this post contains an argument against a post made by Rumraket at TSZ? Nonsense question, unless one proposes that this post is created by e.g. a monkey banging away on a typewriter producing forum posts — in which case the probability is very low. I do hope no one seriously considers this to be a possibility :)
Just thinking out loud here. When we assign probability, it is to assign the chance that the cause identified is the actual cause. When we see a functional sequence, what is the chance that it was generated by random change? If it is 500 bits, that chance is effectively zero. When we see a 500 bit functional sequence outside biology the chance it is designed is 100%. So why an exception for biology? bill cole
Origenes: This new argument from Rumracket seems to be an appeal to the old attempt to discredit hypothesis testing in the name of the holy Bayesian truth! :) Mark Frank was very good at that, asking for the priors of design and non design hypotheses. Rumracket is digging for old evergreens with great zeal. My simple objection is very similar to yours: deciding if the existence of a biological designer is credible is not a matter of priors and probabilities: it is just a question of general worldviews about reality. The Bayesian argument, at least in this context, is just a way to camouflage philosophy as a probabilistic argument. Most current science goes on mainly by hypothesis testing, and rejecting null hypotheses. I happily go on with that, too. :) gpuccio
Nonlin.org at #85: I am afraid that I don't really understand your views and your arguments. I suppose that we use some basic concepts in very different ways. I will just answer what I think I understand:
Just “in a sense”? Assuming you don’t see the creator in action, how would you know something was designed other than by observing it follows certain laws? What do you mean: “intentional part is certainly more unpredictable”?
My point is that design is not a law based on regularities. The only regularity is that complex functional information points to a designer. And even that is an inference, not a law. The content of design is unpredictable, because it depends on the desires and cognitive abilities of the designer. No laws there, either.
I would say there is no pure randomness in nature – there’s always a mix of random and nonrandom (law/design) – say you have a black box that emits radioactive decay particles – if you observe these outputs long enough you can be quite certain (probabilistic) about the element inside the box – that is the deterministic component of that experiment.
Great confusion here! We must distinguish between usual randomness and quantum randomness. You cannot conflate the two. Usual randomness just means that there is some system whose evolution is completely deterministic, but we can't really describe its evolution in terms of necessity, because there are too many variables, or we simply don't know everything that is involved. In some cases, such a system can be described with some success using an appropriate probability function. Probability functions are well defined mathematical objects, which can be useful in describing some real systems. A probabilistic description is certainly less precise than a necessity description, but when the second is not available, the first is the best we can do. A lot of empirical science successfully uses probabilistic tools. So, we describe deterministic systems in terms of necessity or probability according to what we can do. The configurations of the molecules in a gas are better studied by probabilistic tools; they cannot be computed in detail. But they remain deterministic just the same. Quantum probability is another matter entirely, and a very controversial issue. It could be intrinsic probability, but not all agree. Radioactive decay is a quantum event, and therefore has the properties of quantum probability. What is probabilistic is the time of the decay event.
Because of the inherent mix of randomness and design, you don’t need to prove every single little thing is designed – just showing an element of design makes the whole thing designed. But again, we do not observe the designer at work – we only see the laws followed by the biologic systems.
I really don't understand this point. I will just say that the "laws followed by the biologic systems" are the general laws of biochemistry. But the configuration that allows specific results to be obtained through those laws is the functional information, and that points to design.
A regular six-face dice has zero probability of an outcome higher than six because the system has been so designed
Many events have zero probabilities in systems that are not designed. And random events do happen in designed objects, without having any special connection with the design. For example, random mutations do happen in genomes, but that has nothing to do with the design in the genome.
It doesn’t matter if the determinism comes before the random event (as discussed) or after as in a twelve-face dice that is rolled again by an agent that seeks only one to six outcomes.
???
In biology, if non-random “natural selection” acts upon “random mutations”, then the outcome is non-random – hence designed – as we observe in nature.
Not in my world, and not according to my ideas and use of words. An outcome that is non-random is not necessarily designed. A designed object is only an object whose configuration has been represented in the consciousness of a conscious, intelligent and purposeful agent, before being outputted to the object. See my OP: Defining Design https://uncommondescent.com/intelligent-design/defining-design/
There should be no doubt that Someone designs and builds the dice, Someone designs and builds the random generator (not easy), and Someone designs and runs the whole experiment.
There is no doubt that I don't understand what you are saying.
Example? Based on outcome, randomness is impossible to determine (it's simple math) http://nonlin.org/random-abuse/ . On the other hand, even a 10-bit sequence has a mere 0.1% probability, so probabilities get extreme very quickly. Probabilities of randomness in biology are ridiculously low, therefore not even worth seriously discussing.
Well, for me they are worth a very serious discussion. Exactly because they "get extreme very quickly". But of course there is no need for you to join the discussion. gpuccio
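To see how quickly they get extreme, one can turn bits of specificity into probabilities directly (a minimal sketch; the bit values are just illustrative):

for n_bits in (10, 100, 500):
    p = 2.0 ** -n_bits
    print(n_bits, p)        # 10 bits ~ 1e-3, 100 bits ~ 8e-31, 500 bits ~ 3e-151

A 10-bit specification is indeed only about 0.1%, but at the 500-bit threshold the probability of hitting the target in a single random attempt drops to roughly 3 x 10^-151.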
Rumraket at TSZ @
Mung: Why don’t people just get brutally honest with gpuccio and let him know that no matter how improbable something may be, that just doesn’t matter.
Rumraket: It does matter, but the problem is Gpuccio is leaving out the probability of design. A design hypothesis has to explain why X happened as opposed to Y, and give a probability of X on design. That whole thing is simply skipped by design proponents. The calculation is attempted for a “blind material process”(and for some reason it’s always assumed to be like a tornado in a junk yard), but no calculation is attempted for design.
We see this argument in various forms at TSZ. Paraphrasing:
The existence of computers cannot be explained by chance? Well, computers can also not be explained by intelligent design! Take that UD! Tit for tat.
Did no one tell them that intelligent design is not a random, but, instead, teleological process? There is no way to calculate the probability of intelligent design. Here is a meaningless question: what is the probability of Leonardo Da Vinci painting the Mona Lisa? That is, it is a meaningless question unless one proposes a random mechanism which creates paintings. What is the probability that this post contains an argument against a post made by Rumraket at TSZ? Nonsense question, unless one proposes that this post is created by e.g. a monkey banging away on a typewriter producing forum posts — in which case the probability is very low. I do hope no one seriously considers this to be a possibility :) Origenes
uncommon_avles: "There is no plan to 'shoot the green brick'" So that it is just the green bricks that get shot is mere coincidence, despite the extreme improbability? "Then why does ID bring in imaginary explanations in biological processes?" What ID does is make observations and suggest explanations for them. Nothing imaginary about it. tribune7
GPuccio @97 Thank you for clarifying your use of terms. For the likes of DNA-Jock it would be helpful if we had one simple, round, easy-to-remember word for a "contingent post-definition" and another for a "non-contingent post-definition." I fear that my proposals "specification based on the outcome" and "independent specification" only made matters worse for them. Poor bastards :) Origenes
OLV at #103: It is factual. There goes my career as a fiction writer. gpuccio
uncommon_avles (and others): Just a brief note about the "no predefined targets in evolution" argument, another masterpiece of neo-darwinian thought. The idea is very simple: Of course there are no predefined targets in the neo-darwinian theory, except maybe reproductive advantage. But there are a lot of targets in the biological world. Targets that have been found, not targets that have to be found. Those targets are real, and neo-darwinian theory has the difficult (indeed, impossible) task of explaining how is it possible that so many extremely sophisticated functional targets have been found by a supposed mechanism that has no targets at all. gpuccio
bill cole: A quick summary of the extreme resources used by our neo-darwinist friends: - Rumracket tries the "infamous deck of cards fallacy": unlikely events happen all the time, statistics is completely useless. - DNA_Jock sticks to the TSS fallacy: our statistics is a fallacy. - uncommon_avles tries some very trivial repetition of the essentials of scientism, reductionism, materialism and naturalism, all together, reciting them as evidence of his blind faith and adding some bold, just in case. Not so exciting... By the way, any news from Joe Felsenstein about functional complexity? That would probably be more interesting. :) gpuccio
uncommon_avles @102
UA: There are no predefined targets in evolution. There is no plan to “shoot the green brick”
Support your claim. Show how it can all come about without a plan.
UA: Facts dictate science so a new discovery tomorrow might explain why a seemingly difficult biological process is not difficult at all.
Until that day of great discovery, the best scientific explanation is intelligent design.
UA: On the other hand, an ‘external agent’/ Intelligent entity capable of controlling entire universe has neither been discovered nor theorized ( as against imagined). It is not supported by any known fact.
Contrary to your claim there is an abundance of fine-tuning arguments. Why do you think that your side imagines a multiverse? Just for the heck of it?
UA: Then why does ID bring in imaginary explanations in biological processes?
Intelligent design is not an ‘imaginary explanation’. How else do you explain the existence of the computer you see in front of you? Origenes
uncommon_avles: On the other hand, an ‘external agent’/ Intelligent entity capable of controlling entire universe has neither been discovered nor theorized ( as against imagined). It is not supported by any known fact.
And yet, contrary to your claim, here we are discussing supportive evidence for an intelligent designer. Origenes
(102) Is the confirmed detection of irreducible functional complexity in biological systems that gpuccio has referred to on numerous occasions factual or imaginary? If that's imaginary, then gpuccio should seriously consider pursuing a very successful career as a fiction writer. His bestselling stories will fill the bookstores everywhere. OLV
gpuccio @ 93
GP: Because the targets are real targets, therefore no TSS applies. We can compute the probability of finding real targets in a real random system. That's not TSS.
There are no predefined targets in evolution. There is no plan to "shoot the green brick".
GP: This is simply wrong. Science is made by reasoning about what we know, the facts, and not about "things that have yet to be discovered", and that are not supported by any known fact.
Science progresses as we discover new facts. We changed from the geocentric to the heliocentric model, from the planetary model of the atom to the probabilistic model. Facts dictate science so a new discovery tomorrow might explain why a seemingly difficult biological process is not difficult at all. On the other hand, an 'external agent'/ Intelligent entity capable of controlling entire universe has neither been discovered nor theorized ( as against imagined). It is not supported by any known fact.
GP: But he does not consider other "possible reasons" like past sins or whatever, because there is no fact that suggests that those imaginary explanations have any merit in this case.
Then why does ID bring in imaginary explanations in biological processes? uncommon_avles
bill cole: "Thinking about proteins as only single enzymes is a fallacy. The key here is the alpha and beta chains must interact to function correctly. There is no hill to climb. If they fail to interact the animal dies." It's much worse than that (for neo-darwinists, of course, or in a sense for the animal too :) ). The alpha and beta chain must interact finely one with the other to buind the final functional unit of the F1 subunit: the hexamer which, indeed, binds ADP and phosphate and generates ATP. IOWs, the catalytic machine. But that's not enough. The alpha and beta hexamer must undergo a series of conformational changes, which are the essence of its catalytic function, because it's those changes that provide the necessary energy that will be "frozen" in the high energy molecule of ATP. Those changes are generated by the rotor, essentially the stalk (the gamma chain) linked to the c-ring, and the c-ring rotates because of the energy derived form the proton gradient (the "water" in the mill). But, of course, the alpha-beta hexamer must be anchored, so that it does not rotate together with the stalk (the gamma chain), but is instead deformed by its rotation, undergoing the needed conformational changes. So, the hexamer must be "anchored" to the F0 subunit, and that is implemented by the "peripheral stalk", the a and b chains. So, our two chain (alpha and beta) must not only interact finely one with the other so that they can build the complex structure that can undergo the three conformational changes necessary for the catalysis; they must also, in their hexameric form, interact correctly with the stalk (the gamma chain), and with the peripheral stalk (the b chains). This is of course a very sophisticated plan for a very sophisticated machine. That explains the high functional specificity of our two sequences. This is from the Wikipedia page:
Binding model: [Figures: "Mechanism of ATP synthase. ADP and Pi (pink) shown being combined into ATP (red), and the rotating gamma subunit in black causing conformational change." / "Depiction of ATP synthase using the chemiosmotic proton gradient to power ATP synthesis through oxidative phosphorylation."] In the 1960s through the 1970s, Paul Boyer, a UCLA Professor, developed the binding change, or flip-flop, mechanism theory, which postulated that ATP synthesis is dependent on a conformational change in ATP synthase generated by rotation of the gamma subunit. The research group of John E. Walker, then at the MRC Laboratory of Molecular Biology in Cambridge, crystallized the F1 catalytic domain of ATP synthase. The structure, at the time the largest asymmetric protein structure known, indicated that Boyer's rotary-catalysis model was, in essence, correct. For elucidating this, Boyer and Walker shared half of the 1997 Nobel Prize in Chemistry. The crystal structure of the F1 showed alternating alpha and beta subunits (3 of each), arranged like segments of an orange around a rotating asymmetrical gamma subunit. According to the current model of ATP synthesis (known as the alternating catalytic model), the transmembrane potential created by (H+) proton cations supplied by the electron transport chain drives the (H+) proton cations from the intermembrane space through the membrane via the FO region of ATP synthase. A portion of the FO (the ring of c-subunits) rotates as the protons pass through the membrane. The c-ring is tightly attached to the asymmetric central stalk (consisting primarily of the gamma subunit), causing it to rotate within the alpha3beta3 of F1, causing the 3 catalytic nucleotide binding sites to go through a series of conformational changes that lead to ATP synthesis. The major F1 subunits are prevented from rotating in sympathy with the central stalk rotor by a peripheral stalk that joins the alpha3beta3 to the non-rotating portion of FO. The structure of the intact ATP synthase is currently known at low resolution from electron cryo-microscopy (cryo-EM) studies of the complex. The cryo-EM model of ATP synthase suggests that the peripheral stalk is a flexible structure that wraps around the complex as it joins F1 to FO. Under the right conditions, the enzyme reaction can also be carried out in reverse, with ATP hydrolysis driving proton pumping across the membrane. The binding change mechanism involves the active site of a beta subunit cycling between three states.[11] In the "loose" state, ADP and phosphate enter the active site; in the adjacent diagram, this is shown in pink. The enzyme then undergoes a change in shape and forces these molecules together, with the active site in the resulting "tight" state (shown in red) binding the newly produced ATP molecule with very high affinity. Finally, the active site cycles back to the open state (orange), releasing ATP and binding more ADP and phosphate, ready for the next cycle of ATP production.[12]
And this is from the PDB "Molecule of the Month" page:
ATP synthase is one of the wonders of the molecular world.
And, of course, in the real world wonders are not cheap things. Highly refined functional information is the price for this wonder! :) gpuccio
bill cole: "The evolutionists have created a “just so” story to try and save the concept that proteins can evolve. I don’t think real biology supports that story." It doesn't. gpuccio
bill cole: "How confident are you that your bit calculation is equivalent to the probability of forming ATP synthase by random chance?" Very confident indeed! :) Of course, it is an indirect and approximate measure, so "equivalent" does not mean "exactly the same thing". But it is a very good way of measuring functional information indirectly, given that we cannot realistically measure it directly because of obvious combinatorial limitations. gpuccio
gpuccio
Of course, Rumracket can describe all kinds of imaginary landscapes: who can contradict pure imagination? He is writing fairy tales, and realism has never been the best inspiration for that kind of things.
How do you think Rum's argument would relate to U2? For some proteins, an amino acid change will stop them from binding to other proteins. In this case the hill climbing is irrelevant. Does it bind or doesn't it? If it doesn't bind, then the function fails. There is no natural selection here, only survival or death. Thinking about proteins as only single enzymes is a fallacy. The key here is that the alpha and beta chains must interact to function correctly. There is no hill to climb. If they fail to interact, the animal dies. The evolutionists have created a "just so" story to try and save the concept that proteins can evolve. I don't think real biology supports that story. The Hayashi paper supports your hypothesis, but it is not a real simulation of evolution, as it only simulates single-cell organisms and simple enzyme reactions. ATP synthase is a very different story, as it involves 13 proteins that must bind together to support a single function. It produces ATP, which is mission critical for life. Is life possible without ATP synthase? If not, this is an original sequence. There is no natural selection event that can help build an original sequence. Natural selection requires cell division to initiate it. You're right: if there is no grounding in science, an evolutionist can make anything true by pure speculation. How confident are you that your bit calculation is equivalent to the probability of forming ATP synthase by random chance? bill cole
Origenes: So, I thought that was clear in what I have written. I must say that I don't use the word "definition" in that sense. That is probably DNA_Jock's equivocation. I think that I have used the terms definition and specification as synonyms, the only difference being that we first give a definition and then use it as a specification. I always say that we have to recognize and define the function, and that the explicit definition of the function, including its observed level, becomes the specification used to measure functional complexity for that function. So, in a sense, a specification is only a definition used to measure functional information (IOWs, to generate a binary partition in the search space). So, when I have said that some types of definitions cannot be used without incurring the TSS fallacy, I have clarified that I meant definitions that re-use the contingent information derived from the event: for example, the specific sequence of AAs. If you want, we can call that "a contingent post-definition", for clarity. So, for me, all definitions are definitions: I see no reason to accept DNA-Jock's equivocal terms.
Some definitions are good specifications:
a) All pre-definitions, whether they are conceptual or contingent.
b) Post-definitions, only if they respect the two requirements I have given for a valid post-specification:
b1) they must be based on some objective property of the system;
b2) they must not use in any way the contingent information in the outcome to build the definition.
Other definitions are not good specifications, and invariably generate a TSS fallacy:
c) All post-definitions which are based on the contingent information in the outcome.
I hope that's clear (for you; I don't think it will ever be clear for DNA_Jock). gpuccio
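The rules above can be restated schematically (a minimal sketch; the function and argument names are illustrative only, not part of any formal procedure):

def is_valid_specification(defined_before_outcome: bool,
                           based_on_objective_property: bool,
                           uses_contingent_outcome_info: bool) -> bool:
    # case a): any pre-definition is a valid specification
    if defined_before_outcome:
        return True
    # case b): a post-definition is valid only if it rests on an objective
    # property of the system (b1) and does not reuse the contingent
    # information in the outcome (b2)
    if based_on_objective_property and not uses_contingent_outcome_info:
        return True
    # case c): post-definitions built on the contingent outcome -> TSS fallacy
    return False

# "a protein implementing the observed enzymatic reaction at at least the observed level"
print(is_valid_specification(False, True, False))    # True
# "a protein with exactly the observed AA sequence"
print(is_valid_specification(False, False, True))    # False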
bill cole: So, after the infamous deck of cards fallacy, Rumracket has now taken the old path of "all could be possible". Not a new choice, that too. But the problem for him is that what is under extremely strong purifying selection is a whole complex sequence, not one amino acid. IOWs, the sequences of the alpha and beta chains of the F1 subunit of ATP synthase are finely crafted, to realize a functional unit which is very much similar to a highly specialized watch. And, while some basic structure is shared with other types of ATPases, which, I believe, is Rumracket's argument, the fine definition of the two sequences, which implies 500+ additional bits for the beta sequence and a little less for the alpha, almost 1000 additional bits for the whole structure, is specific to those two chains. That is the functional island of which I am talking, not the 100 bits that are shared between many ATPases of different function and context. That functional island is very specific, as shown by the extremely high conservation of most of its AAs. Therefore, his imaginary ideas about fitness valleys make no sense at all. The sequences are surrounded by vast deserts, at least as regards the specific information that makes them what they are. The fact (true) that part of the basic information is shared with other islands does not help at all. It's the new information that has to be built, not the 100 bits that are already present in other proteins. It's the concept of the complexity of a transition, which our interlocutors seem to find so difficult to understand. Of course, the basic information that is shared is also rather complex, and needs explanation: but that part is probably older, and has a different evolutionary history: it is specific for the class of ATPases, but it is only a small part of the specific information in the alpha and beta chains of ATP synthase. A very small part. Instead, look at the extremely consistent conservation of the alpha and beta chains between bacteria and humans, which demonstrates how constrained these sequences are. And how specifically different from the corresponding sequences in other types of ATPases! Of course, Rumracket can describe all kinds of imaginary landscapes: who can contradict pure imagination? He is writing fairy tales, and realism has never been the best inspiration for that kind of things. gpuccio
GPuccio @75 DNA-Jock does not distinguish between a definition of ATP synthase and a specification of ATP synthase. The latter would be about the function of ATP synthase, while the former would be, among other things, about its sequences. IOWs, the definition of ATP synthase is closely related to what can be called "the outcome" or the result. So, if one uses the definition of ATP synthase as the specification, then one commits the TSS fallacy. DNA-Jock fails to make this distinction. So, whenever you discuss the definition of ATP synthase he immediately thinks that you paint fresh bullseyes. This is an obstacle for the discussion. Here you explain the difference between definition and specification very clearly:
GP: Let’s go to proteins. If I look at the protein and I say: well, my specification is: a 100 AAs protein with the following sequence: …
For clarity: this could be termed a definition of the protein.
GP: ... then I am painting a target, because I am using a sequence that has already come out. That is not correct, and I am committing the TSS fallacy.
Indeed. And this is what DNA-Jock thinks is happening when you discuss the definition of ATP synthase. He does not understand that the definition of ATP synthase is not its specification. Please explain to DNA-Jock how you make a specification.
GP:... if I see that the protein is a very efficient enzyme for a specific biochemical reaction, and using that observation only, and not the specific sequence of the protein (and I can be completely ignorant of it), I define my function as: “a protein that can implement the following enzymatic reaction” (observed function) at at least the following level (upper tail based on the observed function efficiency)” ...
Aha! So, that is a specification! Did you get that DNA-Jock?
GP:... then my post-specification is completely valid. I am not committing any TSS fallacy. My target is a real target, and my probabilities are real probabilities.
Origenes
gpuccio Here is a post from Rum at TSZ. colewd: If an amino acid change causes purifying selection then that amino acid is important to the organism's function.
That isn't a subject of contention here. But "It is under purifying selection" =/= "It exists nowhere else in sequence space". For all we know, there could be a hill with a similar function in the immediate vicinity, with a narrow but deep fitness valley (wrt the ATPase/ATP synthase function) separating the other hill from the existing one. You can't actually know that this isn't the case without empirically exploring that surrounding space. The detection of purifying selection for a particular function can at most indicate that there is a hill surrounded by fitness valleys for that function. It does not say ANYTHING about the density by which hills with similar functions exist in sequence space. But interestingly, what we know from the existence of the P-loop NTPase superfamily is that, while it might be the case that the ATPase/ATP synthase function is surrounded by a fitness valley, it is also surrounded by hills constituting other functions.
bill cole
uncommon_avles:
I have no hesitation in acknowledging that TSS fallacy does not apply to the bricks analogy if you assume the green was painted and not due to moss or the bullets were not smart bullets which were seeking green colour targets.
Good. And smart bullets would be designed, I suppose.
However, how can you be sure that TSS fallacy does not apply to biological system, because we have no way of knowing if the structure you are looking at has evolved from other structures or due to processes (akin to ‘natural’ smart bullet) which has yet to be discovered?
Because the targets are real targets, therefore no TSS applies. We can compute the probability of finding real targets in a real random system. That's not TSS. Other explanations, which are not random but imply some role of necessity, can be considered, of course, to the extent that they are available and reasonable. That is part of ID too (see Dembski's explanatory filter). But computing the probability of finding a target by chance is necessary to exclude a random origin. You can find a wide discussion of the role and limitations of NS here: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondescent.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ They don't include "natural smart bullets", of course! :)
Unless we exhaust all reasoning we can’t come to a conclusion.
This is simply wrong. Science is made by reasoning about what we know, the facts, and not about "things that have yet to be discovered", and that are not supported by any known fact.
A doctor looks at symptoms and prescribes medicine. If he is unable to identify any specific bacteria causing the disease, he prescribes a broad spectrum antibiotic – he doesn’t say the disease is due to unnatural phenomenon or due to patients past sin, because he hasn’t studied and eliminated all the possible reasons for the cause of the disease.
That's exactly my point, not yours. The doctor gives the most reasonable explanation (an infectious cause), even if he cannot identify the exact etiology in that case. But he does not consider other "possible reasons" like past sins or whatever, because there is no fact that suggests that those imaginary explanations have any merit in this case. The problem is not that they are "unnatural" (past sins can certainly cause diseases!). The problem is that they have no explanatory power in the scenario we are observing. If the observed disease were liver insufficiency, he would certainly consider the "past sin" of voluntarily drinking too much as a possible cause, with good explanatory power. gpuccio
bill cole: "I think that Jock’s tactic is to try to win an argument by creating confusion and trying to appear to be the authority. What do you think?" The same thing. DNA_Jock is intelligent and competent, but he is also arrogant and obsessed by his own ideas, and he cannot accept that he is wrong even when he is obviously wrong. That's not good. gpuccio
Bill et al, Rumracket always falls back to the "right" mutations. Mutations that we have no idea what they were. For example, voles didn't get the right mutations and remained voles. But the "right" mutations would have transformed them into something other than voles. And so it is with proteins and protein machines. And that is their "argument" - all the while ignoring the two-mutation issue. ET
kairosfocus: Thank you for the great contribution in few words! :) You are perfectly right: the important point is not the absence of other needles (which in principle cannot be excluded, and in many cases can be proved), but the fact that they are still needles in a haystack. IOWs, the existence of alternative complex solutions does not have any relevant effect on the computation of the improbability of one individual needle. It's the functional specificity of each individual needle that counts. That's what I have tried to argue with my discourse about time measuring devices. Evoking only ridiculous answers from DNA_Jock, who probably really believes that the existence of water clocks and candle clocks makes the design inference for a watch a TSS fallacy! Indeed, he seems so certain that we are "painting" the function of measuring time around the random object that is our watch! Any solution that is highly specific is designed. We have absolutely no counter-examples in the whole known universe. The TSS fallacy is often invoked (correctly) in scientific reasoning and statistical analysis in cases where a false clustering is inferred without a correct probabilistic analysis. IOWs, TSS fallacies are an example of seeing forms in clouds, like in the classic "Methinks it is like a weasel" situation. All of us have seen forms in clouds, but of course we don't make a scientific argument to say that they are designed. The error is in giving meaning to clusters that, given the probabilistic resources of the system we are observing, are probably only random configurations. But that does not mean that all clusterings are wrong. If we observe a very strong clustering, with a p-value lower, for example, than 10^-16, we can be rather confident that a real cluster is there. And if there are two more clusters, equally significant, that does not mean that we are committing the TSS fallacy, as DNA_Jock seems to believe: it just means that there are really three significant clusters, and that all of them need explanation. And, of course, we can always change our definition of the cluster, making it more specific or less specific, IOWs tracing bigger or smaller circles around the concentration of data. What happens if we do that? Of course, if we trace a circle that is too big, we dilute the observed effect. No scientist with sense would do that. And if we trace a circle that is too small, we lose significant data: the statistical significance becomes lower. No scientist with sense would do that. So, what do scientists with sense do, all the time? They trace the circle that fits the data best, and they compute the upper tail for the observed effect in the probability distribution that describes the system as a random system. And if the probability of the upper tail is really small, they reject the null hypothesis and consider the cluster a real cluster, which needs to be explained. But according to DNA_Jock, nothing of that is correct: it's all TSS fallacy, because we can trace bigger or smaller circles, so of course we are painting false targets! And I suppose that, in his opinion, Principal Components Analysis and all forms of unsupervised learning are completely useless procedures, TSS fallacies too. gpuccio
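The procedure can be illustrated with a toy upper-tail computation (a minimal sketch; the binomial null model and the numbers are purely illustrative, not taken from any real dataset):

import math

def binom_upper_tail(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p), by direct summation
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# toy null: 1000 random shots, each landing in the candidate cluster region with p = 0.01
n_shots, p_region, observed = 1000, 0.01, 60
p_upper = binom_upper_tail(observed, n_shots, p_region)
print(p_upper)              # on the order of 1e-26
print(p_upper < 1e-16)      # True: reject the "it's just a random cluster" null

Tracing a bigger or a smaller circle amounts to changing p_region and the observed count together; the logic stays the same: compute the upper tail under the random model and reject the null only when it is extremely small.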
GP, very well argued, a case of a sledgehammer vs a peanut in the shell. It is sadly revealing that so many find it so hard to recognise that once a haystack is big enough and (a) needles are deeply isolated, with (b) tightly limited resources that constrain search to a negligible fraction of the space of possibilities then, there is no plausible blind solution to the island of function discovery challenge. At 500 - 1,000 bits for a space of possibilities, we are looking at 3.27*10^150 to 1.07*10^301 possibilities. With an observable cosmos [that's the border between science and metaphysical speculation] of ~ 10^80 atoms and ~ 10^17 s with fast reaction rates ~ 10^12 - 15/s, sol system and cosmos scale searches fall to negligible proportions. And if we are talking earth's biosphere, such kicks in much earlier. Where, relevant functionality is configuration based [thus, a binary description language based on structured Y/N q's is WLOG] and contextually dependent. If you doubt the latter, I just had a case in Ja where a US$ 200+ MAF -- a SECOND time -- with the alleged right part number i/l/o the year and model, was not right. Such functionality is also separately observable and recognisable as configuration-dependent. For instance, we can perturb and see loss of function [hence, rugged landscape issues]. Not that mere facts and logic will suffice for those determined not to see the cogency of a point. Comparative: in the 1920's the Bolsheviks set out on central planning. Mises highlighted the breakdown of ability to value and the resulting incoherence of excessively centralised planning, almost instantly. Sixty years and coming on 100 million lives later it collapsed. A sobering lesson. KF kairosfocus
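The arithmetic behind those figures is easy to check (a minimal sketch; the resource numbers are the rounded cosmos-scale estimates quoted above):

atoms, seconds, ops_per_second = 1e80, 1e17, 1e15      # generous upper-bound estimates
max_searches = atoms * seconds * ops_per_second        # ~1e112 configurations inspected, at most

space_500  = 2.0 ** 500                                # ~3.27e150 possibilities
space_1000 = 2.0 ** 1000                               # ~1.07e301 possibilities

print(max_searches / space_500)                        # ~3e-39 of the 500-bit space
print(max_searches / space_1000)                       # ~9e-190 of the 1000-bit space

Even granting every atom in the observable cosmos a fast search every 10^-15 s for 10^17 s, only a negligible fraction of a 500-bit configuration space could ever be sampled.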
gpuccio @ 58 I have no hesitation in acknowledging that TSS fallacy does not apply to the bricks analogy if you assume the green was painted and not due to moss or the bullets were not smart bullets which were seeking green colour targets. However, how can you be sure that TSS fallacy does not apply to biological system, because we have no way of knowing if the structure you are looking at has evolved from other structures or due to processes (akin to 'natural' smart bullet) which has yet to be discovered? Unless we exhaust all reasoning we can't come to a conclusion. A doctor looks at symptoms and prescribes medicine. If he is unable to identify any specific bacteria causing the disease, he prescribes a broad spectrum antibiotic - he doesn't say the disease is due to unnatural phenomenon or due to patients past sin, because he hasn't studied and eliminated all the possible reasons for the cause of the disease. uncommon_avles
gpuccio Has Rumracket even thought that the reason why the beta chain of ATP synthase and those different chains from other ATPases are so divergent is simply that they are different proteins that do different things, even if with some common basic plan? I think that Rum is just coming up to speed on the argument. His opinion will move as he comes up to speed. Jock has a very committed position on the TSS. This is nothing more than trying to say your method of determining functional information in the form of bits is bogus. I went through the old arguments the best I could to try and figure out how he got so dug into the TSS straw-man. It turns out REC started it by expanding your ATP synthase beta and alpha to ATP function in general. That was the straw-man that Jock pivoted on to bring up the TSS. You made some convincing arguments, but Jock had become irrationally committed. At the end of the day REC was saying you could not measure functional information because the functional space/sequence space ratio is unknowable. We are getting closer to understanding this as new data surfaces. The Hayashi paper is strong support for your position. I think it may turn out that your conserved sequence test is quite workable. I think that Jock's tactic is to try to win an argument by creating confusion and trying to appear to be the authority. What do you think? bill cole
gpuccio- The "argument" against "biological functionality is specified information" is that we only know about the functionality because we observe it after the fact (as if science isn't done via observation and trying to figure out what we are observing). Seriously. ET
gp@2
Design can in a sense be considered a “law”: in the sense that it connects subjective representations and subjective experiences to an outer result. However, design is not a law in the sense of being a predictable regularity. Its cognitive aspect is based on understanding of meanings, including laws, but its intentional part is certainly more unpredictable.
Just "in a sense"? Assuming you don't see the creator in action, how would you know something was designed other than by observing it follows certain laws? What do you mean: "intentional part is certainly more unpredictable"?
Random configurations do exist in reality, and the only way we can describe them is through probabilistic models.
What do you mean? I would say there is no pure randomness in nature - there's always a mix of random and nonrandom (law/design) - say you have a black box that emits radioactive decay particles - if you observe these outputs long enough you can be quite certain (probabilistic) about the element inside the box - that is the deterministic component of that experiment. Because of the inherent mix of randomness and design, you don't need to prove every single little thing is designed - just showing an element of design makes the whole thing designed. But again, we do not observe the designer at work - we only see the laws followed by the biologic systems. A regular six-face dice has zero probability of an outcome higher than six because the system has been so designed. It doesn’t matter if the determinism comes before the random event (as discussed) or after as in a twelve-face dice that is rolled again by an agent that seeks only one to six outcomes. In biology, if non-random “natural selection” acts upon “random mutations”, then the outcome is non-random - hence designed - as we observe in nature. There should be no doubt that Someone designs and builds the dice, Someone designs and builds the random generator (not easy), and Someone designs and runs the whole experiment.
Finally, let's say that if it looks designed, we must certainly seriously consider that it could really be designed. But there are cases of things that look designed and are not designed. Therefore, we need rules to decide in individual cases, and ID theory is about those rules.
Example? Based on outcome, randomness is impossible to determine (it's simple math) http://nonlin.org/random-abuse/ . On the other hand, even a 10-bit sequence has a mere 0.1% probability, so probabilities get extreme very quickly. Probabilities of randomness in biology are ridiculously low, therefore not even worth seriously discussing. Nonlin.org
ET: "Biological functionality is specified information." Of course it is. gpuccio
Specification isn't too complicated for those not on an agenda:
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- Wm. Dembski page 148 of NFL
In the paper "The origin of biological information and the higher taxonomic categories", Stephen C. Meyer wrote:
Dembski (2002) has used the term “complex specified information” (CSI) as a synonym for “specified complexity” to help distinguish functional biological information from mere Shannon information--that is, specified complexity from mere complexity. This review will use this term as well.
Biological functionality is specified information. ET
bill cole: One more interesting fact. I have taken one of the best characterized human ATPase subunits that have some hit with the human beta chain of ATP synthase: ATPase, H+ transporting, lysosomal 56/58kDa, V1 subunit B1 (Renal tubular acidosis with deafness), isoform CRA_b [Homo sapiens] Length 471 AAs Homology with the human beta chain: 115 bits. I have blasted its sequence against proteobacteria, and the best hit is 549 bits and 59% identity. IOWs, this different ATPase is very conserved too, from bacteria to humans. IOWs, the "beta chain of ATP synthase" and the "ATPase, H+ transporting, lysosomal 56/58kDa, V1 subunit B1" are two different proteins with some basic homology and different functions, and each of them is highly conserved from bacteria to humans. So, we have here two different but individually conserved sequences. The simple truth is that sequence is related to function, and to specific function, something that both Rumracket and DNA_Jock try desperately to deny or obfuscate. gpuccio
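The kind of comparison behind these numbers can be illustrated with a toy identity calculation (a minimal sketch; the two fragments are made up for illustration, and real comparisons rely on BLAST alignments and bit scores rather than this naive column count):

def percent_identity(seq_a: str, seq_b: str) -> float:
    # naive per-column identity between two pre-aligned, equal-length sequences
    assert len(seq_a) == len(seq_b)
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# purely hypothetical aligned fragments, standing in for human vs. bacterial stretches
frag_human_like = "ACDEFGHIKLMNPQRSTVWY"
frag_bacterial_like = "ACDEFGHIKLMNPQRSTVWF"
print(percent_identity(frag_human_like, frag_bacterial_like))   # 95.0

High identity maintained across billions of years of separation is what signals strong functional constraint on the sequence.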
Origenes: Correct! But even just shooting the green bricks would be sufficient. :) You are really asking a lot of our shooter! :) gpuccio
bill cole: There is something really strange in their logic. Follow me. The beta chain in Alveolata would be, in their opinion, a clear example of an independent solution, of some unrelated peak that evolution did find, because, of course, there are so many of them! But it has 757 bits of homology and 65% identity with the human sequence, after a separation of maybe more than 1 billion years. Which is practically no divergence at all. But they are not interested in that simple fact; they don't even acknowledge it after it has been explicitly put before their closed eyes. On the other hand, Rumracket's distant parent proteins, with little more than 100 bits of homology and a maximum of 21.5% identity in the same species (homo sapiens), are in their opinion a demonstration that the protein is ubiquitous but can diverge in sequence! Am I missing something? Has Rumracket even thought that the reason why the beta chain of ATP synthase and those different chains from other ATPases are so divergent is simply that they are different proteins that do different things, even if with some common basic plan? Or that the reason why the beta chain of humans and the beta chain of Alveolata and the beta chain of bacteria are so strikingly similar, even after billions of years of separation, is simply that they have the same function, and that a very high specific information in bits is required for that specific function? But probably I am asking too much of him! gpuccio
If, post hoc, a specification can be based on other observed properties than the outcome, then we have a valid post-specification.
“Do you see yonder cloud that’s almost in shape of a camel?”
Suppose that we inspect the wall, after it has been shot 100 times, and discern that the shots form a text-pattern (e.g. ‘Do you see yonder cloud that’s almost in shape of a camel?’). Then this observation would offer us a basis for a specification independent from the outcome. Or does it? This is a crucial question: is this text-pattern based on the outcome? The answer is a resounding “No.” Because the outcome is to be regarded as a collection of distinct results from separate random shots. Origenes
gpuccio
Rumraket simply does not understand the argument, and the role of sequence conservation in it. He is denying a functional specificity which should be obvious to anyone.
Agreed. He is struggling to directly defeat the argument, so he is trying to set up a STRAW-MAN, probably not realizing he is doing this. Your argument is using ATP synthase alpha and beta. He is trying to make it about superfamilies like AAA+, thus changing your argument. Like Jock, he is adding bullets to the wall. We can name this the TSSM, or the Texas sharpshooter straw-man :-) bill cole
gpuccio
Demonstrating so that he has not understood at all why I use that methodology.
This is the point. If he does not address your methodology of calculating information bits, he is failing to address your argument. He accuses you of putting a target around your bullets, but in reality he has been trying to add bullets to your wall. You have not cherry-picked any bullets, which is what the TSS is all about. By adding bullets to the wall he is creating a STRAW-MAN argument. He has committed a straw-man fallacy, which is a very common tactic used against ID. The burden is on Jock to show that your 500 bit calculation is wrong. Since this is difficult, he was instead trying to discount your argument as a fallacy, and ultimately he failed, as he had to commit a logical fallacy in order to challenge your argument. bill cole
bill cole: Rumracket is, again, making a false argument. He has not even understood what I am talking about. I have blasted the beta chain of human ATP synthase against all human proteins. Of course, it has 1061 bits of homology with itself (identity). Let's remember that it also has 663 bits of homology with the same protein in E. coli. Well, do you know how much its homology is with any other human protein, including all those "related" proteins in other kinds of ATPases that Rumracket mentions? In humans? The highest hit is 157 bits, followed by one of 148 bits, then a group of 115 bit values. It has, as already said, 94.7 bits of homology with its sister protein, the alpha chain. IOWs, all these proteins are somewhat related, and they share about 100+ bits of homology among themselves. But the beta chain shares 663 bits of homology with its specific bacterial counterpart, 506 bits more than what it shares with its nearest homologue in the human proteome. Therefore, the beta chain (and the alpha too) are specific proteins, different from all the others mentioned by Rumracket. They are only found in ATP synthase, both the classical form and the Alveolata variant, and they are always extremely conserved. Rumracket simply does not understand the argument, and the role of sequence conservation in it. He is denying a functional specificity which should be obvious to anyone. gpuccio
gpuccio: DNA_Jock is shouting, after all. Over the fence. Again, re-reading the past in his own way. He is right on one thing, however: in our past discourse (with him and REC) that he quotes now, I had accepted as true REC's statement that ATP synthase in Alveolata was divergent. At the time, I did not check his statement in detail, a statement that was assumed by DNA_Jock too, so much so that he has been using it again in his recent comments about the ubiquitin thread. That is my sin, it seems: to have accepted a statement apparently against my argument, as made by two opponents. Two opponents who, while making that statement and using it "against" me, had not even checked if their argument was correct. But of course, the sin is mine, not theirs. Well, the argument is simply wrong. I have discovered that simple fact reading the paper linked by REC (at that time) and by DNA_Jock (now), apparently without their having read it. The alpha and beta chains, the only proteins that I have used in my argument, are not among those that are mentioned as "divergent" in their quoted paper. That's what the authors of their paper say, as I have quoted. My pairwise comparison is something that I did additionally, just to be sure, this time, that what was being said is true. But again, it is my sin, not theirs. As it is probably my sin that DNA_Jock has simply ignored these obvious and proven facts, that Entropy has argued again about the imaginary difference in Alveolata chains that would prove my argument wrong, and so on. Always my sin, of course. But DNA_Jock is not satisfied with that. He raises again (or simply copies) a different objection, that has nothing to do with the previous one, but you know, it's always better to add up anything possible, when we are wrong! So, he states again that the chains are not really so conserved, because I should have aligned hundreds of proteins, and not three. Demonstrating so that he has not understood at all why I use that methodology. Well, I have answered that objection at comment #256 in the English language thread, where most of this old discussion took place: https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/ I could answer it again now, and with more details, but why? Let the past re-read itself. He says: "We are not. We are having a discussion about the tendency of humans to apply paint in a TSS manner, because they believe that this is the only way it could be." Well, he is the only one doing that, as far as I can see. I am certainly not doing that, because the case of what I would call "the TSS fallacy fallacy" is closed, for me. Simply stating the same wrong things is not a discussion, and DNA_Jock is simply doing that (indeed, he does not even re-state them: he just points to his old comments). Just to conclude: it is absolutely true that the alpha and beta chains of ATP synthase in Alveolata have nothing special: they are highly conserved, like in all other organisms. And DNA_Jock does not even have the fairness to admit it. gpuccio
gpuccio He is pointing out that there are many uses of ATP synthase type proteins other than ATP production. I am not sure this challenges your argument; however, it is interesting information that helped me understand the broad uses of these motor proteins and how they are related. He may be trying to make the case for lots of functions, although once you have identified 500 bits of information the "lots of functions" argument becomes challenging, as you pointed out in this OP. bill cole
gpuccio Here is an additional link from Rum. Here’s a nice link: InterPro Homologous Superfamily: P-loop containing nucleoside triphosphate hydrolase (IPR027417) bill cole
gpuccio here is a paper that is relevant to Rumraket's discussion: "Structure and function of the AAA+ nucleotide binding pocket" (Petra Wendler, Susanne Ciniawsky, Malte Kock, Sebastian Kube). https://doi.org/10.1016/j.bbamcr.2011.06.014 bill cole
bill cole: I don't understand. Is Rumracket quoting something, or just saying things that he thinks? Just to know. gpuccio
Instead of using a wall of bricks as an analogy, maybe we should think of Lego bricks or blocks. Is the following interconnected set of Lego blocks the result of chance or intelligence? Could it be a result of pure chance or just "dumb luck?" Does anyone have any doubt? https://cdn.frugalfun4boys.com/wp-content/uploads/2015/01/Simple-Legos-27-Edited.jpg Think of amino acids as a set of just twenty supercharged Lego blocks out of which you can build really functioning motors, power generators, assembly robots and data processing systems (to name just a few of the functions we find going on inside a living cell). john_a_designer
gpuccio A comment from Rumraket that adds value.
Rumraket April 16, 2018 at 6:49 pm The alpha and beta subunits from F-type ATP synthases that gpuccio is obsessing about belong to a big family of hexameric helicases. They are WILDLY divergent in sequence over the diversity of life, and many of them are involved in other processes and functions that have nothing to do with ATP synthase/ATPase. Besides the structural similarities, they all seem to be involved in many different forms of DNA or RNA nucleotide/ribonucleotide processing (such as unwinding of double-stranded DNA or RNA), of which NTP hydrolysis or synthesis as observed in ATP synthase is just one among these many different functions. So not only are they divergent in sequence in ATP synthase machines, versions of the structure are part of many other functions besides ATP hydrolysis and synthesis. Which evolved from which, or do they all derive from a common ancestral function different from any present one? We don't know. But we know that both the sequence and functional space of hexameric helicases goes well beyond the ATP synthase machinery. Their capacity to function as an RNA helicase could be hinting at an RNA world role.
bill cole
uncommon avles- It isn't just the right proteins. You need them in the correct concentrations, at the right time and gathered at the right place. The assembly of any flagellum is also IC. Then there is command and control without which the newly evolved flagellum is useless. ET
GP, I love your posts and you make great points. I have long concluded that the opposition to ID is not based on science and reason but extreme emotion. tribune7
To all: I have just posted this comment in the Ubiquitin thread. I think it is relevant to the discussion here, too, because E3 ligases are one of the examples proposed by DNA_Jock. So, I copy it here too: This recent paper is really thorough, long and detailed. It is an extremely good summary of what is known about the role of ubiquitin in the regulation of the critical pathway of NF-kB signaling, of which we have said a lot during this discussion: The Many Roles of Ubiquitin in NF-kB Signaling http://www.mdpi.com/2227-9059/6/2/43/htm I quote just a few parts:
Abstract: The nuclear factor kB (NF-kB) signaling pathway ubiquitously controls cell growth and survival in basic conditions as well as rapid resetting of cellular functions following environment changes or pathogenic insults. Moreover, its deregulation is frequently observed during cell transformation, chronic inflammation or autoimmunity. Understanding how it is properly regulated therefore is a prerequisite to managing these adverse situations. Over the last years evidence has accumulated showing that ubiquitination is a key process in NF-kB activation and its resolution. Here, we examine the various functions of ubiquitin in NF-kB signaling and more specifically, how it controls signal transduction at the molecular level and impacts in vivo on NF-kB regulated cellular processes. — Importantly, the number of E3 Ligases or DUBs mutations found to be associated with human pathologies such as inflammatory diseases, rare diseases, cancers and neurodegenerative disorders is rapidly increasing [22,23,24]. There is now clear evidence that many E3s and DUBs play critical roles in NF-kB signaling, as will be discussed in the next sections, and therefore represent attractive pharmacological targets in the field of cancers and inflammation or rare diseases. — 3.3. Ubiquitin Binding Domains in NF-kB Signaling Interpretation of the “ubiquitin code” is achieved through the recognition of different kinds of ubiquitin moieties by specific UBD-containing proteins [34]. UBDs are quite diverse, belonging to more than twenty families, and their main characteristics can be summarized as follows: (1) They vary widely in size, amino acid sequences and three-dimensional structure; (2) The majority of them recognize the same hydrophobic patch on the beta-sheet surface of ubiquitin, that includes Ile44, Leu8 and Val70; (3) Their affinity for ubiquitin is low (in the higher µM to lower mM range) but can be increased following polyubiquitination or through their repeated occurrence within a protein; (4) Using the topology of the ubiquitin chains, they discriminate between modified substrates to allow specific interactions or enzymatic processes. For instance, K11- and K48-linked chains adopt a rather closed conformation, whereas K63- or M1-linked chains are more elongated. In the NF-kB signaling pathway, several key players such as TAB2/3, NEMO and LUBAC are UBD-containing proteins whose ability to recognize ubiquitin chains is at the heart of their functions. — 9. In Vivo Relevance of Ubiquitin-Dependent NF-kB Processes NF-kB-related ubiquitination/ubiquitin recognition processes described above at the protein level, regulate many important cellular/organismal functions impacting on human health. Indeed, several inherited pathologies recently identified are due to mutations on proteins involved in NF-kB signaling that impair ubiquitin-related processes [305]. Not surprisingly, given the close relationship existing between NF-kB and receptors participating in innate and acquired immunity, these diseases are associated with immunodeficiency and/or deregulated inflammation. 10. Conclusions Over the last fifteen years a wealth of studies has confirmed the critical function of ubiquitin in regulating essential processes such as signal transduction, DNA transcription, endocytosis or cell cycle. 
Focusing on the ubiquitin-dependent mechanisms of signal regulation and regulation of NF-kB pathways, as done here, illustrates the amazing versatility of ubiquitination in controlling the fate of proteins, building of macromolecular protein complexes and fine-tuning regulation of signal transmission. All these molecular events are dependent on the existence of an intricate ubiquitin code that allows the scanning and proper translation of the various statuses of a given protein. Actually, this covalent addition of a polypeptide to a protein, a reaction that may seem to be a particularly energy consuming process, allows a crucial degree of flexibility and the occurrence of almost unlimited new layers of regulation. This latter point is particularly evident with ubiquitination/deubiquitination events regulating the fate and activity of primary targets often modulated themselves by ubiquitination/deubiquitination events regulating the fate and activity of ubiquitination effectors and so on. — To the best of our knowledge the amazingly broad and intricate dependency of NF-kB signaling on ubiquitin has not been observed in any other major signaling pathways. It remains to be seen whether this is a unique property of the NF-kB signaling pathway or only due to a lack of exhaustive characterization of players involved in those other pathways. Finally, supporting the crucial function of ubiquitin-related processes in NF-kB signaling is their strong evolutionary conservation.
Emphasis mine. The whole paper is amazingly full of fascinating information. I highly recommend it to all, and especially to those who have expressed doubts and simplistic judgments about the intricacy and specificity of the ubiquitin system, in particular the E3 ligases. But what’s the point? They will never change their mind. gpuccio
tribune7: Of course. The key concept is always the complexity that is necessary to implement the function. A very interesting example to understand better the importance of the functional complexity of a sequence, and why complexity is not additive, can be found in the Ubiquitin thread, in my discussion with Joe Felsenstein, from whom we are still awaiting a more detailed answer. It's the thief scenario. See here: The Ubiquitin System: Functional Complexity and Semiosis joined together. https://uncommondescent.com/intelligent-design/the-ubiquitin-system-functional-complexity-and-semiosis-joined-together/#comment-656365 #823, #831, #859, #882, #919 I paste here, for convenience, the final summary of the mental experiment, from comment #919 (to Joe Felsenstein):
The thief mental experiment can be found as a first draft at my comment #823, quoted again at #831, and then repeated at #847 (to Allan Keith) in a more articulated form. In essence, we compare two systems. One is made of one single object (a big safe). The other is made of 150 smaller safes. The sum in the big safe is the same as the sums in the 150 smaller safes put together. That ensures that both systems, if solved, increase the fitness of the thief in the same measure. Let’s say that our functional objects, in each system, are: a) a single piece of card with the 150 figures of the key to the big safe b) 150 pieces of card, each containing the one-figure key to one of the small safes (correctly labeled, so that the thief can use them directly). Now, if the thief owns the functional objects, he can easily get the sum, both in the big safe and in the small safes. But our model is that the keys are not known to the thief, so we want to compute the probability of getting to them in the two different scenarios by a random search. So, in the first scenario, the thief tries the 10^150 possible solutions, until he finds the right one. In the second scenario, he tries the ten possible solutions for the first safe, opens it, then passes to the second, and so on. A more detailed analysis of the time needed in each scenario can be found in my comment #847. So, I would really appreciate it if you could answer this simple question: Do you think that the two scenarios are equivalent? What should the thief do, according to your views? This is meant as an explicit answer to your statement mentioned before: “That counts up changes anywhere in the genome, as long as they contribute to the fitness, and it counts up whatever successive changes occur.” The system with the 150 safes corresponds to the idea of a function that includes changes “anywhere in the genome, as long as they contribute to the fitness”. The system with one big safe corresponds to my idea of one single object (or IC system of objects) where the function (opening the safe) is not present unless 500 specific bits are present.
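As a side note, here is a minimal sketch of the arithmetic behind the two scenarios described above, assuming simple uniform random guessing without repeating failed keys; the 150-digit and 10-symbol figures are the ones from the thought experiment, not anything measured.

```python
# Sketch of the "thief" thought experiment above (illustrative only).
# Assumes uniform random guessing, never repeating a failed key.

N_DIGITS = 150   # length of the key to the big safe
SYMBOLS = 10     # possible values per digit (0-9)

# Scenario (a): one big safe with a single 150-digit key.
# The space is 10^150; on average about half of it must be tried.
big_safe_space = SYMBOLS ** N_DIGITS
big_safe_expected_tries = (big_safe_space + 1) / 2

# Scenario (b): 150 one-digit safes, solved one at a time.
# Each takes on average (10 + 1) / 2 = 5.5 tries, and the successes add up.
small_safes_expected_tries = N_DIGITS * (SYMBOLS + 1) / 2

print(f"one big safe:    ~{big_safe_expected_tries:.2e} expected tries")
print(f"150 small safes: ~{small_safes_expected_tries:.0f} expected tries")
```

With these assumptions the big safe needs on the order of 5 x 10^149 tries, while the 150 small safes need about 825 in total, which is the whole point of the comparison.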
gpuccio
O & GP: "If so, Miller needs to explain his position, since Miller assigns the same probability to pre-specified and post-specified events. He lumps those two categories together." "If a specification is valid, only the complexity of the specification matters for a design inference. If the complexity is the same, there is absolutely no difference between a pre-specification and a valid post-specification." Suppose that, after dealing a deck all day, one particular sequence occurs which causes the lights to come on, music to start playing and a clown to come in with a cake. Could that reasonably be considered a chance event? Any arrangement of the chemistry of the genetic code has an equally minuscule probability, but only if it is done in a specific way does something happen. tribune7
Tribune7 and Origenes: As I have argued at #36, pre-specifications are always valid, and they can use any contingent information, because of course that contingent information does not derive from any random event that has already happened. Post-specifications, instead, are valid only if they are about objective properties and if they don't use any already existing contingent information. If a specification is valid, only the complexity of the specification matters for a design inference. If the complexity is the same, there is absolutely no difference between a pre-specification and a valid post-specification. gpuccio
Tribune7 @57
T7: Now, imagine the exact sequence had been predicted beforehand. Would they still say it was by chance?
I do like your 'simple' question. Indeed, suppose a card dealer successfully specifies beforehand which cards Miller will get, would Miller accept this as a chance event? What if the card dealer gets it right every time all day long? What would Miller say? That, without design, this is impossible, perhaps? If so, Miller needs to explain his position, since Miller assigns the same probability to pre-specified and post-specified events. He lumps those two categories together. And here lies Miller's obvious mistake. In his example (see quote in #33) he smuggles in the specification. Origenes
To all: Not much at TSZ. Entropy continues to conflate the problem of the TSS fallacy with the problem of alternative solutions. I have discussed them both in the OP, but he seems not to be aware of that. Just to help him understand: a) The problem of the TSS fallacy is: is the post-hoc specification valid, and when? I have answered that problem very clearly: any post-hoc specification is valid if the two requisites I have described are satisfied. In that case, there is no TSS fallacy. My two requisites are always satisfied in the ID inferences, therefore the TSS fallacy does not apply to the ID inference. b) Then there is the problem of how to compute the probability of the observed function. Entropy thinks that this too is part of the TSS fallacy, because he follows the wrong reasoning of DNA_Jock. But that has nothing to do with the fallacy itself. At most, it is a minor problem of how to compute probabilities. I have clearly argued that with huge search spaces, and with highly complex solutions, that problem is irrelevant. We can very well compute the specificity of the observed solution, and ignore other possible complex solutions, which would not change the result in any significant way for our purposes. DNA_Jock and Entropy can disagree, but I have discussed the issue, and given my reasons. Everyone can judge for himself. Then there is the issue of the level at which the function must be defined. I have clearly stated that the only correct scientific approach is to define as rejection region the upper tail of the observed effect, as everybody does in hypothesis testing. DNA_Jock does not like my answer, but he has not explained why. He also gives cryptic allusions to some different argument that I could have used, but of course he does not say what it is. And, of course, I suppose that he laughs. Good for him. Finally, Entropy, like DNA_Jock, seems not to have understood the simple fact that the alpha and beta chains of ATP synthase have the same conserved sequence in Alveolata as in all other organisms. Could someone please explain to these people that I have discussed that issue in the OP, with precise references from the literature that they had linked? If they think that I am wrong, I am ready to listen to their reasons. gpuccio
bill cole: Thank you for the kind words! :) I am looking forward to Joe Felsenstein's clarifications. He seems to be one of the last people there willing to discuss reasonably. gpuccio
Origenes at #54: "Yes that is exactly what he does. It is Ken Miller’s mistake all over again." Yes, sometimes our kind interlocutors really help us. Seriously, I am really amazed that they are still using the infamous deck of cards fallacy! What is wrong with their minds? At least DNA_Jock has avoided that intellectual degradation. At least up to now... :) gpuccio
uncommon_avles: Thank you for your comment. What you say is not really connected to the discussion here, but I will answer your points.
I don’t think analogy of bricks and bullets works.
As already discussed with Origenes at #31, #36 and #50, the bricks analogy in the OP has only one purpose: to show that there is a class of systems and events to which the TSS fallacy does not apply. The wall with the green bricks and the protein functions both belong to that class, for exactly the same reasons, which I have explicitly discussed (my two requirements), as detailed in the OP and at #36:
1) The function is recognized after the random shooting (whatever it is), and certainly its explicit definition, including the definition of the levels observed, depends on what we observe. In this sense, our definition is not “independent” from the results. But the first important requisite is that the function we observe and define must be “related to an objectively existing property of the system”. IOWs, the bricks were green before the shooting (we are not considering here the weird proposal about moss made by uncommon_avles at #32). In the case of protein functions, the connection with objectively existing properties of the system is even more clear. Indeed, even if bricks could theoretically be painted after the shooting, biochemical laws are not supposed to come into existence after the proteins themselves. At least, I hope that nobody, even at TSZ, is suggesting that. So, our first requisite is completely satisfied. 2) The second important requisite is that we must “make no use of the specific details of what is observed to “paint” the function”. This is a little less intuitive, so I will try to explain it well. By “specific details” I mean here the contingent information in the result: IOWs, the coordinates of the shots in the case of the wall, or the specific AAs in the sequence in the case of proteins. The rule is simple, and universally applicable: if I need to know and use those specific contingent values to explicitly define the function, then, and only then, I am committing the TSS fallacy.
This is the purpose of the analogy. For that purpose, it is perfectly appropriate. For the rest, of course, it is not a model of a biological system. Then you say:
The point is, in biological process, you need to take time factor and incremental probability into account.
I don't know what you mean by "incremental probability"; I suppose you mean Natural Selection. Of course I take into account the time factor and NS in all my discussions about biological systems. You can find my arguments here: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondescent.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ and here: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondescent.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ In the second OP, right at the beginning, you can find a Table with the computation of the probabilistic resources of biological systems on our planet, for its full lifetime. All those points are of course extremely important, and I have discussed them in great detail. But they have nothing to do with the TSS fallacy argument, which is the issue debated here. So, we have discovered two great truths: a) Any analogy has its limitations b) I cannot discuss everything at the same time
The green colour of brick might not be paint. It might be moss formed over a few months’ time.
As already said, your point is wrong and not pertinent. There are of course ways to distinguish between a painted brick and moss. The important question is: is the property I am using to recognize the function post-hoc an objective property of the system, or am I inventing it now? For protein function, there is absolutely no doubt: the function of a protein is the strict consequence of biochemical laws. I am inventing neither the laws nor the observed function. They are objective properties of the system of our observable universe. So, while you can still have some rather unreasonable doubt about the green brick (the "moss" alternative), there can be no doubt about the protein function. Your idea is probably that the protein could have acquired the function gradually, by a process of RV and NS. But that is not an argument about the TSS fallacy, as I have already explained. The problem here is: can we reject the null hypothesis of a random origin using a specification post-hoc? And the clear answer is: yes, of course, but we have to respect these two requirements (see above). The probabilistic analysis has only one purpose: to reject a random origin. Mechanisms like NS must be evaluated in other ways, considering what they can do and what they cannot do in the observed system. As I have done both in my previous OPs and here.
If you see a property of a biological system which seems improbable at first glance, you should consider the fact that the property might have evolved over time from other dissimilar properties.
Of course, and I have done that a lot of times. But, as said, that has nothing to do with the TSS fallacy. The problem in the TSS fallacy is: is the property I am using in my reasoning an objective property, for which I can build a probabilistic analysis of the hypothesis of a random origin (of course also considering, if appropriate, the role of necessity factors, like NS), or is it a "painted" property, one that did not exist before observing what I am observing? You are conflating different arguments here. I have discussed all of them, but, as said before, not all at the same time. Finally you say:
Thus in the flagellum of the E. coli bacterium, there are around 40 different kinds of proteins but only 23 of these proteins are common to all the other bacterial flagella . Of these 23 proteins just two are unique to flagella. The others all closely resemble proteins that carry out other functions in the cell. This means that the vast majority of the components needed to make a flagellum might already have been present in bacteria before this structure appeared
This is the old (and wrong) argument against Irreducible Complexity. Again, it's another argument, and it has nothing to do with the TSS fallacy. Moreover, I have not used IC in this OP and in this discussion as a relevant argument. My examples are essentially about the functional complexity of single proteins, for example the alpha and beta chains of ATP synthase. But of course the system made by those two proteins together is certainly irreducibly complex. Each of the two proteins is powerless without the other. But each of the two proteins is also functionally complex in its own right. However, the discussion here is not about IC. Again, you conflate different arguments without any reason to do that. gpuccio
Origenes, great point: Given some accuracy recording the outcome, everyone can perform the following cycle all day long: 1. deal cards. 2. make a “specification” based on the outcome. 3. see that outcome and specification match and express puzzlement. Now, imagine the exact sequence had been predicted beforehand. Would they still say it was by chance? What if someone took that deck and rather than dealing them, just built a house of cards? Would they claim it was within the realm of chance? tribune7
gpuccio @ 34, I don’t think analogy of bricks and bullets works. The point is, in biological process, you need to take time factor and incremental probability into account. The green colour of brick might not be paint. It might be moss formed over a few months’ time. If you see a property of a biological system which seems improbable at first glance, you should consider the fact that the property might have evolved over time from other dissimilar properties. Thus in the flagellum of the E. coli bacterium, there are around 40 different kinds of proteins but only 23 of these proteins are common to all the other bacterial flagella . Of these 23 proteins just two are unique to flagella. The others all closely resemble proteins that carry out other functions in the cell. This means that the vast majority of the components needed to make a flagellum might already have been present in bacteria before this structure appeared uncommon_avles
gpuccio
Moreover, he points to our old exchanges instead of dealing with my arguments here. Again, his choice. But I will not go back to re-read the past. I have worked a lot to present my arguments together, and in a new form, and I will answer only to those who deal with the things I have said here.
I looked over the old exchanges and his use of the TSS was fallacious. You are comparing protein sequence data over different species, which seems to have nothing to do with the TSS fallacy. I am grateful for his challenge that got you to write this excellent OP, which was very educational for me, especially the highlights you made on the Hayashi paper. Rumraket usually backs up his claims. I agree that his argument was based on a straw-man fallacy, but honestly I think that's the best he can do. The data here is very problematic for the Neo-Darwinian position. The TSS claim was also a fallacy and a clever argument by Jock, but again it misrepresented your claims. Joe Felsenstein said he would not comment on the TSS OP but would write an OP addressing your definition of information. I look forward to his OP and hope that it generates a more productive discussion between UD and TSZ. From his lecture I do believe that he understands the challenge that genetic information brings to understanding the cause of the diversity of living organisms. Again, thank you so much for this clearly written OP. :-) bill cole
GPuccio @53
GP: Rumraket is doing exactly that: he is using the specific contingent values in a post-hoc specification. So, he is committing a fallacy that ID never commits.
Yes, that is exactly what he does. It is Ken Miller's mistake all over again. "What is the likelihood of that particular collection of mutations?", Rumraket asks. In return I would like to ask him: "What probability are you attempting to compute?" And as a follow-up question: “Are we talking about the probability that the outcome matches a specification informed by the outcome? If so, then the chance is 100%." Origenes
Origenes: Please notice how Rumraket at TSZ has given us a full example of the fallacy I have described:
But that’s silly, because all sufficiently long historical developments will look unbelievably unlikely after the fact. To pick an example, take one of the lineages in the Long Term Evolution experiment with E coli. In this lineage, over 600 particular mutations have accumulated in the E coli genome over the last 25 years. What is the likelihood of that particular collection of mutations?
Emphasis mine. He is clearly violating my second fundamental requisite to avoid the TSS fallacy, as explained both in the OP and in my discussion with you at #36: "2) The second important requisite is that we must “make no use of the specific details of what is observed to “paint” the function”. This is a little less intuitive, so I will try to explain it well. By “specific details” I mean here the contingent information in the result: IOWs, the coordinates of the shots in the case of the wall, or the specific AAs in the sequence in the case of proteins. The rule is simple, and universally applicable: if I need to know and use those specific contingent values to explicitly define the function, then, and only then, I am committing the TSS fallacy." Rumraket is doing exactly that: he is using the specific contingent values in a post-hoc specification. So, he is committing a fallacy that ID never commits. A good demonstration of my point. Should I thank him? :) gpuccio
bill cole: As you have probably noticed, Rumraket: April 15, 2018 at 9:31 pm is just reciting again the infamous deck of cards fallacy. I will not waste my time with him, repeating what I have already said (see #35 here and #859 in the Ubiquitin thread). gpuccio
bill cole: I have read the comment by DNA_Jock, April 15, 2018 at 9:12 pm. What a disappointment. Seriously. He does not want to discuss "over the fence". OK, his choice. Therefore, I will not address him directly either. I can discuss over the fence, and I have done exactly that, but I don't like to shout over the fence to someone who has already declared that he will not respond. Moreover, he points to our old exchanges instead of dealing with my arguments here. Again, his choice. But I will not go back to re-read the past. I have worked a lot to present my arguments together, and in a new form, and I will answer only those who deal with the things I have said here. He seems offended that I have added the "ATP synthase (rather than ATPase)" clarification. Of course he will not believe it, but I have done that only to avoid equivocations. All the discussions here have been about ATP synthase, which I have always called by that name. The official name of the beta chain that I discuss (P06576) is, at Uniprot: "ATP synthase subunit beta, mitochondrial". Of course ATP synthase is also an ATPase, because it can work in both directions. But the term ATPase is less specific, because there are a lot of ATPases that are in no way ATP synthases. See Wikipedia for a very simple reference: ATPase https://en.wikipedia.org/wiki/ATPase So, it was important to clarify that I was of course speaking of ATP synthase, instead of intentionally generating confusion, as he has tried to do. He does not answer my criticism of his level-of-definition argument (the things he says are no answer at all, as anyone can check). Again, his choice. But it is really shameful that he has not even mentioned my point that his argument about my treatment of the alpha and beta chains of ATP synthase is completely wrong. As I have said, the alpha and beta chains of ATP synthase are the same in Alveolata as in all other organisms. So he is wrong, I have clearly said why, quoting the same paper that he linked, and he does not even mention the fact. He is simply ridiculous about my argument regarding time measuring systems. "Omits the water clock and the candle clock". I cannot believe that he says that! Just for the record, this is from the OP:
So, we wonder: are there other solutions to measure time? Are there other functional islands in the search space of material objects? Of course there are. I will just mention four clear examples: a sundial, an hourglass, a digital clock, an atomic clock.
Emphasis added. Is this "whining"? Is this "ignorance or lack of attention" that is "leading me to underestimate the number of other possible ways of achieving any function"? You judge. Again, I quote from my OP:
Does the existence of the four mentioned alternative solutions, or maybe of other possible similar solutions, make the design inference for the traditional watch less correct? The answer, of course, is no. But why? It’s simple. Let’s say, just for the sake of discussion, that the traditional watch has a functional complexity of 600 bits. There are at least 4 additional solutions. Let’s say that each of them has, again, a functional complexity of 500 bits. How much does that change the probability of getting the watch? The answer is: 2 bits (because we have 4 solutions instead of one). So, now the probability is 598 bits. But, of course, there can be many more solutions. Let’s say 1000. Now the probability would be about 590 bits. Let’s say one million different complex solutions (this is becoming generous, I would say). 580 bits. One billion? 570 bits. Shall I go on? When the search space is really huge, the number of really complex solutions is empirically irrelevant to the design inference. One observed complex solution is more than enough to infer design. Correctly. We could call this argument: “How many needles do you need to transform a haystack into a needlestack?” And the answer is: really a lot of them. Our poor 4 alternative solutions will not do the trick.
That said, I am really happy that he does not want to "shout over the fence". This is very bad shouting, arrogant evasion, and certainly not acceptable behaviour from someone who is certainly not stupid. Just to be polite, goodbye to him. gpuccio
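For readers who want to check the haystack/needlestack arithmetic quoted just above, here is a minimal sketch; the 600-bit figure and the solution counts are the illustrative numbers from the quote, not measured values.

```python
import math

# Sketch of the "needlestack" arithmetic quoted above (illustrative numbers).
original_bits = 600  # assumed functional complexity of the observed solution

for n_solutions in (1, 5, 1_000, 1_000_000, 1_000_000_000):
    # With n equally complex solutions, the target region is n times larger,
    # so the improbability shrinks by only log2(n) bits.
    adjusted = original_bits - math.log2(n_solutions)
    print(f"{n_solutions:>13,} complex solutions -> ~{adjusted:.0f} bits")
```

Going from one solution to a billion equally complex ones only moves the figure from 600 to roughly 570 bits, which is the point being made in the quote.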
Origenes: No, that was not what I meant. NS acts only on the shot that has already found a functional island, because it needs an existing, naturally selectable function to act. Going back to the wall metaphor, it's as if the shots that hit a green brick, and only those that hit a green brick, become in some way "centered" after the hit: so, even if they hit the green brick, say, in a corner, there is a mechanism that moves the bullet to the center. That does not happen to the shots that hit the brown bricks. So, NS is a mechanism that works in the protein space, but not as a rule in the wall (I am aware of no mechanism that centers the bullet after the shot). As you said yourself, every analogy has its limitations! One important difference is that the wall model is a random search, while the search in protein space is a random walk. That does not change much in terms of probabilities, but the models are different. So, the model of the ball and holes corresponds better to what happens in protein space. The ball is some sequence, possibly non coding, that changes through neutral variation (it can go in any direction, on the flat plane). As we have said many times, this is the best scenario for finding a new functional island, because already existing functional sequences are already in a hole, and it is extremely difficult for them to move away from it. So, the ball can potentially explore all the search space by neutral variation, but of course it does not have the resources to explore all possible trajectories. The movement of the ball is the random walk. We can think of each new state tested as a discrete movement. Most movements (amino acid substitutions) make the ball move gradually through the protein space, by small shifts, but some types of variation (indels, frameshifts, and so on) can make it move suddenly to different parts of the space. However, each new state is a new try, which can potentially find a hole, but only according to the probabilities of finding it. If a hole is found, and a naturally selectable function appears, then the ball falls in the hole, and most likely its movement will be confined to the functional island itself, until optimization is reached. The higher the optimization, the more difficult it will be for the ball to go out of the hole and start a neutral walk again. A random search and a random walk are two different kinds of random systems, which have many things in common but differ in some aspects. However, the probabilistic computation is essentially not really different: if a target is extremely improbable in a random search (the shooting), it is also extremely improbable in a random walk (the ball), provided of course that the walk does not start from a position near the target: all that is necessary is that the starting position must be unrelated at the sequence level, as is the case, for example, for all the 2000 protein superfamilies. Even in the case where an already functional protein undergoes a sudden functional transition which is in itself complex, like for example in the transition to vertebrates, there is no difference. The fact that the whole protein already had part of the functional information that will be conserved up to humans before the transition does not help to explain the appearance of huge new amounts of specific sequence homology to the human form.
Again, the random walk is from an unrelated sequence (the part of the molecule that had no homology with the human form) to a new functional hole (the new functional part of the sequence that appears in vertebrates and has high homology to the human form, and that will be conserved from then on). The important point is that the functional transition must be complex: as I have said many times, there is no difference, probabilistically, if we build a completely new protein which has 500 bits of human conserved functional information, or if we add 500 bits of human conserved functional information to a protein that already exhibited 300 bits of it, and then goes to 800 bits in the transition. In both cases, we are generating 500 new and functional bits of human conserved information that did not exist before, starting from an unrelated sequence, or part of sequence. gpuccio
gpuccio There are a couple of responses in the TSZ ubiquitin thread. I responded to DNA_Jock briefly, but your comments would be greatly appreciated. bill cole
gpuccio
However, the wildtype has an infectivity of about: e^22.4 = 5,348,061,523 which is about 2000 times greater (from 2.6 millions to 5.3 billions). So, they are still far away from the function of the wildtype, and they have already reached stagnation. Moreover, if you look at the sequences at the bottom of the same Figure, you can see that the best result obtained has no homology to the sequence of the wildtype. As the authors say: “More than one such mountain exists in the fitness landscape of the function for the D2 domain in phage infectivity. The sequence selected finally at the 20th generation has ?=?0.52 but showed no homology to the wild-type D2 domain, which was located around the fitness of the global peak. The two sequences would show significant homology around 52% if they were located on the same mountain. Therefore, they seem to have climbed up different mountains.”
Amazing and helpful. Solid evidence of a separate hole that "traps" the protein away from the wild type. bill cole
GPuccio @37
GP: NS has absolutely no role in the process of shooting the functional islands. The ball falls into the hole (however big or deep it is), and rather quickly reaches the bottom. And stays there. This is the role of NS. Once a functional island has been shot (found), NS can begin to act. And it can, at least in some cases, optimize the existing function, usually by a short ladder of one AA steps. Until the bottom is reached (the function is optimized for that specific functional island). So, NS acts in its two characteristic ways, but only after the functional island has been found ...
This is not immediately clear to me. At the moment that NS optimizes a function, can it be argued that NS has some influence on these "optimizing shots"? Assuming that each new configuration is a new shot, perhaps one can argue that NS indirectly, by fixing the ball in the hole and steering it towards the lowest point, induces more shots to be fired in the area of the hole, rather than somewhere else? IOWs, is there a secondary role for NS in relation to the shots fired during the optimization process? As in, NS never fires the first shot, but, instead, induces some 'follow-up-shots'. Origenes
jdk: Thank you for the link. It seems that I did not take part in that discussion. At present I cannot read that long thread, because as you can see I am rather busy. Is there any specific argument that you would like to propose? gpuccio
mike1962: Thank you very much. Your appreciation is much appreciated! :) gpuccio
Origenes at #41: OK, I would say that we agree perfectly. :) gpuccio
bill cole at #38 and 42: The Hayashi paper is about function retrieval. So, it is not about a completely new function. They changed one domain of the g3p protein of the phage, a 424 AAs long protein necessary for infectivity, with a random sequence of 139 AAs. The protein remained barely functional, and that's what allows them to test RV and NS: the function is still there, even if greatly reduced. The phage can still survive and infect. An important point is that fitness is measured here as the natural logarithm of infectivity, therefore those are exponential values. If you look at Fig. 2, you can see that the initial infectivity is about: e^5 = 148 Their best result is about: e^14.8 = 2,676,445 That's why they say that they had an increase in infectivity of about 17,000-fold. (The numbers are not precise, I am deriving them from the Figure.) However, the wildtype has an infectivity of about: e^22.4 = 5,348,061,523 which is about 2000 times greater (from 2.6 million to 5.3 billion). So, they are still far away from the function of the wildtype, and they have already reached stagnation. Moreover, if you look at the sequences at the bottom of the same Figure, you can see that the best result obtained has no homology to the sequence of the wildtype. As the authors say: "More than one such mountain exists in the fitness landscape of the function for the D2 domain in phage infectivity. The sequence selected finally at the 20th generation has ?=?0.52 but showed no homology to the wild-type D2 domain, which was located around the fitness of the global peak. The two sequences would show significant homology around 52% if they were located on the same mountain. Therefore, they seem to have climbed up different mountains." gpuccio
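Here is a small sketch of the infectivity arithmetic in the comment above; the fitness values are the approximate readings from Fig. 2 mentioned there, so the fold numbers come out only roughly.

```python
import math

# Approximate fitness values (natural log of infectivity) read off Fig. 2
# of the Hayashi et al. paper, as quoted in the comment above.
initial_fitness = 5.0    # starting phage with the random D2 domain
best_evolved    = 14.8   # best clone after 20 generations of RV + selection
wild_type       = 22.4   # wild-type g3p

initial  = math.exp(initial_fitness)   # ~1.5e2
evolved  = math.exp(best_evolved)      # ~2.7e6
wildtype = math.exp(wild_type)         # ~5.3e9

# ~1.8e4 with these rounded readings; the paper reports ~17,000-fold.
print(f"improvement over the start: ~{evolved / initial:,.0f}-fold")
# ~2,000-fold still missing to reach the wild type.
print(f"gap remaining to wild type: ~{wildtype / evolved:,.0f}-fold")
```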
gpuccio
Indeed, falling into a bigger hole (a much bigger hole, indeed) is rather a severe obstacle to finding the tiny hole of the wildtype. Finding it is already almost impossible because it is so tiny, but it becomes even more impossible if the ball falls into a big hole, because it will be trapped there by NS. Therefore, to sum up, both the existence of 2000 isolated protein superfamilies and the evidence from the rugged landscape paper demonstrate that functional islands exist, and that they are isolated in the sequence space.
After my re-read, I see you have answered my question. bill cole
GPuccio @35, @36
GP: Is that Allan Miller at TSZ?
No, I quoted biochemist Ken Miller from Brown University. He presented this argument successfully at the Dover trial.
GP: We have one event: the random generation of a 150 figures number. What is the probability of that event? It depends on how you define the probability. In all probability problems, you need a clear definition of what probability you are computing.
You make a very important point. What is falsely suggested, by Ken Miller and others, is that an independent specification is matched.
GP: So, if you define the problem as follows: “What is the probability of having exactly this result? … (and here you must give the exact sequence for which you are computing the probability)”
Exactly right. Ken Miller, tell us the exact sequence you refer to when you talk about probability, and do NOT use the outcome to produce this specification.
GP: … then the probability is 10^-150. But you have to define the result by the exact contingent information of the result you have already got. IOWs the outcome informed your specification. IOWs, what you are asking is the probability of a result that is what it is.
The ‘specification’ informed by the outcome matches the outcome. Accurately done Ken Miller, but no cigar.
GP: That probability in one try is 1 (100%). Because all results are what they are. All results have a probability of 10^-150. That property is common to all the 10^150 results. Therefore, the probability of having one generic result whose probability is 10^-150 is 1, because we have 10^150 potential results with that property, and no one that does not have it. So, should we be surprised that we got one specific result, that is what it is?
Kenny Miller acted very surprised, like this:
Miller: We can then look back and say ‘my goodness, how improbable this is. We can play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ You know what; that’s absolutely correct.
My goodness!
GP: Not at all. That is the only possible result. The probability is 1. No miracle, of course. Not even any special luck. Just necessity (a probability of one is necessity).
I agree completely. I have attempted to make the exact same point in #33.
GP: A few comments on what you say, and about the word “independent”. Pre-specifications are in a sense “independent” by definition. There is never any problem with them.
I agree. However, unfortunately, obviously, no human can produce pre-specifications of e.g. functional proteins.
GP: The problem arises with post-specifications. You say that they must be “independent”, and I agree. But perhaps the word “independent” can lead to some confusion. So, it’s better to clarify what it means.
In #33 I offered the following clarification:
O: To be clear, here by “independent” is meant independent from the outcome. Such an independent specification can be produced before, during or after the outcome, the only demand which must be met is that it is not informed by the outcome.
GP: But if I see that the protein is a very efficient enzyme for a specific biochemical reaction, and using that observation only, and not the specific sequence of the protein (and I can be completely ignorant of it), I define my function as: “a protein that can implement the following enzymatic reaction” (observed function) at at least the following level (upper tail based on the observed function efficiency)” then my post-specification is completely valid. I am not committing any TSS fallacy. My target is a real target, and my probabilities are real probabilities.
I agree. My only comment is that I still prefer the term “independent specification”. Calling it “post-specification” is confusing and also less accurate. The specification is not based on the outcome and it is therefore irrelevant if it happens before, during or after the outcome. Origenes
It's going to take a few reads to fully digest this, but nicely done. Much appreciation. mike1962
FYI: One time we had a long discussion about dealing cards and specifications here: https://uncommondescent.com/intelligent-design/darwinism-why-its-failed-predictions-dont-matter/ jdk
gpuccio
So, as I have said, NS has no role in the shooting (in finding the hole): it just helps in the falling of the ball to the bottom of the hole.
Can you relate this to the Hayashi paper, which says that, for the specific application they tested, marginal function was easy to find but the wild type would take an enormous library of sequences to find? Is the marginal function a "hole" that can work toward the wild type, or is it a hole that will lead only to slightly better marginal function? bill cole
Origenes: There is another important aspect about shooting the functional islands in the protein space, an aspect that probably has not been sufficiently emphasized in the OP. It's the role of Natural Selection. The important point is: NS has absolutely no role in the process of shooting the functional islands. IOWs, shooting the functional islands is either explained by random shooting (RV) or by aiming (Intelligent Design). NS has no role in that part of the process. Why? Because the equivalent of one random shot, in the case of the protein space, is one single event that generates a different genomic sequence in one individual. IOWs, one RV event. Indeed, my Table about the probabilistic resources of biological systems is based exactly on that idea: how many different sequence configurations can be reached in realistic systems? Each new configuration is a new shot. Now, if we stick to the null hypothesis, and exclude design, each new shot is a random shot, a random event of RV. Unless a functional island is hit by one shot, NS cannot work. So, what is the role of NS in all that? It's easy. Let's go again to our model with balls and holes scattered in a flat plane. The ball moves by RV in the flat plane. Let's assume for the moment that this movement is free and that it finds no obstacles and can go in any direction (IOWs, that the variation is neutral). Let's say that, in the random movement, the ball finds at last one hole (a functional island). What happens? The ball falls into the hole (however big or deep it is), and rather quickly reaches the bottom. And stays there. This is the role of NS. Once a functional island has been shot (found), NS can begin to act. And it can, at least in some cases, optimize the existing function, usually by a short ladder of one AA steps. Until the bottom is reached (the function is optimized for that specific functional island). So, NS acts in its two characteristic ways, but only after the functional island has been found: a) Positive selection expands and fixes each new optimizing variation, quickly reaching the bottom of the hole. This process, as far as we know from the existing examples, is quick and short and rather simple. b) Negative selection, at that point, conserves the optimized result (the ball cannot go out of the hole any more). So, as I have said, NS has no role in the shooting (in finding the hole): it just helps in the falling of the ball to the bottom of the hole. The task of finding the functional islands completely relies on RV (or on design). That's why probabilities are fundamental to distinguish between the two scenarios. gpuccio
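Just to make the ball-and-holes picture above concrete, here is a toy sketch with entirely hypothetical numbers (it is not a biological model): a short sequence drifts neutrally until it happens to land inside a small functional island, and only then can selection pull it to the bottom of the hole. Shrinking the island relative to the space, or lengthening the sequence, quickly pushes the drift phase beyond any realistic number of tries, which is the probabilistic point at issue.

```python
import random

random.seed(1)

L = 6                 # toy sequence length, alphabet of 10 "residues"
TARGET = [7] * L      # centre of one small functional island (hypothetical)
RADIUS = 1            # island = sequences within this distance of TARGET

def distance(seq):
    return sum(1 for a, b in zip(seq, TARGET) if a != b)

seq = [random.randrange(10) for _ in range(L)]

# Phase 1: neutral random walk. Every mutation is kept, because outside the
# island there is no function for selection to see.
drift_steps = 0
while distance(seq) > RADIUS and drift_steps < 1_000_000:
    seq[random.randrange(L)] = random.randrange(10)
    drift_steps += 1

print(f"island found by drift after {drift_steps} steps: {distance(seq) <= RADIUS}")

# Phase 2: selection. Once inside the hole, only changes that do not worsen
# the function (here, the distance to the optimum) are kept; the ball quickly
# reaches the bottom and stays there.
climb_steps = 0
while distance(seq) > 0 and climb_steps < 1_000_000:
    candidate = list(seq)
    candidate[random.randrange(L)] = random.randrange(10)
    if distance(candidate) <= distance(seq):
        seq = candidate
    climb_steps += 1

print(f"optimized to the bottom of the hole after {climb_steps} more steps")
```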
Origenes at #33: A few comments on what you say, and about the word "independent". Pre-specifications are in a sense "independent" by definition. There is never any problem with them. Typically, pre-specifications can use specific contingent information about the expected result, and not only truly independent specifications, like a function. For example, if I say: now my goal is to get from my coin tossing the following sequence of results... and I mention a sequence of 100 binary values, and after that I toss the coin 100 times and get the pre-specified result, then I am getting 100 bits of specific information. Maybe I have some hidden magnet by which I can design the result, or there is some other explanation. The important point is: I am using the specific contingent "coordinates" of each result in the sequence, but it is legitimate, because I am doing that before the coin tossing. IOWs, I am painting targets before shooting. If I shoot them, I am really a Sharp Shooter. The problem arises with post-specifications. You say that they must be "independent", and I agree. But perhaps the word "independent" can lead to some confusion. So, it's better to clarify what it means. I have tried to do that in the OP with the following considerations:
So, in the end of this section, let’s remind once more the truth about post-hoc definitions: No post-hoc definition of the function that “paints” the function using the information from the specific details of what is observed is correct. Those definitions are clear examples of TSS fallacy. On the contrary, any post-hoc definition that simply recognizes a function which is related to an objectively existing property of the system, and makes no special use of the specific details of what is observed to “paint” the function, is perfectly correct. It is not a case of TSS fallacy.
I have added emphasis here to clarify that two separate conditions must be met to completely avoid any form of TSS fallacy: 1) The function is recognized after the random shooting (whatever it is), and certainly its explicit definition, including the definition of the levels observed, depends on what we observe. In this sense, our definition is not "independent" from the results. But the first important requisite is that the function we observe and define must be "related to an objectively existing property of the system". IOWs, the bricks were green before the shooting (we are not considering here the weird proposal about moss made by uncommon_avles at #32). In the case of protein functions, the connection with objectively existing properties of the system is even more clear. Indeed, even if bricks could theoretically be painted after the shooting, biochemical laws are not supposed to come into existence after the proteins themselves. At least, I hope that nobody, even at TSZ, is suggesting that. So, our first requisite is completely satisfied. 2) The second important requisite is that we must "make no use of the specific details of what is observed to “paint” the function". This is a little less intuitive, so I will try to explain it well. By "specific details" I mean here the contingent information in the result: IOWs, the coordinates of the shots in the case of the wall, or the specific AAs in the sequence in the case of proteins. The rule is simple, and universally applicable: if I need to know and use those specific contingent values to explicitly define the function, then, and only then, I am committing the TSS fallacy. Let's see. To paint targets around the shots after the shooting, I certainly need to know where the shots are. Only in that way can I paint my target around each shot. So, if I define my function, after the shooting, by saying: my function is to shoot at the following coordinates, x1 .... x2 .... and so on, then I am painting my targets using my post-hoc knowledge of their specific coordinates. Please note that if I had done the same thing before the shooting, then my specification would have been valid, because it would have been a pre-specification, and a pre-specification can use contingent information, because the result has not yet been produced. Let's go to proteins. If I look at the protein and I say: well, my specification is: a 100 AAs protein with the following sequence: ... then I am painting a target, because I am using a sequence that has already come out. That is not correct, and I am committing the TSS fallacy. But if I see that the protein is a very efficient enzyme for a specific biochemical reaction, and using that observation only, and not the specific sequence of the protein (and I can be completely ignorant of it), I define my function as: "a protein that can implement the following enzymatic reaction (observed function) at at least the following level (upper tail based on the observed function efficiency)", then my post-specification is completely valid. I am not committing any TSS fallacy. My target is a real target, and my probabilities are real probabilities. The same idea is expressed in this other statement from my OP:
The difference is that the existence of the green bricks is not something we “paint”: it is an objective property of the wall. And, even if we do use something that we observe post-hoc (the fact that only the green bricks have been shot) to recognize the function post-hoc, we are not using in any way the information about the specific location of each shot to define the function. The function is defined objectively and independently from the contingent information about the shots.
Again, you can find both the points clearly mentioned here. gpuccio
Origenes at #33: So, Miller too (is that Allan Miller at TSZ?) used the "infamous deck of cards fallacy"? You can find what I think about that at my comment #859 in the Ubiquitin thread, in answer to Allan Keith, who used the same argument, in different form, at TSZ. I just quote here my conclusion: "Therefore, the deck of cards fallacy is not only a fallacy: it is infamous, completely wrong and very, very silly and arrogant. It really makes me angry." The rest can be read by all interested in the quoted comment. The "argument" does not deserve any more attention. gpuccio
uncommon_avles: I think you are probably kidding. My compliments for your imagination, however. In case you are serious, you could wonder if it is scientifically possible to distinguish between a green brick and a brown brick with moss. I suppose one close look could be enough. Or are you suggesting that the biochemical laws that explain protein folding and protein biochemical activities evolved in the course of time, after the sequences were found? gpuccio
Nailing the Texas sharpshooter. The term “independent specification” should be preferred to “pre-specification”. To be clear, here by “independent” is meant independent from the outcome. Such an independent specification can be produced before, during or after the outcome; the only demand which must be met is that it is not informed by the outcome. Obviously, the probability of a chance event matching a specification produced by someone before the event equals the probability of it matching a specification produced by someone after the event but without knowledge of the outcome. If the specification is informed by the outcome, then, and only then, we have the Texas sharpshooter fallacy. A well-known case:
Miller: One of the mathematical tricks employed by intelligent design involves taking the present day situation and calculating probabilities that the present would have appeared randomly from events in the past. And the best example I can give is to sit down with four friends, shuffle a deck of 52 cards, and deal them out and keep an exact record of the order in which the cards were dealt. We can then look back and say ‘my goodness, how improbable this is. We can play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ You know what; that’s absolutely correct. Nonetheless, you dealt them out and nonetheless you got the hand that you did.
Miller is ‘surprised’ that the outcome ( “the order in which the cards were dealt”) matches the specification informed by the outcome ( “the exact record of the order in which the cards were dealt” ). But, obviously, there is no warrant for this perplexity at all. Given some accuracy recording the outcome, everyone can perform the following cycle all day long: 1. deal cards. 2. make a “specification” based on the outcome. 3. see that outcome and specification match and express puzzlement. In GPuccio’s wall analogy, we have shots (outcomes) playing a vital role in the discovery of green bricks, and this may be confusing. One may wonder if this involvement means that the shots (the outcomes) inform the specification “green bricks are the target.” It does not mean that. The fact that the shots lead to discovery of green bricks is distinct from hypothesizing that the target is the green bricks. The hypothesis “green bricks are the target” is squarely based on the fact that there are green and brown bricks — note that any alternative hypothesis e.g. “the brown bricks are the target” can also be freely proposed — and not on the outcome. Origenes
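To put a number on the card example discussed above, here is a minimal sketch of the arithmetic; it assumes nothing beyond a standard 52-card deck.

```python
import math

# The card-deal arithmetic behind the discussion above.
orders = math.factorial(52)   # number of possible deal orders of a 52-card deck

# Matching a specification that is independent of the outcome (written before
# the deal, or afterwards but without looking at the deal):
print(f"P(match an outcome-blind specification) = {1 / orders:.3e}")  # ~1.2e-68

# Matching a "specification" that is simply read off the outcome itself:
print("P(match a specification copied from the outcome) = 1.0")
```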
You forgot about the time factor. The green colour of the bricks may be due to moss growth. When the bullets hit the random bricks, cracks were formed which facilitated faster growth of moss on those bricks. Thus even though the bullets hit random bricks, ID believes the bullets actually hit the green bricks. uncommon_avles
Origenes: Of course every analogy has its limitations. The important thing is that the analogy must be appropriate for the aim we have in using it. The aim of my wall-with-green-bricks analogy, as used in the OP, is simply to show, as clearly as possible, that there is a well defined class of post-hoc specifications to which the TSS fallacy does not apply at all: the class of all post-hoc specifications where the specification is not painted arbitrarily, but only observed, recognized and defined, because it is based on some objective property of the observed system. In that sense, both the wall and the protein space are good examples of that. The green bricks are part of the wall even before the shooting; we just acknowledge that they have been shot. In the same way, protein functions are a reality before those proteins come into existence, because biochemical laws determine that some specific AA sequences will be able to do specific things. That connection between sequence and possible function is not painted by us, it is a consequence of definite biochemical laws. So, the wall analogy is perfectly apt to refute the TSS fallacy for the protein space system. Of course, we can refine the analogy to model better what happens in protein space. The main difference is that we know that in biological systems RV exists. IOWs, there is at least one shooter who does not aim, and shoots at random. So, the problem could be better described as follows: knowing that there is some random shooting, can we still compute the probability of having all (or part) of the green bricks shot? This is a trivial problem of distinguishing between a signal and the associated random noise. The random shooting is the noise. Shooting green bricks is the signal. As in all those problems, which are the rule in science, the answer lies in a probabilistic analysis. And we know very well what probabilities say about finding complex functional islands in the protein space, even considering the existence of random variation in the measure that we know (see my table about the probabilistic resources, many times quoted here). gpuccio
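A minimal sketch of the kind of signal-versus-noise computation described in the comment above; all the numbers (brick counts, number of hits) are invented purely for illustration.

```python
import math

# Hypothetical wall: how likely is random shooting alone to put every
# observed hit on a green brick? (All numbers are invented for illustration.)
GREEN_BRICKS = 50
TOTAL_BRICKS = 10_000
OBSERVED_HITS = 30       # suppose all 30 observed hits landed on green bricks

p_green = GREEN_BRICKS / TOTAL_BRICKS     # chance one random shot hits green
p_all_green = p_green ** OBSERVED_HITS    # chance all 30 independent shots do

print(f"P(one random shot on green)     = {p_green}")
print(f"P(all {OBSERVED_HITS} random shots on green) = {p_all_green:.2e}")
print(f"equivalent improbability        = ~{-math.log2(p_all_green):.0f} bits")
```

With these toy numbers the "all hits on green" pattern already corresponds to well over 200 bits of improbability under the random-shooting null, which is the shape of the argument, not a biological figure.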
GPuccio @29 Thank you for the explanation. One short comment:
GPuccio: Of course, it is possible that non functional spaces have been shot too. Indeed, RV often shoots non functional sequences. So, our situation is more like a lot of green bricks having been shot, and of course also many brown bricks.
Every analogy has its limitations, but the wall analogy is pretty good. Perhaps, in this analogy, it is also true that most of the brown bricks that get hit are in the proximity of green bricks. First, because attempts at promoting total junk-DNA to actual proteins are relatively rare compared to "normal" random mutations. Secondly, because attempts to traverse the vast sea of non-functionality are beaten down by natural selection. If so, then this would bolster your suggestion that the green bricks are the target. Envisioning this scenario, I can think of one alternative explanation instead of aiming: the green bricks contain magnets, strongly influencing the course of the bullets. :) Origenes
Origenes: "The ‘green bricks’, I imagine, stand for functional islands in a sea of non-functionality (‘brown bricks’)" Yes. In particular, in my argument based on protein sequences, they stand for the islands of the functional sequences we observe in biological beings. Each of those targets has been "shot" in evolutionary history. Of course, it is possible that non functional spaces have been shot too. Indeed, RV often shoots non functional sequences. So, our situation is more like a lot of green bricks having been shot, and of course also many brown bricks. Negative selection eliminates the non functional results. But the complex functional islands in the search space are so tiny that not even one of them could have been found in the available evolutionary time. If you look at the Table in my OP about the probabilistic resources of our biological systems on our planet, you can see that the whole bacterial system, in the whole life span of the earth, could never find, with a very generous computation and including 5 sigma improbability, a functional island requiring 37 specific AAs. This is in perfect accord with the results of the rugged landscape experiment, which puts a rather simple result out of range for RV + NS: the retrieval of a partially damaged protein, requiring 35 specific AA substitutions. For comparison, the familiar alpha and beta chains of ATP synthase show a conservation between E. coli and humans of 630 AAs, and as we have seen they form an extremely conserved, and practically unique, irreducibly complex structure which is necessary to synthesize ATP. If 35 specific AAs is the theoretical edge of our biological planet, how much smaller is a functional island of 630 specific AAs? Our biological shooter must certainly use the Hubble space telescope to find his targets! :) And there is more: the realistic, empirical edge is much less than the theoretical edge. We know that in all observed cases, RV can find only extremely simple starting functions: one or two AAs. Something more complex is probably still acceptable. Personally, I would bet that the real edge is, at best, around 5 specific AAs. Certainly, it is much lower than the theoretical, and extremely unrealistic, edge of 35-37 AAs. Almost all existing functional proteins have a functional complexity that is well beyond the top edge of 37 AAs (160 bits). As we have seen, most proteins and domains easily reach at least 200 bits of functional complexity, and a lot of them exhibit hundreds or even thousands of bits of specific functional complexity. For one protein. And of course, in irreducibly complex multi-protein systems, and in all regulatory networks, the complexity of the individual components multiplies. Thousands of bits are the rule for those functions to work properly. The biological shooter, or shooters, is well beyond sharp: and he is genuinely sharp, not like his counterpart in the true fallacy! :) gpuccio
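[Editor's note: for readers who want to see where the "37 specific AAs (160 bits)" conversion comes from, here is a rough back-of-envelope sketch. It simply assumes log2(20) ≈ 4.32 bits for each fully specified amino acid position; gpuccio's actual estimates rest on conservation bitscores, so this is only the crude upper-bound arithmetic.]

```python
from math import log2

BITS_PER_AA = log2(20)   # ~4.32 bits for a fully specified amino acid position

def functional_bits(n_specific_aa):
    """Bits of functional information for n fully specified AA positions,
    under the simplifying assumption that each position must be exactly one
    of the 20 amino acids. Real conservation scores are computed per position
    from substitution matrices, so this is only a rough upper-bound sketch."""
    return n_specific_aa * BITS_PER_AA

for n in (35, 37, 630):
    print(f"{n:>4} specific AAs  ->  about {functional_bits(n):6.0f} bits")

# Approximate output:
#   35 specific AAs  ->  about    151 bits
#   37 specific AAs  ->  about    160 bits
#  630 specific AAs  ->  about   2723 bits
```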
— Texas Sharp Shooter Fallacy —
GPuccio: … we cannot look at the wall before the shooting. No pre-specification. After the shooting, we go to the wall. This time, however, we don’t paint anything. But we observe that the wall is made of bricks, small bricks. Almost all the bricks are brown. But there are a few that are green. Just a few. And they are randomly distributed in the wall.
We observe (and understand) that in biological sequence space we do not have a uniform ‘wall’ where every spot has the same properties. Indeed, surely, not every DNA sequence is functional. In this wall-analogy, if each spot represents a different DNA sequence, then ‘green bricks’ are indeed an exception to the rule.
“… however many ways there may be of being alive, it is certain that there are vastly more ways of being dead, or rather not alive.” — Richard Dawkins, The Blind Watchmaker
Post-hoc or not, it is perfectly clear that not every spot on the wall is creditable with a bullseye.
GPuccio: We also observe that all the 100 shots have hit green bricks. No brown brick has been hit. Then we infer aiming. Of course, the inference is correct. No TSS fallacy here.
The ‘green bricks’, I imagine, stand for functional islands in a sea of non-functionality (‘brown bricks’). And this is exactly what we find in nature: organisms that possess functional sequences (green bricks) in a sea of non-functionality (brown bricks). And if organisms are just clumps of matter, neither interested in functionality nor in being alive, driven by random mutations, then the question "what is the chance that only ‘green bricks’ are hit by random shooting?" is fully justified. Origenes
harry: Yes, they deserve it. Not individually: I think we can always respect persons, and some of them are certainly in good faith (OK, not all of them! :) ). But the ideology itself does not deserve great respect at all. And, respect or not, it must be falsified. gpuccio
gpuccio @25, Thanks for your thoughtful response. I still think we take them way too seriously. ;o) They have resorted to believing in the existence of a virtually infinite number of flying spaghetti monster universes -- without any evidentiary basis whatsoever for doing so -- in order to explain away the fine-tuning of this Universe for life. Some universe had to win the fine-tuned-for-life lottery, right? Yeah. Right. And aren't we lucky! Of course, everybody must take the actual existence of all those other universes on faith -- a huge, blind, irrational faith. They deserve to be mocked. harry
harry: I agree with all that you say, of course. Except maybe for the first statement: "I think we take the arguments of the opponents of ID more seriously than they deserve to be taken. By doing so we give them a credibility they do not deserve at all." You know, of course, that the vast majority of scientists in the world do accept neo-darwinism as an explanation for biological realities that is, according to them, practically beyond any doubt. I agree that there is a strong ideology behind that, more or less conscious, and maybe part of it is "defending atheism", or at least the idea that science is the only true repository of truth (scientism) and that it excludes a priori some forms of explanation (reductionism). We also know that not only those who "defend atheism", but also a lot of religious people, accept neo-darwinism as proven truth. The only thing about which I don't agree with you is that by "taking the arguments of the opponents of ID more seriously than they deserve to be taken" we "give them a credibility they do not deserve at all". We take their arguments very seriously because they are wrong, and what is wrong must be shown to be wrong, especially if everybody believes it to be true. And they don't need our attention to be believed. They are already believed. At most, they need us to remain silent, so that they can go on being believed without any disturbance. Look, I am not interested in their personal beliefs. I don't care if my interlocutors are atheists or religious people. I just think that what is wrong should be recognized as wrong. Especially in science, where some objectivity, at least in sharing and evaluating facts, should be expected. So, I take their arguments very seriously, as far as they really are arguments, because some of them do not even deserve to be called that. I take their arguments very seriously in order to demonstrate that they are wrong arguments. gpuccio
I think we take the arguments of the opponents of ID more seriously than they deserve to be taken. By doing so we give them a credibility they do not deserve at all. If you see a gymnasium floor covered with Scrabble pieces, and they are arranged such that they spell out an interesting mystery novel, you assume that they were arranged by an intelligent agent. But what if nobody knows how the Scrabble pieces got there? What if it simply can't be proven empirically how they came to be arranged that way? You would have to go with what is the most likely explanation. If somebody insisted that boxes and boxes of Scrabble pieces were dumped out on the gym floor and the pieces just happened to land that way, even if you couldn't prove that is not what happened, that scenario is so unlikely that it is simply irrational to assume that that is what happened. One would have to suspect that the advocates of such an explanation had some agenda or another motivating them. Volumes of digital information in the coding regions of DNA are the assembly instructions for intricate cellular machinery, the integration of which gives rise to functional complexity beyond anything the best minds of modern science know how to build from scratch. There is no hard evidence whatsoever that indicates that chance combined with the laws of physics can mindlessly and accidentally compose volumes of such information. There is no good reason to assume that is the case. That immense quantity of functional information is the Scrabble piece mystery novel on the gymnasium floor. We don't know how it was composed, but it is simply irrational to assume it happened mindlessly and accidentally. Those who claim it happened that way have an agenda: they are defending atheism instead of engaging in relentlessly objective, true science, which requires that they just admit that the only known source of massive quantities of extremely precise, functionally complex, digitally stored information is an intellect. harry
OLV: Thank you! :) I am not exactly a professor, but as a medical doctor I have worked in a university environment, and done some teaching. I still do that in some measure. gpuccio
gpuccio, You have written a whole series of very thorough technical articles that could be compiled together into a serious scientific textbook that should replace the pseudoscientific nonsense used to teach biology in many schools. BTW, you write like a professor. Do you teach at a university? Just curious. Thanks. OLV
Origenes:
A fact that must be truly disheartening for the dedicated Darwinian … an infinite amount of monkeys banging away on typewriters produced a Shakespearian sonnet, but, alas, all is in vain … because it did not fit the Boeing 747 maintenance manual that came along — if you get my drift.
:) :) :) gpuccio
GPuccio @19
GPuccio: To be even more precise, I would say that the search space, is always the same, because it is the search space of all possible sequences, while functional islands (the target space) vary according to the contexts.
Point taken! True story: I attempted to correct my mistake 6 minutes after I posted #18, but, woe is me, I was already too late.
GPuccio: But, again, even the most wonderful enzyme could be completely useless, indeed often deleterious, in many specific cell contexts. So, functional islands can be extremely specific and functional in a general sense, and yet not be naturally selectable in most contexts.
A fact that must be truly disheartening for the dedicated Darwinian … an infinite amount of monkeys banging away on typewriters produced a Shakespearian sonnet, but, alas, all is in vain … because it did not fit the Boeing 747 maintenance manual that came along — if you get my drift.
GPuccio:… the Keefe and Szostak paper. They lower the concept of function to the lowest level possible, a simple and weak biochemical activity.
We have to be very aware of the ‘darwinian’ use of the term ‘function’, and you are definitely correct in your insistence that a function is truly a function only when it is naturally selectable. Thank you for your response. For me, this issue has received enough attention; there are bigger fish to fry. Origenes
Origenes: Yes, very good thoughts. I agree with all that you say. To be even more precise, I would say that the search space is always the same, because it is the search space of all possible sequences, while functional islands (the target space) vary according to the contexts. Let's also remember that we are discussing protein sequences here, not morphological traits. I would say that some functional islands are more "objective": for example, enzymes are certainly amazing in themselves, because what they do at the biochemical level is amazing. But, again, even the most wonderful enzyme could be completely useless, indeed often deleterious, in many specific cell contexts. So, a functional island can be extremely specific and functional in a general sense, and yet not be naturally selectable in most contexts. You say:
Correct me if I am wrong, but it seems to me that what they envision is a (non-existent) ‘neutral organism’ that is open to any new function, no matter what it is. Such a ‘neutral organism’ can evolve in any direction while (somehow) retaining its ‘neutrality’. But logic informs us that this can only be an incoherent fantasy.
You are not wrong at all! Not only do they imagine some mythic biological context where anything is possible, they also imagine mythic functions that can evolve toward incredible achievements, while all the evidence is against that. When their purpose is to support their silly propaganda, then any possible function becomes a treasure trove of miracles. Natural selection and its constraints are quickly forgotten, and the emphasis is only on some generic definition of function that can be cheaply bought, even if with some additional trick of directed evolution. That is the case of the Keefe and Szostak paper. They lower the concept of function to the lowest level possible, a simple and weak biochemical activity. Of course, they choose ATP, because that will give some grandeur to their achievement. So they just carry out a very simple engineering of ATP binding from random libraries, and everybody is ready to claim that they have demonstrated how functions evolve by naturally selectable ladders! Propaganda, and nothing else. Of course they find weak ATP binding, as they would have found weak binding to almost any biochemical molecule, if they had searched for it. The simple truth is that weak binding is a completely useless function. But, of course, ATP binding in itself is a completely useless function, even if strong, as clearly shown by the lack of any biologically useful function in the engineered protein, the one with strong ATP binding. Because, of course, a protein that just binds ATP is useless. It can only blindly subtract ATP from the cellular environment. Not a good idea. Functional proteins bind ATP because ATP is a repository of biochemical energy. They bind ATP and work as ATPases:
enzymes that catalyze the decomposition of ATP into ADP and a free phosphate ion. This dephosphorylation reaction releases energy, which the enzyme (in most cases) harnesses to drive other chemical reactions that would not otherwise occur.
(From Wikipedia) I remember that some further paper tried to look at some ATPase function in a further directed form of Szostak's protein, and found some minimal form of it. Unfortunately, I don't remember the reference. Again, nothing really useful at the biological level. Because even an ATPase activity is essentially useless, if it does not "harness the released energy to drive other chemical reactions that would not otherwise occur". So, we can, by directed evolution, build some protein that can bind ATP and, at a minimal level, convert it again to ADP, without any other associated result, aimlessly destroying the hard work done by ATP synthase. How funny! :) I don't think that NS is interested in any of that. So, let's leave neo-darwinists in their self-made paradise where anything can happen. We are interested only in what can happen in reality, and possibly does happen. gpuccio
GPuccio @17
GPuccio: There are really few functional islands that correspond to that in each context.
To state the obvious: each organism has its own tailor-made search space with its own tailor-made functional islands. In other words, there is no such thing as a universal search space with functional islands that applies to all organisms. Put simply: for each organism, only a tiny subset of all biological functions can be integrated; e.g. a turtle has no use for wings and a hummingbird has no use for a (turtle) shell. But this obvious fact is apparently being ignored by Darwinians. Correct me if I am wrong, but it seems to me that what they envision is a (non-existent) ‘neutral organism’ that is open to any new function, no matter what it is. Such a ‘neutral organism’ can evolve in any direction while (somehow) retaining its ‘neutrality’. But logic informs us that this can only be an incoherent fantasy.
GPuccio: In a system that already has some high complexity, like any living cell, the number of functions that can be immediately integrated in what already exists, is certainly strongly constrained.
Another important point: the higher the complexity, the larger the search space and the fewer the functional islands. One could say that, as complexity increases, the organism commits itself to an ever smaller domain of what is functional and what is not. Origenes
Origenes: Thank you! :) My idea is simple: any functional island in the sequence space (of proteins, if we are discussing protein function in particular) which, if reached by some living organism, can give, in that moment and in that context, a reproductive advantage to that specific organism is a naturally selectable island of function. That simply means that, once RV in the genome of that organism in some way reaches that specific island (with all that is implied: the sequence being transcribed and translated and possibly regulated appropriately, and so on), then NS can act on that new trait: it can expand and fix it in the population (or at least give a better probability of that), and if it is more or less fixed, negative selection can act on it to defend it from further variation. You say:
I think that chances are slim.
And I absolutely agree. There are really few functional islands that correspond to that in each context. Most of them are extremely simple (very big holes), as is obvious in the few cases of microevolution that we know of (antibiotic resistance, and so on), where the starting function has a complexity of one or two amino acids, and the optimization by NS can only add a few AAs. And even those cases need an extreme environmental pressure to really work. On the contrary, almost all the solutions that we observe in the existing proteomes are extremely complex. Another important point is the complexity of the already existing organism. As I have said many times, the more a structure is functionally complex, the less it will tolerate random modifications. Moreover, in a complex system like a living cell, only some very specific solutions, highly engineered just from the beginning, will really be able to confer a reproductive advantage by adding some new and original function. That's why naturally selectable functional islands do exist in a complex context, but they are usually extremely complex themselves. In a sense, it's Berlinski's old argument: if you want to change a cow into a whale, you have to work a lot, and very intelligently. The known cases of microevolution, like antibiotic resistance, are exceptions, but there the new function is essentially a simple degradation of an existing structure, functional itself, that gives advantage because of an extreme constraint (the antibiotic in the environment). It's Behe's idea of burning bridges. In some cases, as many times discussed, some small variation in the active site of an already existing and highly complex enzyme can shift the affinity towards a different substrate, and again, if there is a very strong environmental pressure, that shift can be naturally selected. That could be the case for nylonase, as already discussed elsewhere. But the new function is always extremely simple, one or two AAs. Maybe we can occasionally find some case with three. I don't know. But we are more or less at the edge of what RV can realistically do, there. Most specific solutions in existing proteomes are highly complex. My favourite examples, the alpha and beta chains of ATP synthase, with hundreds of specific AA positions that make an extremely sophisticated and unique 3D functional structure possible, are amazing, but they are no rare exception. Indeed, many other proteins have a conservation bitscore over hundreds of millions of years that is much greater. Of course, the amazing aspect in the conservation of those two chains is their extremely old origin. gpuccio
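[Editor's note: to illustrate why targets of one or two specific AAs are easily reachable by RV while targets of 35 specific AAs are not, here is a crude sketch under a uniform model. The trial count of 1e40 is a placeholder of my own for the probabilistic resources, not the figure from gpuccio's table.]

```python
from math import log2

def target_probability(k_specific_aa):
    """Probability that one random sequence matches k fully specified AA
    positions (crude uniform model; ignores chemistry and codon bias)."""
    return 20.0 ** (-k_specific_aa)

# Assumed total number of random sequences ever "tried"; illustrative only.
attempts = 1e40

for k in (1, 2, 5, 35):
    p = target_probability(k)
    expected_hits = attempts * p
    print(f"k = {k:>2} specific AAs: p = {p:.2e} "
          f"(~{k * log2(20):.0f} bits), expected hits with 1e40 tries = {expected_hits:.2e}")
```

Under these toy assumptions, the one- and two-AA targets are hit astronomically often, while the 35-AA target is expected to be hit far less than once.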
tribune7: Thank you! :) I think that there are no "certainties" in science, but there can certainly be very substantial theories. When I observe an effect where the probability of the null hypothesis is < 10^-16 (the lowest p value that R explicitly computes), I am not really worried that it is not a certainty, just to be epistemologically consistent. We humans are finite creatures, and we just need "relative" certainties. Which, even if "relative", mean a lot to us. The inference of design for biological objects is such a "certain" thing for me. I have absolutely no doubts about it. But of course, it is still a theory, and not a fact! :) I am not really interested in the Young Earth debate, maybe because here in Italy nobody believes in a young earth (at least, nobody that I am aware of). For me it's rather natural to believe that the age of the earth is what it is considered to be by science, but of course I respect the commitments linked to a personal faith. IMO, however, we should not try to explicitly build ad hoc arguments to defend our religious beliefs. If they are true, they will be defended by truth itself. gpuccio
Dean_from_Ohio: Thank you! :) I am no expert of shotguns, so I am afraid that my search for an appropriate public domain image was not so sharply aimed! :) gpuccio
Thank you for yet another excellent post GPuccio. A very pleasant read and so many interesting points. I would like to comment on one minor issue, which, perhaps, deserves more attention.
GPuccio: Of course, we don’t know exactly how many functional islands exist in the protein space, even restricting the concept of function to what was said above. Neo-darwinists hope that there are a lot of them. I think there are many, but not so many.
What is “functional” is defined only by the organism itself. Something is functional only if it fits the need of an organism. A simple example: one cannot say “a layer of dense underfur and an outer layer of transparent guard hairs is functional.” Sure, it is indeed functional to the polar bear, but it is certainly not functional to most other creatures. The same goes for e.g. “programmed cell death protein 1”, it fits amazingly many organisms, but it is a horror thought for the vast majority of organisms on earth; certainly every prokaryote.
GPuccio: But the problem, again, is drastically redimensioned if we consider that not all functional islands will do. Going back to point 1, we need naturally selectable islands. And what can be naturally selected is much less than what can potentially be functional. A naturally selectable island of function must be able to give a reproductive advantage.
And what this means is that the function must fit. What is the chance that it does? Given that a random walk by a string of junk-DNA finds a “potential function” (and the means to implement it!), the question arises: IS IT FUNCTIONAL FOR THE ORGANISM? I think that chances are slim. I would like a probability number here. Origenes
Great post GP. One thing that strikes me regarding the Texas Sharpshooter is why some would think it more reasonable to assume the target was painted after the fact than that it was something at which he aimed. If someone is attempting to use it as an argument against certainty, great, but would they also respect arguments against certainty relating to Darwinism? The TSS as used strikes me as very similar to arguments used by Young Earthers regarding the calibration of radioactive decay used to measure long timespans. It certainly is not without merit and does make an important point, namely that dogma and science don't go together, but it is not any kind of rebuttal. At least the Young Earther can take refuge in the safety zone of faith. A Darwinist can't, unless he admits his belief is a faith. tribune7
vividbleau: Thank you, Vivid. It was a tiresome task, but it's beautiful to know that it is appreciated! :) gpuccio
Thanks gp. Incredible effort. Someone said jokingly that it is too long, but just because something is long does not make it too long. The word that comes to mind is "thorough": lots of things to cover, and hopefully you will get some good feedback from your interlocutors. Regardless, thanks for putting in all the time it took to put this together. Vivid vividbleau
This is pertinent, so I repost it here from the ubiquitin thread: Corneel at TSZ (about my new OP): “Will that be reposted here at TSZ?” I have posted it here. Anyone can post it, or parts of it, at TSZ. There is no copyright, it is public domain. gpuccio
DATCG: Fun it should be. It has been a little tiring to write about all those points, but I hope that will help to avoid distractions and partial arguments. gpuccio
bill cole: Thank you. I am sure you will give a precious contribution! :) gpuccio
Gpuccio, This should be fun, look forward to seeing the discussion :) DATCG
gpuccio Great arguments. I look forward to a lively discussion. bill cole
LocalMinimum: Thank you. :) "or if they’ll just settle into a comfortable orbit about their respective local minima." Maybe that's what they are already doing... gpuccio
Beautifully wrought. I'd even take your name off and replace it with my own, if Google weren't so terribly clever. Now to see in which larger holes the politely dissenting interlocutors will land; and to see if they will, from there, travel to the wildtype; or if they'll just settle into a comfortable orbit about their respective local minima. LocalMinimum
Nonlin.org: Well, thank you for the comment. :) I think we agree on the conclusions, even if not necessarily on the details of the procedure. I believe that discussing probabilities is important: in all empirical science a probabilistic analysis is fundamental to distinguish between signal and random noise, for example. The same problem holds for the design inference. Design can in a sense be considered a "law": in the sense that it connects subjective representations and subjective experiences to an outer result. However, design is not a law in the sense of being a predictable regularity. Its cognitive aspect is based on the understanding of meanings, including laws, but its intentional part is certainly more unpredictable. Random configurations do exist in reality, and the only way we can describe them is through probabilistic models. Finally, let's say that if it looks designed, we must certainly seriously consider that it could really be designed. But there are cases of things that look designed and are not designed. Therefore, we need rules to decide in individual cases, and ID theory is about those rules. gpuccio
No doubt you're right, especially about this OP being too long :) Let me try a shortcut: why are we even discussing probabilities? The smarter Darwinistas do understand that there is no way we're looking at random phenomena, even for something as simple as sand dunes, let alone anything in biology. But they counter that with "necessity", aka "laws of nature". Of course, Dembski's filter tries to distinguish between Regularity and Design, but this is impossible, for what is Design if not Regularity? Look at any simple designed object: its shape is Regular and its code is just a set of Rules of behavior that the Designer creates. Then how do we demonstrate design? One solution is "First Cause": the Designer is He that created the laws of nature. Another is to just observe that "natural selection" fails: http://nonlin.org/natural-selection/ and "evolution" fails: http://nonlin.org/evolution/ Finally, if it looks designed, the default assumption should be that it is designed. Complex machines such as the circulatory system in many organisms cannot be found in the nonliving, with one exception: those designed by humans. So-called "convergent evolution", the design similarity between supposedly unrelated organisms, also confirms the 'common design' hypothesis. Therefore, the default assumption should be that life is designed: https://en.wikipedia.org/wiki/Biosignature and https://en.wikipedia.org/wiki/Astrobiology . Until someone proves otherwise, just quit the stupidity of: "it looks designed but it is not designed". Nonlin.org
