Uncommon Descent Serving The Intelligent Design Community

Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation.

Categories: Intelligent Design

I have recently commented, on another thread, about a paper that (very correctly) describes cells as dynamic, far from equilibrium systems, rather than as “traditional” machines.

That is true. But, of course, the cell implements the same functions as complex machines do, and much more. My simple point is that, to do that, you need much greater functional complexity than you need to realize a conventional machine.

IOWs, dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. I have commented on those ideas in the mentioned thread, at posts #5 and #8, and I have quoted at posts #11 and #12 a couple of interesting and pertinent papers, introducing the important concept of robustness: the ability to achieve reliable functional results in spite of random noise and disturbing variation.

In this OP, I would like to present in some detail a very interesting system that shows very well what we can understand, at present, of this kind of amazing system.

The system I will discuss here is an old friend: it is the NF-kB system of transcription factors (nuclear factor kappa-light-chain-enhancer of activated B cells). We are speaking, therefore, of transcription regulation, a very complex topic that I have already discussed in some depth here:

I will briefly recall here that transcription regulation is the very complex process that allows cells to be completely different while using the same genomic information: IOWs, each type of cell “reads” the genes in the common genome differently, and that is what allows the different types of cell differentiation and the different cell responses within the same cell type.

Transcription regulation relies on many different levels of control, which are summarized in the OP quoted above, but a key role is certainly played by Transcription Factors (TFs), proteins that bind DNA and act as activators or inhibitors of transcription at specific sites.

TFs are a fascinating class of proteins. There are a lot of them (1600 – 2000 in humans, almost 10% of all proteins), and they are usually medium sized proteins, about 500 AA long, containing at least one highly conserved domain, the DNA binding domain (DBD), and other, often less understood, functional components.

I quote again here a recent review about human TFs:

The Human Transcription Factors

The NF-kB system is a system of TFs. I have discussed it in some detail in the discussion following the Ubiquitin thread, but I will describe it in a more systematic way here.

In general, I will refer a lot to this very recent paper about it:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The NF-kB system relies essentially on 5 different TFs (see Fig. 1 A in the paper):

  1. RelA (551 AAs)
  2. RelB (579 AAs)
  3. c-Rel (619 AAs)
  4. p105/p50 (968 AAs)
  5. p100/p52 (900 AAs)

Those 5 TFs work forming dimers, homodimers or heterodimers, for a total of 15 possible combinations, all of which have been found to work in the cell, even if some of them are much more common.
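Just as a sanity check of that count, the 15 combinations can be enumerated directly (a minimal sketch in Python; the subunit list simply restates the enumeration above):

```python
from itertools import combinations_with_replacement

# The five NF-kB subunits listed above (p105/p100 act through their
# processed forms p50/p52, but the dimer count is the same either way).
subunits = ["RelA", "RelB", "c-Rel", "p50", "p52"]

# Unordered pairs with repetition: 5 homodimers + C(5,2) = 10 heterodimers.
dimers = list(combinations_with_replacement(subunits, 2))
print(len(dimers))  # 15
```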

Then there are at least 4 inhibitor proteins, collectively called IkBs.

The mechanism is apparently simple enough. The dimers are inhibited by IkBs and therefore they remain in the cytoplasm in inactive form.

When an appropriate signal arrives at the cell and is received by a membrane receptor, the inhibitor (the IkB molecule) is phosphorylated, then ubiquitinated and detached from the complex. This is done by a protein complex called IKK. The free dimer can then migrate to the nucleus, where it can act as a TF, binding DNA.

This is the canonical activation pathway, summarized in Fig. 1. There is also a non canonical activation pathway, that we will not discuss for the moment.


Mechanism of NF-κB action. In this figure, the NF-κB heterodimer consisting of Rel and p50 proteins is used as an example. While in an inactivated state, NF-κB is located in the cytosol complexed with the inhibitory protein IκBα. Through the intermediacy of integral membrane receptors, a variety of extracellular signals can activate the enzyme IκB kinase (IKK). IKK, in turn, phosphorylates the IκBα protein, which results in ubiquitination, dissociation of IκBα from NF-κB, and eventual degradation of IκBα by the proteasome. The activated NF-κB is then translocated into the nucleus where it binds to specific sequences of DNA called response elements (RE). The DNA/NF-κB complex then recruits other proteins such as coactivators and RNA polymerase, which transcribe downstream DNA into mRNA. In turn, mRNA is translated into protein, resulting in a change of cell function.

Attribution: Boghog2 at English Wikipedia [Public domain]

Now, the purpose of this OP is to show, in greater detail, how this mechanism, apparently moderately simple, is indeed extremely complex and dynamic. Let’s see.

The stimuli.

First of all, we must understand what stimuli, arriving at the cell membrane, are capable of activating the NF-kB system. IOWs, what signals work as inputs.

The main concept is: the NF-kB system is a central pathway activated by many stimuli:

  1. Inflammation
  2. Stress
  3. Free radicals
  4. Infections
  5. Radiation
  6. Immune stimulation

IOWs, a wide variety of aggressive stimuli can activate the system.

The extracellular signal usually arrives at the cell through specific cytokines, for example TNF or IL1, or through pathogen-associated molecules, like bacterial lipopolysaccharides (LPS). Of course there are different and specific membrane receptors, in particular IL-1R (for IL1), TNF-R (for TNF), and many TLRs (Toll-like receptors, for pathogen-associated structures). A special kind of activation is implemented, in B and T lymphocytes, by the immune activation of the specific receptors for antigen epitopes (the B cell receptor, BCR, and the T cell receptor, TCR).

The process through which the activated receptor can activate the NF-kB dimer is rather complex: it involves, in the canonical pathway, a macromolecular complex called IKK (IkB kinase) complex, comprising two catalytic kinase subunits (IKKa and IKKb) and a regulatory protein (IKKg/NEMO), and involving in multiple and complex ways the ubiquitin system. The non canonical pathway is a variation of that. Finally, a specific protein complex (CBM complex or CBM signalosome) mediates the transmission from the immune BCR or TCR to the canonical pathway. See Fig. 2:

From: NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Increased-activity-of-the-CARMA1-BCL10-MALT1-signalosome-drives-constitutive-NF-kB_fig2_324089636 [accessed 10 Jul, 2019]
Figure 3 – NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy
available via license: Creative Commons Attribution 4.0 International

I will not go into further details about this part, but those interested can have a look at this very good paper:

TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

In particular, Figs. 1, 2, and 3.

In the end, as a result of the activation process, the IkB inhibitor is degraded by the ubiquitin system, and the NF-kB dimer is free to migrate to the nucleus.

An important concept is that this is a “rapid-acting” response system: the dimers are already present, in inactive form, in the cytoplasm, and need not be synthesized de novo, so the system is ready to respond to the activating signal.

The response.

But what is the cellular response?

Again, there are multiple and complex possible responses.

Essentially, this system is a major regulator of innate and adaptive immune responses. As such, it has a central role in the regulation of inflammation, in immunity, in autoimmune processes, and in cancer.

Moreover, the NF-kB system is rather ubiquitous, and is present and active in many different cell types. And, as we have seen, it can be activated by different stimuli, in different ways.

So, the important point is that the response to activation must be (at least):

  1. Lineage-specific
  2. Stimulus-specific

IOWs, different cells must be able to respond differently, and each cell type must respond differently to different stimuli. That gives a wide range of possible gene expression patterns at the transcription level.

The following paper is a good review of the topic:

Selectivity of the NF-κB Response

For example, IL2 is induced by NF-kB activation in T cells, but not in B cells (lineage-specific response). Moreover, specific cell types can undergo specific, and often different, cell destinies after NF-kB activation: for example, NF-kB is strongly involved in the control and regulation of T and B cell development.

From:

30 years of NF-κB: a blossoming of relevance to human pathobiology

“B and T lymphocytes induce NF-κB in adaptive immune responses through the CARD11:Bcl10:MALT1 (CBM) complex (Hayden and Ghosh, 2008). Newly expressed genes promote lymphocyte proliferation and specific immune functions including antibody production by B cells and the generation of cytokines and other anti-pathogen responses by T cells.”

And, in the same cell type, certain promoters regulated by NF-kB require additional signaling (for example, in human dendritic cells the promoters for Il6, Il12b, and MCP-1 require additional p38-dependent histone phosphorylation to be activated), while others can be activated directly (stimulus-specific response).

So, to sum up:

  1. A variety of stimuli can activate the system in different ways
  2. The system itself has its complexity (different dimers)
  3. The response can be widely different, according to the cell type where it happens, and to the type of stimuli that have activated the system, and probably according to other complex variables.
  4. The possible responses include a wide range of regulations of inflammation, of the immune system, of cell specifications or modifications, and so on.

How does it work?

So, what do we know about the working of such a system?

I will ignore, for the moment, the many complexities of the activation pathways, both canonical and non canonical, the role of cytokines and receptors and IKK complexes, and the many facets of NEMO and of the involvement of the ubiquitin system.

For simplicity, we will start with the activated system: the IkB inhibitor has been released from the inactive complex in the cytoplasm, and some form of NF-kB dimer is ready to migrate to the nucleus.

Let’s remember that the purpose of this OP is to show that the system works as a dynamic, far from equilibrium system, rather than as a “traditional” machine. And that such a way to work is an even more amazing example of design and functional complexity.

To do that, I will rely mainly on the recent paper quoted at the beginning:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The paper is essentially about the NF-kB Target Selection Puzzle. IOWs, it tries to analyze what we know about the specificity of the response. How are specific patterns of transcription achieved after the activation of the system? What mechanisms allow the selection of the right genes to be transcribed (the targets) to implement the specific patterns according to cell type, context, and type of stimuli?

A “traditional” view of the system as a machine would try to establish rather fixed connections. For example, some type of dimer is connected to specific stimuli, and evokes specific gene patterns. Or some other components modulate the effect of NF-kB, generating diversification and specificity of the response.

Well, those ideas are not completely wrong. In a sense, the system does work also that way. Dimer specificity has a role. Other components have a role. In a sense, but only in a sense, the system works as though it were a traditional machine, and uses some of the mechanisms that we find in the concept of a traditional biological machine.

But that is only a tiny part of the real thing.

The real thing is that the system really works as a dynamic, far from equilibrium system, harnessing huge random/stochastic components to achieve robustness and complexity and flexibility of behavior in spite of all those non finalistic parts.

Let’s see how that happens, at least for the limited understanding we have of it. It is important to consider that this is a system that has been studied a lot, for decades, because of its central role in so many physiological and pathological contexts, and so we know many things. But still, our understanding is very limited, as you will see.

So, let’s go back to the paper. I will try to summarize as simply as possible the main concepts. Anyone who is really interested can refer to the paper itself.

Essentially, the paper analyzes three important and different aspects that contribute to the selection of targets at the genomic level by our TFs (IOWs, our NF-kB dimers, ready to migrate to the nucleus). As the title itself summarizes, they are:

  1. Abundance
  2. Affinity
  3. Binding site availability

1. Abundance

Abundance here refers to two different variables: the abundance of NF-kB binding sites in the genome and the abundance of nucleus-localized NF-kB dimers. Let’s consider them separately.

1a) Abundance of NF-kB Binding Sites in the genome:

It is well known that TFs bind specific sites in the genome. For NF-kB TFs, the following consensus kB site pattern has been found:

 5′-GGGRNWYYCC-3′

where R, W, Y, and N, respectively denote purine, adenine or thymine, pyrimidine, and any nucleotide.

That simply means that any sequence corresponding to that pattern in the genome can, in principle, bind NF-kB dimers.
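To illustrate how such a degenerate pattern can be searched for in practice, here is a minimal Python sketch (the IUPAC-to-regex mapping is standard; the test sequence is invented for illustration):

```python
import re

# IUPAC degeneracy codes used in the kB consensus 5'-GGGRNWYYCC-3'
IUPAC = {"G": "G", "C": "C", "A": "A", "T": "T",
         "R": "[AG]",    # purine
         "W": "[AT]",    # adenine or thymine
         "Y": "[CT]",    # pyrimidine
         "N": "[ACGT]"}  # any nucleotide

def consensus_to_regex(consensus: str) -> re.Pattern:
    return re.compile("".join(IUPAC[b] for b in consensus))

KB_SITE = consensus_to_regex("GGGRNWYYCC")

# Toy sequence with one embedded match (GGGAATTTCC fits the pattern).
# A genome-scale scan would also check the reverse-complement strand.
seq = "ACGTGGGAATTTCCAGTC"
print([m.start() for m in KB_SITE.finditer(seq)])  # [4]
```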

So the problem is: how many such sequences do exist in the human genome?

Well, a study based on RelA has estimated about 10^4 consensus sequences in the whole genome, but as NF-kB dimers seem to bind even incomplete consensus sites, the total number of potential binding sites could be nearer to 10^6.

1b) Abundance of Nucleus-Localized NF-kB Dimers:

An estimate of the abundance of dimers in the nucleus after activation of the system is that about 1.5 × 10^5 molecules can be found, but again that is derived from studies of RelA only. Moreover, the number of molecules and the type of dimer can probably vary greatly according to cell type.

So the crucial variable, the ratio between binding sites and available dimers, which could help us understand the degree of site saturation in the nucleus, remains rather undetermined, and it seems very likely that it varies a lot in different circumstances.
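Putting the two estimates quoted above side by side shows why the saturation question stays open (simple arithmetic, using only the figures from the paper):

```python
dimers = 1.5e5                    # nucleus-localized dimers (RelA-based estimate)
sites_low, sites_high = 1e4, 1e6  # consensus-only vs. including partial sites

print(dimers / sites_low)   # 15.0 -> dimers could saturate the sites
print(dimers / sites_high)  # 0.15 -> most sites would stay unbound
```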

But there is another very interesting aspect of the concentration of dimers in the nucleus. According to some studies, NF-kB seems to generate oscillations of its nuclear content in some cell types, and those oscillations can be a way to generate specific transcription patterns:

NF-kB oscillations translate into functionally related patterns of gene expression

For example, this very recent paper:

NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

shows at Fig. 3 the occupancy curve of binding sites at nuclear level after NF-kB activation in two different cell types.

In fibroblasts, the curve is a periodic oscillation, with a frequency that varies according to various factors, and translates into different transcription scenarios accordingly:

Gene expression dynamics scale with the period (g1) and amplitude (g2) of these oscillations, which are influenced by variables such as signal strength, duration, and receptor identity.


In macrophages, instead, the curve is rather:

a single, strong nuclear translocation event which persists for as long as the stimulus remains and tends to remain above baseline for an extended period of time.

In this case, the type of transcription will probably be regulated by the area under the curve, rather than by the period and amplitude of the oscillations, as happens in fibroblasts.
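The two regimes can be caricatured with a toy simulation (a sketch only: the curve shapes are qualitative stand-ins for the dynamics described above, and all parameters are invented):

```python
import numpy as np

t = np.linspace(0, 360, 3601)  # minutes after stimulation, 0.1-min steps

# Fibroblast-like: damped oscillation of nuclear NF-kB (toy parameters;
# only the qualitative shapes come from the papers cited above).
fibro = np.exp(-t / 300) * 0.5 * (1 + np.cos(2 * np.pi * t / 100))

# Macrophage-like: one strong, persistent nuclear translocation.
macro = 1 - np.exp(-t / 15)

# Readouts the two cell types are suggested to decode:
peak_idx = 1000 + np.argmax(fibro[1000:2500])       # first post-stimulus peak
print("oscillation period ~", t[peak_idx], "min")   # ~100 min
print("area under curve ~", macro.sum() * (t[1] - t[0]))
```

The point of the toy is only that the same molecule can carry information in two different codes: period and amplitude in one cell type, integrated area in another.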

Interestingly, while in previous studies it seemed that the concentration of nuclear dimers could be sufficient to saturate most or all binding sites, that has been found not to be the case in more recent studies. Again from the paper about abundance:

in fact, this lack of saturation of the system is necessary to generate stimulus- and cell-type specific gene expression profiles

Moreover, the binding itself seems to be rather short-lived:

Interestingly, it is now thought that most functional NF-kB interactions with chromatin—interactions that lead to a change in transcription—are fleeting… a subsequent study using FRAP in live cells expressing RelA-GFP showed that most RelA-DNA interactions are actually quite dynamic, with half-lives of a few seconds… Indeed, a recent study used single-molecule tracking of individual Halo-tagged RelA molecules in live cells to show that the majority (∼96%) of RelA undergoes short-lived interactions lasting on average ∼0.5 s, while just ∼4% of RelA molecules form more stable complexes with a lifetime of ∼4 s.
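Those single-molecule numbers can be turned into a toy dwell-time model (a sketch, assuming simple exponential dwell times, which the quoted lifetimes only roughly justify):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two-population mixture matching the figures quoted above:
# ~96% of RelA-DNA interactions last ~0.5 s, ~4% form ~4 s complexes.
stable = rng.random(n) < 0.04
dwell = np.where(stable,
                 rng.exponential(4.0, n),
                 rng.exponential(0.5, n))

print(f"mean dwell   ~ {dwell.mean():.2f} s")      # ~0.64 s
print(f"median dwell ~ {np.median(dwell):.2f} s")  # most binding is fleeting
```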

2. Affinity

Affinity of dimers for DNA sequences is not a clear cut matter. From the paper:

Biochemical DNA binding studies of a wide variety of 9–12 base-pair sequences have revealed that different NF-kB dimers bind far more sequences than previously thought, with different dimer species exhibiting specific but overlapping affinities for consensus and non-consensus kB site sequences.

IOWs, we have different dimers (15 different types) binding with varying affinity different DNA sequences (starting from the classical consensus sequence, but including also incomplete sequences). Remember that those sequences are rather short (the consensus sequence is 10 nucleotides long), and that there are thousands of such sequences in the genome.

Moreover, different bindings can affect transcription differently. Again, from the paper:

How might different consensus kB sites modulate the activity of the NF-kB dimers? Structure-function studies have shown that binding to different consensus kB sites can alter the conformation of the bound NF-kB dimers, thus dictating dimer function. When an NF-kB dimer interacts with a DNA sequence, side chains of the amino acids located in the DNA-binding domains of dimers contact the bases exposed in the groove of the DNA. For different consensus kB site sequences different bases are exposed in this groove, and NF-kB seems to alter its conformation to maximize interactions with the DNA and maintain high binding affinity. Changes in conformation may in turn impact NF-kB binding to co-regulators of transcription, whether these are activating or inhibitory, to specify the strength and dynamics of the transcriptional response. These findings again highlight how the huge array of kB binding site sequences must play a key role in modulating the transcription of target genes.

Quite a complex scenario, I would say!

But there is more:

Finally, as an additional layer of dimer and sequence-specific regulation, each of the subunits can be phosphorylated at multiple sites with, depending on the site, effects on nearly every step of NF-kB activation.

IOWs, the 15 dimers we have mentioned can be phosphorylated in many different ways, and that changes their binding affinities and their effects on transcription.

This section of the paper ends with a very interesting statement:

Overall, when considering the various ways in which NF-kB dimer abundances and their affinity for DNA can be modulated, it becomes clear that with these multiple cascading effects, small differences in consensus kB site sequences and small a priori differences in interaction affinities can ultimately have a large impact on the transcriptional response to NF-kB pathway activation.

Emphasis mine.

This is interesting, because in some way it seems to suggest that the whole system acts like a chaotic system, at least at some basic level. IOWs, small initial differences, maybe even random noise, can potentially deeply affect the general working of the whole system.

Unless, of course, there is some higher, powerful level of control.
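To make the sensitivity point concrete, here is a toy equilibrium-binding calculation (a minimal sketch; the concentrations and dissociation constants are illustrative, not measured values). In the non-saturating regime described above, fractional occupancy tracks affinity almost linearly, so small sequence-driven differences in Kd propagate directly into the output, before any of the cascading effects the paper describes:

```python
# Toy equilibrium model: fractional occupancy of a site is c / (c + Kd),
# with c the free dimer concentration and Kd the site's dissociation constant.

def occupancy(c_nM: float, kd_nM: float) -> float:
    return c_nM / (c_nM + kd_nM)

c = 1.0  # far-from-saturation regime (c << Kd)
for kd in (10.0, 20.0, 40.0):  # two-fold steps in affinity
    print(kd, round(occupancy(c, kd), 3))
# 10.0 0.091 / 20.0 0.048 / 40.0 0.024: each two-fold change in Kd roughly
# halves occupancy, so small affinity differences shift the transcriptional
# input almost proportionally.
```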

3. Availability of high affinity kB binding sequences

We have seen that there is a great abundance and variety of binding sequences for NF-kB dimers in the human genome. But, of course, those sequences are not necessarily available. Different cell types will have a different scenario of binding sites availability.

Why?

Because, as we know, the genome and chromatin are a very dynamic system, which can exist in many different states, continuously changing in different cell types and, in the same cell type, in different conditions.

We know rather well the many levels of control that affect DNA and chromatin state. In brief, they are essentially:

  1. DNA methylation
  2. Histone modifications (methylation, acetylation, etc)
  3. Chromatin modifications
  4. Higher levels of organization, including nuclear localization and TADs (Topologically Associating Domains)

For example, from the paper:

The promoter regions of early response genes have abundant histone acetylation or trimethylation prior to stimulation [e.g., H3K27ac, (67) and H4K20me3, (66)], a chromatin state “poised” for immediate activation… In contrast, promoters of late genes often have hypo-acetylated histones, requiring conformational changes to the chromatin to become accessible. They are therefore unable to recruit NF-kB for up to several hours after stimulation (68), due to the slow process of chromatin remodeling.

We must remember that each wave of NF-kB activation translates into the modified transcription of a lot of different genes at the genome level. It is therefore extremely important to consider which genes are available (IOWs, the genes whose promoters can be reached by the NF-kB signal) in each cell type and cell state.

The paper concludes:

Taken together, chromatin state and chromatin organization strongly influence the selection of DNA binding sites by NF-kB dimers and, most likely, the selection of the target genes that are regulated by these protein-DNA interaction events. Analyses that consider binding events in the context of three-dimensional nuclear organization and chromatin composition will be required to generate a more accurate view of the ways in which NF-kB DNA binding affects gene transcription.

This is the main scenario. But there are other components that I have not considered in detail, for the sake of brevity: for example, competition between NF-kB dimers, and the complex role and intervention of other co-regulators of transcription.

Does the system work?

But does the system work?

Of course it does. It is a central regulator, as we have said, of many extremely important biological processes, above all immunity. This is the system that decides how immune cells, T and B lymphocytes, have to behave, in terms of cell destiny and cell state. It is of huge relevance in all inflammatory responses, and in our defense against infections. It works, it works very well.

And what happens if it does not work properly?

Of course, like all very complex systems, errors can happen. Those interested can have a look at this recent paper:

30 years of NF-κB: a blossoming of relevance to human pathobiology

First of all, many serious genetic diseases have been linked to mutations in genes involved in the system. You can find a list in Table 1 of the above paper. Among them, for example, some forms of SCID, Severe combined immunodeficiency, one of the most severe genetic diseases of the immune system.

But, of course, a dysfunction of the NF-kB system has a very important role also in autoimmune diseases and in cancer.

Conclusions.

So, let’s try to sum up what we have seen here in the light of the original statement about biological systems that “are not machines”.

The NF-kB system is a perfect example. Even if we still understand very little of how it works, it is rather obvious that it is not a traditional machine.

A traditional machine would work differently. The signal would be transmitted from the membrane to the nucleus in the simplest possible way, without ambiguities and diversions. The Transcription Factor, once activated, would bind, at the level of the genome, very specific sites, each of them corresponding to a definite cascade of specific genes. The result would be clear cut, almost mechanical. Like a watch.

But that’s not the way things happen. There are myriads of variations, of ambiguities, of stochastic components.

The signal arrives at the membrane in multiple ways, very different from one another: IL1, IL17, TNF, bacterial LPS, and immune activation of the B cell receptor (BCR) or the T cell receptor (TCR) are all possible signals.

The signal is translated to the NF-kB proteins in very different ways: canonical or non canonical activation, involving complex protein structures such as:

The CBM signalosome, intermediate between immune activation of BCR or TCR and canonical activation of the NF-kB. This complex is made of at least three proteins, CARD11, Bcl10 and MALT1.

The IKK complex in canonical activation: this is made of three proteins, IKK alpha, IKK beta, and NEMO. Its purpose is to phosphorylate the IkB, the inhibitor of the dimers, so that it can be ubiquitinated and released from the dimer. Then the dimer can relocate to the nucleus.

Non canonical pathway: it involves the following phosphorylation cascade: NIK -> IKK alpha dimer -> RelB-p100 dimer -> RelB-p52 dimer (the final TF, after processing of p100 to p52). It operates during the development of lymphoid organs and is responsible for the generation of B and T lymphocytes.

Different kinds of activated dimers relocate to the nucleus.

Different dimers, in varying abundance, interact with many different binding sites: complete or incomplete consensus sites, and probably others. The interaction is usually brief, and it can generate an oscillating pattern, or a more stable pattern.

Completely different sets of genes are transcribed in different cell types and in different contexts, because of the interaction of NF-kB TFs with their promoters.

Many other factors and systems contribute to the final result.

The chromatin state of the cell at the moment of the NF-kB activation is essential to determine the accessibility of different binding sites, and therefore the final transcription pattern.

All these events and interactions are quick, unstable, far from equilibrium. A lot of possible random noise is involved.

In spite of that amazing complexity and the potentially stochastic nature of the system, reliable transcription regulation and reliable results are obtained in most cases. Those results are essential to immune cell differentiation, immune response, both innate and adaptive, inflammation, apoptosis, and many other crucial cellular processes.

So, let’s go back to our initial question.

Is this the working of a machine?

Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.

But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.

It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.

It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

It is, from all points of view, amazing.

Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.

And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer must it be for a system like NF-kB? Or, for that matter, for any complex cellular system?

Do you still have any doubts?

Added graphic: the evolutionary history, in terms of human-conserved information, of the three proteins in the CBM signalosome.
On the y axis, homology with the human protein in bits per aminoacid (bpa). On the x axis, approximate time of appearance in millions of years.
The graphic shows the big information jump in vertebrates for all three proteins, especially CARD11.


Added graphic: two very different proteins and their functional history


Added graphic (for Bill Cole): functional history of Prp8, collagen, and p53.
Comments
OLV: "Now, how much FI could it be associated with the system underlying the antibody maturation that generates about 40 bits of FI? Does this question make sense?" It makes a lot of sense! :) Let's say that there are two different questions, one relatively simple, the other extremely interesting: 1) How can low but non trivial levels of FI arise in the extremely complex and organized protein engineering system which is the immune system? My new OP is meant to propose some detailed analysis of that question. 2) How can a complex and extremely efficient protein engineering system like the immune system arise in a system (the biological system) which has no tools to engineer it? You, like me, certainly know that there is only one reasonable answer to that question.gpuccio
September 9, 2019, 02:21 AM PDT
Bill Cole: I am not following these new arguments from Art about how low FI would be in proteins, because very simply I do not have the time at present. I am working hard at other things. Maybe in the future. But I would like to make a very simple observation. I quote here what I said at #570:

“However, most of them (I have some problem in not using the easy term “neo-darwinists”, after the fierce declaration by Swamidass that no such thing exists any more. Don’t know how to name them as a whole: design deniers? :) ), most of “them”, I was saying, seem to have as their highest priority to discredit FI in all its forms. This strategy usually takes one of many different ways:

a) To deny that FI exists
b) To deny that it can be measured in any real system
c) To affirm that it exists, but there is not a lot of it in proteins, or in biological objects
d) To affirm that it exists, and there is a lot of it in all biological objects, even those that are relatively simple
e) To affirm that it exists, and there is a lot of it in non designed, non biological objects

All of those ideas are wrong. Of course, for different reasons. But it is interesting to observe how the need to deny something can take so many different, and frankly opposing, pathways.”

Now, I can understand that different people, in their urge to discredit FI, may take different and contrasting ways. But why should the same person use two completely different strategies, each totally inconsistent with the other? And yet, that seems to be Art’s choice. I will be more clear. First of all, I will say that, if I were a design denier, I would definitely stick to strategy c: “To affirm that it exists, but there is not a lot of it in proteins, or in biological objects”. All the other choices are simply gross logical errors, and have no value. Option c, instead, while wrong, can be reasonably discussed, because it relies on an issue that is still not clarified, IOWs the functional space of proteins and the sequence-function relationship in that space. While I am certain that what we already know is more than enough to make the ID point and falsify design deniers, there is certainly some space for discussion.

However, Art has recently defended, certainly with honest passion, two points that seem to me to be inconsistent: IOWs, both strategy e) (to affirm that FI exists, and there is a lot of it in non designed, non biological objects) and strategy c) (to affirm that it exists, but there is not a lot of it in proteins, or in biological objects). I am referring, of course, to his statements that:

1) There is huge FI in tornadoes, events that are easily generated in the weather system by reasonably understood necessity laws acting on some random configurations (about 1253 tornadoes per year in the US alone)

2) There is very low FI in complex proteins which obviously have a lot of it, like Prp8.

Now, while I obviously believe that both those statements are grossly wrong, I really wonder if I am missing something about the general logic here. IOWs, if FI is so common in non designed, non biological objects, like tornadoes, and examples of extremely high FI are daily generated around us, as Art seems to believe (the tornado argument), then it is obviously useless in the design inference. Then, what is the point in demonstrating that FI is so low in proteins and other biological objects?

Is Art defending the strange and rather paradoxical idea that FI is extremely high in non designed, non biological objects, but definitely extremely low in biology? That is really strange, and it is the exact logical opposite of what ID believes. Just a random coincidence? :)

gpuccio
September 9, 2019, 01:53 AM PDT
Bill Cole,

“Maybe we will find some common ground at Peaceful Science with your ideas”

I doubt it. Good luck. They would have to add “serious” to “peaceful” before you can have a productive discussion there. So instead of PS it would be SPS. :) Maybe such a name change, along with the corresponding attitude change, would make them more attractive to visit? :)

This is today: Alexa global internet traffic ranks, 2019-09-09.

Site        Rank        Top %
Google      1           1
UD          641,159     1
PT          2,043,165   3
TSZ         3,330,124   4
PS          7,081,297   8

Google stats added for comparison only.

jawa
September 9, 2019, 01:32 AM PDT
Gpuccio,

“So, beta lactamase is probably not a conserved protein, but I am not sure what type of conservation we are discussing here. I have not really considered this issue, so I can offer just a few generic ideas. My impression is that some beta lactamases are highly conserved in some bacterial groups, and much less in others. For example, E. coli ampC (P00811) is highly conserved in enterobacteria (700-750 bits), much less in other gammaproteobacteria, like Pseudomonas (about 360 bits). Not much is certain about the evolution of these proteins, so it is rather difficult to interpret these data in terms of FI.”

I agree with you that these enzymes are not good examples of long conservation, and I think that Art has not yet fully comprehended your method. He made the claim today that Prp8 had 70 bits of FI. This number is so low, given the facts, that after he did not understand my attempt to show him how absurdly low it is, I asked that we delay the discussion. We are stuck discussing this because the data we are calculating falsify the grand claims of evolutionary theory. Evolutionists like Art, who have studied and supported this theory for several decades, are not quite ready to give it up. That is understandable to me. Like you, I don’t see any value in prolonging irrational discussions based on trying to protect a theory that is clearly obsolete. I am excited to discuss your new project on the immune system. Maybe we will find some common ground at Peaceful Science with your ideas.

bill cole
September 8, 2019, 10:20 PM PDT
GP: Please allow me to post this short “off topic” announcement: Models of Consciousness Conference at Oxford

PaoloV
September 8, 2019, 05:33 PM PDT
GPuccio, This transcription regulation topic is very interesting. I appreciate how much one can learn from reading this discussion, as well as the older OP you wrote on a similar topic, despite it being a little too technical for me sometimes. However, I did not understand very well the discussion you had with people outside this current topic. It will be exciting to read the new article you’re currently working on.

pw
September 8, 2019, 03:36 PM PDT
GPuccio @590:

“I can simply say that they are wrong, and I have tried to explain why as clearly as possible.”

Agree. If they don’t want to understand, too bad.

“Let’s assume for the moment, while waiting for a more detailed analysis, that antibody maturation can generate about 40 bits of FI. This is not unreasonable. And that it can do that in a few weeks.”

Now, how much FI could be associated with the system underlying the antibody maturation that generates about 40 bits of FI? Does this question make sense?

OLV
September 8, 2019, 03:19 PM PDT
GPuccio @589: Apparently my message @588 was not clear. The link I provided points to a list of recent papers related to tRNA and aaRS, which I posted for UB but wanted to share with you too. My comment was not about the “teaching” topic discussed in that thread. Here are the links to the papers:

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000274
https://www.mdpi.com/1422-0067/20/1/140
https://www.mdpi.com/2075-1729/9/2/51
https://www.mdpi.com/1422-0067/20/12/2911

OLV
September 8, 2019, 02:26 PM PDT
Bill Cole: "This is the point and the reason they are making irrational challenges. Your method is an understandable way to comprehend that evolution as it stands today is a very poor way to explain life’s diversity. A philosophic supporter of evolution has to fight it or give up his philosophy." So true. That's what I meant when I said that the discussion was a confrontation between two very different paradigms, and not a peer review of my procedure. That seems to have made them very angry. I really can't understand why. I was invited there, of course, as a defender of ID theory. Everybody there is well conscious of that. And yet, they desperately try to deny it. What is the sense in that? I say it again: I believe that our theory is right, and that their theory (however they like to name it) is wrong. That's why I spend my time writing things here (and there). There is no other reason. I am open to all resonable arguments and suggestions, but I am not seeking a peer review from those renowned scholars who believe things that, IMHO, are completely wrong. Is that my ego? Is that just being polemic? Maybe. But it's what I believe, and I will not change my mind only because those renowned scholars disagree with me. With us. "Art made a mistake saying Axe’s work refutes your hypothesis. " Of course. Axe's work is good evidence that protein space is not what our kind interlocutors think it is. That is absolutely consistent with what I have always said here. "Beta lactamase is not a conserved protein" OK. But I would like to know what beta lactamase we are considering, and what type of conservation. I paste here from Wikipedia a few words about the "evolution" of beta lactamases, just to give a general summary: "Beta-lactamases are ancient bacterial enzymes. The class B beta-lactamases (the metallo-beta-lactamases) are divided into three subclasses: B1, B2 and B3. Subclasses B1 and B2 are theorized to have evolved about one billion years ago and subclass B3s is theorized to have evolved before the divergence of the Gram-positive and Gram-negative eubacteria about two billion years ago.[23] The other three groups are serine enzymes that show little homology to each other. Structural studies have shown that groups A and D are sister taxa and that group C diverged before A and D.[24] These serine-based enzymes, like the group B betalactamases, are of ancient origin and are theorized to have evolved about two billion years ago.[25] The OXA group (in class D) in particular is theorized to have evolved on chromosomes and moved to plasmids on at least two separate occasions." So, beta lactamase is probably not a conserved protein, but I am not sure what type of conservation we are discussing here. I have not really considered this issue, so I can offer just a few generic ideas. My impression is that some beta lactamases are highly conserved in some bacterial groups, and much less in others. For example, E. coli ampC (P00811) is highly conserved in enterobacteria (700-750 bits), much less in other gammaproteobacteria, like Pseudomonas (about 360 bits). Not much is certain about the evolution of these proteins, so it is rather difficult to intepret these data in terms of FI. Here are some other data about beta lacatamases: https://www.asmscience.org/content/book/10.1128/9781555815639.ch22gpuccio
September 8, 2019, 01:00 PM PDT
Gpuccio,

“The simple truth is: my procedure is vastly underestimating FI. But it’s fine, because even so a design inference is the only reasonable explanation for most observed proteins in the biological world. I am satisfied with that.”

This is the point, and the reason they are making irrational challenges. Your method is an understandable way to comprehend that evolution, as it stands today, is a very poor way to explain life’s diversity. A philosophic supporter of evolution has to fight it or give up his philosophy. It is most likely a conservative estimate of FI. When we observe a highly preserved long protein like Prp8, there is no reasonable explanation of how random change arrived at that position. If the protein were able to tolerate 10 different amino acids at every position, it would still contain 2335 bits. Art Hunt challenged my math, but when I asked him for a counter-estimate he changed the subject. Art made a mistake saying Axe’s work refutes your hypothesis. Beta lactamase is not a conserved protein, so it is acting exactly as your method would predict: catalytic activity despite AA substitutions at different positions.

bill cole
September 8, 2019, 07:22 AM PDT
To all here: While I work on the immune system (a lot of recent and interesting literature about that topic, big work!), I would like to clarify a very simple aspect. As I have said many times, and as I have summarized in some detail at #577 here (but also in my posts at PS), the simple fact that, say, 10 different functions, each of them having 50 bits of FI, arise in a system does not in any way mean that 500 bits of FI have been generated. This is a point about which there is no possible negotiation. If people at PS or elsewhere want to think differently, it’s their choice. I can simply say that they are wrong, and I have tried to explain why as clearly as possible. I cannot do more than that. With those who disagree about that point, no further discussion about FI is possible. Of course.

So, while I am trying to analyze as precisely as possible how many bits of FI are usually generated by the highly complex protein engineering lab that is the immune system, I would like to clarify a simple point. Let’s assume for the moment, while waiting for a more detailed analysis, that antibody maturation can generate about 40 bits of FI. This is not unreasonable. And that it can do that in a few weeks. Of course, that happens many times in any individual, in the course of life. So, let’s say that 20 antibodies are optimized by the immune lab in a reasonable window of time. In no way does that mean that 800 bits of FI have been generated. It just means that a system that can generate 40 bits in a few weeks can, obviously, do the same thing many different times. But that is not 800 bits of FI. That’s all.

gpuccio
September 8, 2019, 04:58 AM PDT
OLV: I am not especially interested in teaching ID, but of course any dogmatic restraint of free thought and free learning is intrinsically sad. By that attitude’s logic, most philosophical works should be banned from school. Certainly, many philosophers stated wrong things. But if we don’t know what they said, and we are not free to discuss it, nobody can really decide for himself who is right and who is wrong: the final free cognitive choice of all humans.

gpuccio
September 8, 2019, 04:40 AM PDT
GPuccio, Off topic: What do you think of this? https://uncommondescent.com/education/demand-for-a-ban-on-teaching-creationism-in-welsh-schools/#comment-683607 Thanks

OLV
September 8, 2019, 01:22 AM PDT
This is a little confusing: Professor Art Hunt had a very successful “tornado” argument presented against GP’s teaching mission @ PS. Apparently Dr Hunt’s brilliant “tornado” argument punched a hole in GP’s explanation, but apparently that brilliant contribution by Dr Art Hunt didn’t make a major difference in helping to improve the ranking of the websites where Dr Hunt and his party comrades post their clever comments. Why? Did I miss something in the picture? :)

Alexa global internet traffic ranks and %, 2019-09-08.

Site        Rank        Top %
Google      1           1
UD          641,587     1
PT          2,044,027   3
TSZ         3,333,476   4
PS          7,093,074   8

Apparently there are over 100 million active websites out of over 1.5 billion registered websites (most inactive). A few days ago PT was in the top 2%, but now it has dropped to the top 3%? Apparently earlier this year PT was just a few hundred thousand websites behind UD, both in the top 1%. But somehow PT has kept dropping. This is strange, considering that their opinions seem to be more in synch with the popular trends lately? Is this right? Maybe I got this wrong? :) Does that mean that both Google (ranked 1) and UD are in the top 1% of active websites? :)

jawa
September 8, 2019, 12:27 AM PDT
GPuccio @585:

“When we are not discussing in any way all the regulatory FI that is absolutely necessary for even the simplest and isolated functional protein to work.”

...to work, and even to get synthesized to begin with, as you explained so well a year ago in your own OP on transcription, which you cleverly cited at the beginning of this OP. It could be only me, but I have to admit that I still don’t understand why it’s so difficult for those intelligent, highly educated people at PS to understand the clear cases that GPuccio explained. Am I missing something in the picture? Regarding the next OP on the fascinating “immune system” topic, it seems like with every new OP you set the bar higher for yourself. It would take quite a creative effort to write something as interesting as the analogy with the surfers in this OP. Let’s wait and see. :)

OLV
September 7, 2019, 03:12 PM PDT
Bill Cole: It’s difficult to discuss with those who don’t want to listen. I have said many times that what I measure is not the exact number of functional sequences. It is a good estimator. People on that side seem to forget very easily that we are discussing logarithmic/exponential scales here.

Now, we have used as a rule a threshold of 500 bits to infer design. That’s something! With 240 bits as a higher threshold for the probabilistic resources of our planet (a very generous higher threshold). Now they argue that my estimator of FI could overestimate the true value. When I am using the BLAST algorithm which, for identical aminoacid sequences, gives at most 2.2 bits per aminoacid site, while the full information value of one specific aminoacid site is of course 4.3 bits. Try to see how big that difference is on a logarithmic scale. When, as explained, my method can only measure the conserved FI, and completely ignores the FI which is species or class specific, and which is certainly a relevant part of the whole FI in a protein. When we are not discussing in any way all the regulatory FI that is absolutely necessary for even the simplest and isolated functional protein to work.

And OK, there is some space for optimization. Maybe the alpha and beta chains of bacterial ATP synthase started simpler, and then were optimized by some NS. Maybe. Maybe not. For what we know, we observe the highly specific sequences that are almost the same in bacteria as in humans. But let’s say that those sequences started different. Simpler, less functional. We have about 600 conserved aminoacids in the two chains. Which are only part of a bigger complex, in general much less conserved. So, how simple do these people believe that the initial protein subunit was? 10 specific AAs? 20? 30? 30 specific AAs are already almost at the very generous edge of the probabilistic resources. And all optimizations driven by NS that we know of are a few aminoacids at most. Maybe 5 or 10 in the very efficient protein engineering lab that is the immune system (which, of course, I am analyzing in detail at present), and which of course is not an example of RV + NS, but rather of how proteins can be designed by complex systems. But I don’t want to anticipate.

So, maybe there is some room for optimization. That would mean a few similar sequences with lower, or slightly different, function. And so? And maybe there are a few completely different complex solutions, which increase the target space for the general function by, what, 10 times? 100 times? 100 times is less than 7 bits. Wow!

These people seem not to realize that, when I compute, by my procedure and using the BLAST algorithm, an FI of 663 bits for the beta chain of ATP synthase, blasting the human form against E. coli, with 335 identities and 383 positives in a 529 AA long sequence, I am proposing that the target space is 2^1623. Indeed, the potential information value for such a sequence is 2^2286, therefore 663 bits of FI correspond to an extremely big target space: 2^1623 sequences that could be functional in that context. Oh, but our interlocutors are worried that I may have missed a couple of similar sequences, or a few AAs of optimization!

The simple truth is: my procedure is vastly underestimating FI. But that’s fine, because even so a design inference is the only reasonable explanation for most observed proteins in the biological world. I am satisfied with that.

gpuccio
September 7, 2019, 02:15 PM PDT
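The arithmetic in the comment above is easy to reproduce (a minimal sketch in Python of the same back-of-the-envelope calculation):

```python
import math

length_aa = 529                       # human ATP synthase beta chain
max_bits = length_aa * math.log2(20)  # if every site required one specific AA
fi_bits = 663                         # the BLAST-based FI estimate above

print(round(max_bits))            # 2286 bits of potential information
print(round(max_bits - fi_bits))  # 1623 -> implied target space ~2^1623
```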
Gpuccio, re: an interesting exchange at PS.

Art Hunt (Plant Biologist), replying to colewd:

“Curiously enough, Axe’s work refutes @colewd. Axe, if you may recall, crafted a beta-lactamase that was quite unlike any that is seen in databases. His work shows that “preservation” is not a good way to estimate the numbers of functional sequences, and thus FI.”

My reply: Beta lactamase is not highly preserved.

bill cole
September 7, 2019, 09:37 AM PDT
GPuccio, That’s exciting news. Thanks. I look forward to reading your next OP.

OLV
September 7, 2019, 09:20 AM PDT
To all here: I am definitely working on a new OP about the immune system from the point of view of FI. Fascinating topic! :) It will include answers about catalytic antibodies, affinity maturation, the formation of the basic repertoire, and so on. I don’t know how much time it will take, but I am working on it.

gpuccio
September 7, 2019, 01:20 AM PDT
Groovy :) Looks like an interesting new post to read, Gpuccio. Good to see you! Hopefully I will read this soon. Thanks!

DATCG
September 3, 2019, 07:48 AM PDT
“What does GAE mean?”

Genealogical Adam and Eve.

bill cole
September 2, 2019, 07:07 AM PDT
GP @570:

“I am not aware of any direct answer to that, or to the final direct questions I offered there.”

Well, maybe their response was to shut down the whole discussion and hide their embarrassing failure away from potential viewers. :)

jawa
September 2, 2019, 06:58 AM PDT
PeterA: "Is it the Texas… something?" The Texas something, yeah! :)gpuccio
September 2, 2019, 06:52 AM PDT
To all here: Wrong examples of FI computation: This is a more important point, and I see that most people at PS are convinced of this wrong approach. And they are convinced that their wrong way of conceiving and computing FI is a confutation of ID theory. Well, they are almost right about that: the truth is that their wrong ways of conceiving and computing FI is a confutation of thir wrong ideas of what ID theory is. They are doing all by themselves: wrong understanding of tyhe theory, confutation of their wrong understanding. I have tried many times to explain that FI is simply the computation of the minimal number of bits necessary to implement some well defined function. IOWs, it is a property of the function. What is a function? A well defined description of something that can be done using the object. Now, there are simple functions and complex functions. Simple functions have only a few bits of FI, complex functions have high values of FI. As I have discussed in my first OP about FI, linked many times, a stone form a beach can implement the function of paperweight: maybe not the most elegant, but perfectly efficient is the form and weight of the stone are good enough. The target space her is big. We can find a lot of such stones on the beach. We usually do not find a watch on a beach, unless someone lost it. This is the very simple concept of FI: a stone has low FI, and the functions ot can implement are simple. A watch has high FI, if we define it for the function of measuring time accurately. Well, what do out friend seem to believe? They say that FI can be summed for simple functions, to get high values of FI. Because FI is multiplicative. Is that true? Of course no. I have given the example of the safes just to show that. But I will give here another one, even simpler. Let's go back to the beach. Let's say that a stone of appropriate weight and form has about 1 bit of FI: IOWs, in average, one stone out of two is good. It's like the small safes in my other example, where one solution out of two was correct. So, a boy who wants to sell simple paperweights to tourists go to the beach, and starts looking at the stones to choose those that are good for his trade. After ten minutes, he gathers 100 of them, and he goes home, happy. According to the reasonings of our "friends", the result of the search is an example of 100 bits of FI. Of course, that's completely wrong. It's simply a sum of 100 results of very low FI, in a low number of attempts. The FI is low, the probability is low, the required number of attempts is low, the wait time is low. Consistently. Why? Because FI is a measure of the improbability of getting to a result by chance, because of its specific comlexity. There is nothing complex in finding 100 examples of 1 bit stones in a beach. I use the binomial distribution to analyze that type of context.If each correct stone is a success, the boy has a probability of 0.5281742 to find 100 good stones in 200 attempts. An object exhibiting 100 bits of FI, instead, is simply an object so specific in its form and weight, for example, that we have only a probability of 1:2^100, IOWs a probability of 7.888609e-31 in one attempt. The probability of finding 100 such objects in 200 attempts is so low that my software (R) just gives me zero as a result. The reason for that abysmal difference? 100 instances of objects with 1 bit of FI are not an instance of an object with 100 bits of FI. So, how can we avoid that error? 
It's simple, We mjust simply use the concept of FI as it is, and not try to change it. FI is the specific improbability to find an object with the defined function in a random system, as a result of the random configurations that arise in the system. A few important rules to use FI properly: a) The configurations must not be determined by necessity laws: they must be random configurations really accessible to the system as a result of the laws acting in it. Possibly with comparable probabilities. We will see how important is this rule in the case of tornadoes. b) FI is computed for one object implementing a well defined function, or for a system of objects which are all necessary to implement the defined function. c) Functions must be real functions, IOWs, something independently defined that can be done using the object. d) The function for which we compute FI must be considered as not deconstructable (not decomposable) into simpler functions. IOWs, no such deconstruction must be known or available. Note: as we are discussing empirical science, it is not requested that we have a mathematical or logical demostration that no deconstruction is possible. The empirical absence of known deconstruction will do, of course with possible falsification, consistently with Popper's rule. Let's try to understand how important is this point. Let's go back to the 100 small safes and the one big safe. The 100 small safes are an example of 100 simple results. No high FI there. The 1 big safe is an example of 100 bits of FI in one object. But, of course, Swamidass has tried to confound even this simple context. And I have given some thoughts to his arguments, finding them wrong. I will try to give a few clarifications, to avoid confusion about this point. So, what if we define the function as "having 100 simple functions found" in some random system? No problem with that. It is not really a function, unless you can demonstrate that those 100 simple functions generate a higher meta-function. So, I would define that not a function, but rather the description of a state. However, it is a very likely result, if the rpobablisitc resoruces are adequate. For example, we have seen that about 200 attemps are needed to have something more than 0.5 probability of finding 100 one bit solutions. Even, if it is not a function, we can just the same compute the probability of that state. So, what about the case where those 100 simple functions do generate a new meta-function? Please, follow me with attention here. Let's say that. for some strange reason, the ordered bits that are the right solutions to open the 100 small safes (which of course are ordered too) are exactly the solution to open the big safe. So, the thief opens the 100 safes, writes the sequence of the bits, then goes to the big safe, opens it, and goes home doubly satisfied! Has he found a 100 bit function? Apparently, yes. He has opened the big safe. How did he do that? In such a short time? The simple asnwer is: because the 100 bit function was deconstructable into 100 simple functions, each of them of 1 bit, and each of them contributing to the final function. And, and this is the really important part, each of them selectable because of its simple function. Let's understand this point of the selection. Let's say that the thief tries to open the 100 small safes by trying one sequence of 100 bits at each attempt. After each attempt, a numaber of small safes will open. But the thief cannot see that result, or racognize it in any way. 
He cannot know which bits were "correct". He can have feedback from the system only if he finds the whole right sequence of bits. Then, both the 100 small safes and the one big safe will open, and will be shown to him. Otherwise, the small safes close again, and he will have to try a new sequence.

This particular case corresponds to a context where solutions for the simple functions can be easily found, but they are not in any way "selected" and fixed. It can be the case for some simple functions that can be easily found, but give no fitness advantage to the cell. They are not fixed, they undergo no form of recognition and selection, and they will probably be lost in the next few attempts. This is a situation where there are 100 bits of FI. The function cannot appear and be recognized and selected unless the whole sequence of 100 bits is provided. And there is nothing in the system that can help find it. IOWs, the system can only use its probabilistic resources. If the probabilistic resources are much lower than the FI, the result will never be found.

So, we come to a statement that I have always made here at UD, every time I have debated NS. Some here do not like it too much, but it is perfectly true: any form of selection, IOWs any form of process that can recognize a configuration which is a step towards a higher meta-function, and fix it, lowers the probabilistic barriers for that final result. IOWs, it lowers the FI of the final function. IOWs, deconstructable functions have lower FI than similar functions which are not deconstructable. Sometimes much lower.

My friends, don't worry about that: the simple truth is that complex functions cannot, as a rule, I would say almost never, be deconstructed in that way. That's why NS has only a minor role in the game. As discussed by me elsewhere. And, of course, by Behe in his wonderful books. (It seems that all my statements about Behe are annoying for the PS people. But I just say what I think.)

So, in theory, if we can deconstruct a complex function, however complex, into 1 bit steps, and we can in some strange way recognize and fix each individual bit, we can easily find the final function. That's more or less what Dawkins did with the weasel. But there is an even easier and more direct way to do it. Just use the final result, the correct phrase, as an oracle, and try a random letter for each position. Then keep the right ones, and try again with the rest. In a very limited number of attempts, you will get the right phrase in all its beauty. Dawkins used a more complex strategy, still using the text that he had to find as an oracle, but making things a little more difficult, just to give the appearance of credibility. Of course, the simplest way, if you have the text, is to copy it directly. But then you are not using randomness at all, and there is no fun.

So, we want to find the exact sequence of the beta chain of human ATP synthase? What's the problem? We can easily "demonstrate" that GP is a liar in his silly reasonings about FI. We can just measure, after each random attempt, the sequence similarity of our random sequence to our oracle (of course, the right sequence). We keep only the sequences where the similarity grows. If it goes down, we go back to the previous state. I am sure that we can find the solution in very reasonable times. After all, that's how antibody maturation is obtained, more or less. But I am anticipating things. (A toy sketch of the letter-fixing oracle strategy follows below.)

The point is: any form of selection lowers the probabilistic barriers. By how much?
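(As an aside, here is a toy sketch in R of that letter-fixing oracle strategy. This is hypothetical illustrative code, not Dawkins' actual program: at each round, only the positions not yet confirmed by the oracle are redrawn at random.)

set.seed(123)
target <- strsplit("METHINKS IT IS LIKE A WEASEL", "")[[1]]
alphabet <- c(LETTERS, " ")  # 27 symbols: 26 letters plus space

guess <- sample(alphabet, length(target), replace = TRUE)
rounds <- 0
while (!all(guess == target)) {
  wrong <- which(guess != target)  # positions the oracle has not yet fixed
  guess[wrong] <- sample(alphabet, length(wrong), replace = TRUE)
  rounds <- rounds + 1
}
rounds  # typically around 100 rounds, against a blind search space of 27^28

The dramatic shortcut comes entirely from the oracle: the target itself is used to recognize and fix each correct letter, which is exactly the kind of selection discussed above.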
Let's say that our 100 bit solution can be deconstructed into 70 steps. For each of them the right solution, if found, can be selected and fixed, because a smaller safe opens. Each ordered solution is an ordered part of the final meta-solution (the key to the big safe). OK, the deconstruction is as follows:

35 small safes, each with a 1 bit key
1 intermediate safe with a 30 bit key
35 small safes, each with a 1 bit key

How do we compute the FI of the big safe, so deconstructed? It's easy. It is essentially the FI of the single 30 bit safe, plus a trivial contribution from the other ones. The thief will quickly open the first 35 safes and the last 35 safes. But the problem is the 30 bit safe. That is not easy at all. He needs approximately 1 billion attempts to have a good probability of success. At 1 minute per attempt, that's about 1900 years! (The arithmetic is checked in the short note below.) So, we can correctly say that the FI of the 100 bit function, so deconstructed, is about 30 bits, with a very trivial contribution from the other bits.

So the point is: if our theory is that a complex function can be reasonably accessible to our random system, we have to demonstrate that there exists a deconstruction of the function into simpler steps, each of them contributing to the final meta-function, each of them selectable, with a form of selection, fixation or expansion which is available in the system.

OK, that's enough for the moment.
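(For what it's worth, that arithmetic can be checked in a couple of lines of R; the only inputs are the 2^30 key space of the 30 bit safe and the 1-minute-per-attempt rate assumed above.)

attempts <- 2^30            # key space of the 30 bit safe
attempts                    # 1073741824, roughly the "1 billion attempts"
attempts / (60 * 24 * 365)  # about 2043 years at 1 minute per attempt

(The result, about two thousand years, is in the same ballpark as the 1900 years quoted above, which uses the rounded figure of 1e9 attempts: 1e9 / 525600 is about 1903.)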
gpuccio
September 2, 2019 at 06:49 AM PDT
Bill Cole,
His main objective is to keep both sides talking with the exception of areas that support his work on the GAE.
Why would he want to keep both sides talking? To increase visits to his website, or to reach a compromise on the topics under discussion? If it's the former, then he had better do it right, because his website seems to be sinking in the Alexa ranking (down close to 4M positions in the last three months). What does GAE mean?
jawa
September 2, 2019 at 06:39 AM PDT
PeterA, Here’s what GP was apparently referring to: https://en.m.wikipedia.org/wiki/Texas_sharpshooter_fallacy
jawa
September 2, 2019 at 06:36 AM PDT
Now, I am not really convinced that he really believes the things he has stated about this point.
Gpuccio, I think you are right here. Many times Josh will argue contrary positions just to stimulate conversation. He often uses logical fallacies that are obvious, and I have a hard time believing he does not realize he is doing this. It is very hard to argue against Behe's carefully laid out positions without invoking logical fallacies. His main objective is to keep both sides talking, with the exception of areas that support his work on the GAE.
bill cole
September 2, 2019 at 06:34 AM PDT
GP @570: Excellent explanation of the mistaken FI examples raised by some folks at PS. Thanks.
Asking: “I want exactly this configuration” is meaningless, from the point of view of FI. That is drawing the target after having made the shot, and drawing it where no target at all existed before (does that remind you of something?
Is it the Texas... something? :) PS.
However, her I want to deal briefly with option e), Of course, just after I had cautioned him that, as any reasoning person would understand without any need of being cautioned, using the observed bits to build an ad hoc function is the words logical fallacy one can imagine
here? worst? :)
PeterA
September 2, 2019 at 06:15 AM PDT
Hi OLV,
Also, Bill Cole -who did a nice work coordinating your teaching mission at PS- just mentioned a possible joint work with professor Behe? Did I get that right?
I would not rule out this possibility, especially if we get more traction with GP's ideas within the scientific community. The takeaway for me is that the two concepts, irreducible complexity and showing high amounts of information gain, are compatible, as irreducible complexity highlights where extensive random search is necessary. If we see 500 bits we can safely infer design.
bill cole
September 2, 2019 at 05:38 AM PDT
OLV: "Also, Bill Cole -who did a nice work coordinating your teaching mission at PS- just mentioned a possible joint work with professor Behe? Did I get that right?" Well, that's probably Bill Cole's desire! :)gpuccio
September 2, 2019 at 05:27 AM PDT