Uncommon Descent Serving The Intelligent Design Community

Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation.


I have recently commented on another thread:

about a paper that (very correctly) describes cells as dynamic, far from equilibrium systems, rather than as “traditional” machines.

That is true. But, of course, the cell implements the same functions as complex machines do, and much more. My simple point is that, to do that, you need much greater functional complexity than you need to realize a conventional machine.

IOWs, dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. I have commented on those ideas in the mentioned thread, at posts #5 and #8, and I have quoted at posts #11 and #12 a couple of interesting and pertinent papers, introducing the important concept of robustness: the ability to achieve reliable functional results in spite of random noise and disturbing variation.

In this OP, I would like to present in some detail a very interesting system that shows very well what we can understand, at present, of this kind of amazing system.

The system I will discuss here is an old friend: it is the NF-kB system of transcription factors (nuclear factor kappa-light-chain-enhancer of activated B cells). We are speaking, therefore, of transcription regulation, a very complex topic that I have already discussed in some depth here:

Let me briefly recall that transcription regulation is the very complex process that allows cells to be completely different while using the same genomic information: IOWs, each type of cell “reads” the genes in the common genome differently, and that is what makes possible the different types of cell differentiation and the different responses within the same cell type.

Transcription regulation relies on many different levels of control, which are summarized in the above quoted OP, but a key role is certainly played by Transcription Factors (TFs), proteins that bind DNA and act as activators or inhibitors of transcription at specific sites.

TFs are a fascinating class of proteins. There are a lot of them (1600–2000 in humans, almost 10% of all proteins), and they are usually medium sized proteins, about 500 AA long, containing at least one highly conserved domain, the DNA binding domain (DBD), and other, often less understood, functional components.

I quote again here a recent review about human TFs:

The Human Transcription Factors

The NF-kB system is a system of TFs. I have discussed it in some detail in the discussion following the Ubiquitin thread, but I will describe it in a more systematic way here.

In general, I will refer a lot to this very recent paper about it:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The NF-kB system relies essentially on 5 different TFs (see Fig. 1 A in the paper):

  1. RelA  (551 AAs)
  2. RelB  (579 AAs)
  3. c-Rel  (619 AAs)
  4. p105/p50 (968 AAs)
  5. p100/p52  (900 AAs)

Those 5 TFs work by forming dimers, homodimers or heterodimers, for a total of 15 possible combinations, all of which have been found to work in the cell, even if some of them are much more common.
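As a quick sanity check on that count, the 15 combinations are simply the unordered pairs (with repetition allowed) that can be formed from the 5 subunits. A minimal Python sketch:

```python
from itertools import combinations_with_replacement

# The five NF-kB subunits listed above
subunits = ["RelA", "RelB", "c-Rel", "p50", "p52"]

# Homodimers and heterodimers are unordered pairs with repetition:
# C(5 + 2 - 1, 2) = C(6, 2) = 15
dimers = list(combinations_with_replacement(subunits, 2))
print(len(dimers))  # 15
```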

Then there are at least 4 inhibitor proteins, collectively called IkBs.

The mechanism is apparently simple enough. The dimers are inhibited by IkBs and therefore they remain in the cytoplasm in inactive form.

When an appropriate signal reaches the cell and is received by a membrane receptor, the inhibitor (the IkB molecule) is phosphorylated, then ubiquitinated and detached from the complex. This is done by a protein complex called IKK. The free dimer can then migrate to the nucleus and localize there, where it can act as a TF, binding DNA.

This is the canonical activation pathway, summarized in Fig. 1. There is also a non canonical activation pathway, that we will not discuss for the moment.


Mechanism of NF-κB action. In this figure, the NF-κB heterodimer consisting of Rel and p50 proteins is used as an example. While in an inactivated state, NF-κB is located in the cytosol complexed with the inhibitory protein IκBα. Through the intermediacy of integral membrane receptors, a variety of extracellular signals can activate the enzyme IκB kinase (IKK). IKK, in turn, phosphorylates the IκBα protein, which results in ubiquitination, dissociation of IκBα from NF-κB, and eventual degradation of IκBα by the proteasome. The activated NF-κB is then translocated into the nucleus where it binds to specific sequences of DNA called response elements (RE). The DNA/NF-κB complex then recruits other proteins such as coactivators and RNA polymerase, which transcribe downstream DNA into mRNA. In turn, mRNA is translated into protein, resulting in a change of cell function.

Attribution: Boghog2 at English Wikipedia [Public domain]

Now, the purpose of this OP is to show, in greater detail, how this mechanism, apparently moderately simple, is indeed extremely complex and dynamic. Let’s see.

The stimuli.

First of all, we must understand which stimuli, arriving at the cell membrane, are capable of activating the NF-kB system. IOWs, which signals work as inputs.

The main concept is: the NF-kB system is a central pathway activated by many stimuli:

  1. Inflammation
  2. Stress
  3. Free radicals
  4. Infections
  5. Radiation
  6. Immune stimulation

IOWs, a wide variety of aggressive stimuli can activate the system.

The extracellular signal usually reaches the cell through specific cytokines, for example TNF or IL1, or through pathogen associated molecules, like bacterial lipopolysaccharides (LPS). Of course there are different and specific membrane receptors, in particular IL-1R (for IL1), TNF-R (for TNF), and many TLRs (Toll like receptors, for pathogen associated structures). A special kind of activation is implemented, in B and T lymphocytes, by the immune activation of the specific receptors for antigen epitopes (B cell receptor, BCR, and T cell receptor, TCR).

The process through which the activated receptor can activate the NF-kB dimer is rather complex: in the canonical pathway it involves a macromolecular complex called the IKK (IkB kinase) complex, comprising two catalytic kinase subunits (IKKa and IKKb) and a regulatory protein (IKKg/NEMO), and it involves the ubiquitin system in multiple and complex ways. The non canonical pathway is a variation of that. Finally, a specific protein complex (CBM complex or CBM signalosome) mediates the transmission from the immune BCR or TCR to the canonical pathway. See Fig. 2:

From: NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Increased-activity-of-the-CARMA1-BCL10-MALT1-signalosome-drives-constitutive-NF-kB_fig2_324089636 [accessed 10 Jul, 2019]
Figure 3 – NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy
available via license: Creative Commons Attribution 4.0 International

I will not go into further details about this part, but those interested can have a look at this very good paper:

TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

In particular, Figs. 1, 2 and 3.

In the end, as a result of the activation process, the IkB inhibitor is degraded by the ubiquitin system, and the NF-kB dimer is free to migrate to the nucleus.

An important concept is that this is a “rapid-acting” response system: the dimers are already present, in inactive form, in the cytoplasm, and need not be synthesized de novo, so the system is ready to respond to the activating signal.

The response.

But what is the cellular response?

Again, there are multiple and complex possible responses.

Essentially, this system is a major regulator of innate and adaptive immune responses. As such, it has a central role in the regulation of inflammation, in immunity, in autoimmune processes, and in cancer.

Moreover, the NF-kB system is rather ubiquitous, and is present and active in many different cell types. And, as we have seen, it can be activated by different stimuli, in different ways.

So, the important point is that the response to activation must be (at least):

  1. Lineage-specific
  2. Stimulus-specific

IOWs, different cells must be able to respond differently, and each cell type must respond differently to different stimuli. That gives a wide range of possible gene expression patterns at the transcription level.

The following paper is a good review of the topic:

Selectivity of the NF-κB Response

For example, IL2 is induced by NF-kB activation in T cells, but not in B cells (lineage specific response). Moreover, specific cell types can undergo specific, and often different, cell destinies after NF-kB activation: for example, NF-kB is strongly involved in the control and regulation of T and B cell development.

From:

30 years of NF-κB: a blossoming of relevance to human pathobiology

“B and T lymphocytes induce NF-κB in adaptive immune responses through the CARD11:Bcl10:MALT1 (CBM) complex (Hayden and Ghosh, 2008). Newly expressed genes promote lymphocyte proliferation and specific immune functions including antibody production by B cells and the generation of cytokines and other anti-pathogen responses by T cells.”

And, in the same cell type, certain promoters regulated by NF-kB require additional signaling (for example, in human dendritic cells the promoters for Il6, Il12b, and MCP-1 require additional p38-dependent histone phosphorylation to be activated), while others can be activated directly (stimulus-specific response).

So, to sum up:

  1. A variety of stimuli can activate the system in different ways
  2. The system itself has its complexity (different dimers)
  3. The response can be widely different, according to the cell type where it happens, and to the type of stimuli that have activated the system, and probably according to other complex variables.
  4. The possible responses include a wide range of regulations of inflammation, of the immune system, of cell specifications or modifications, and so on.

How does it work?

So, what do we know about the working of such a system?

I will ignore, for the moment, the many complexities of the activation pathways, both canonical and non canonical, the role of cytokines and receptors and IKK complexes, the many facets of NEMO and of the involvement of the ubiquitin system.

For simplicity, we will start with the activated system: the IkB inhibitor has been released from the inactive complex in the cytoplasm, and some form of NF-kB dimer is ready to migrate to the nucleus.

Let’s remember that the purpose of this OP is to show that the system works as a dynamic, far from equilibrium system, rather than as a “traditional” machine. And that such a way to work is an even more amazing example of design and functional complexity.

To do that, I will rely mainly on the recent paper quoted at the beginning:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The paper is essentially about the NF-kB Target Selection Puzzle. IOWs, it tries to analyze what we know about the specificity of the response. How are specific patterns of transcription achieved after the activation of the system? What mechanisms allow the selection of the right genes to be transcribed (the targets) to implement the specific patterns according to cell type, context, and type of stimuli?

A “traditional” view of the system as a machine would try to establish rather fixed connections. For example, some type of dimer is connected to specific stimuli, and evokes specific gene patterns. Or some other components modulate the effect of NF-kB, generating diversification and specificity of the response.

Well, those ideas are not completely wrong. In a sense, the system does work also that way. Dimer specificity has a role. Other components have a role. In a sense, but only in a sense, the system works as though it were a traditional machine, and uses some of the mechanisms that we find in the concept of a traditional biological machine.

But that is only a tiny part of the real thing.

The real thing is that the system really works as a dynamic, far from equilibrium system, harnessing huge random/stochastic components to achieve robustness and complexity and flexibility of behavior in spite of all those non finalistic parts.

Let’s see how that happens, at least for the limited understanding we have of it. It is important to consider that this is a system that has been studied a lot, for decades, because of its central role in so many physiological and pathological contexts, and so we know many things. But still, our understanding is very limited, as you will see.

So, let’s go back to the paper. I will try to summarize as simply as possible the main concepts. Anyone who is really interested can refer to the paper itself.

Essentially, the paper analyzes three important and different aspects that contribute to the selection of targets at the genomic level by our TFs (IOWs, our NF-kB dimers, ready to migrate to the nucleus). As the title itself summarizes, they are:

  1. Abundance
  2. Affinity
  3. Binding site availability

1. Abundance

Abundance here refers to two different variables: abundance of NF-kB Binding Sites in the genome and abundance of Nucleus-Localized NF-kB Dimers. Let’s consider them separately.

1a) Abundance of NF-kB Binding Sites in the genome:

It is well known that TFs bind specific sites in the genome. For NF-kB TFs, the following consensus kB site pattern has been found:

 5′-GGGRNWYYCC-3′

where R, W, Y, and N, respectively denote purine, adenine or thymine, pyrimidine, and any nucleotide.

That simply means that any sequence corresponding to that pattern in the genome can, in principle, bind NF-kB dimers.

So the problem is: how many such sequences do exist in the human genome?

Well, a study based on RelA has estimated about 10^4 consensus sequences in the whole genome, but as NF-kB dimers seem to bind even incomplete consensus sites, the total number of potential binding sites could be nearer to 10^6.
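For those who like to see this concretely, the IUPAC pattern above can be turned into a regular expression and used to scan a sequence. This is only an illustrative sketch (one strand, exact consensus matches only; the toy sequence is invented):

```python
import re

# IUPAC codes in the kB consensus GGGRNWYYCC:
# R = A/G, N = any base, W = A/T, Y = C/T
KB_CONSENSUS = re.compile(r"GGG[AG][ACGT][AT][CT][CT]CC")

def find_kb_sites(seq):
    # Return start positions of exact consensus matches on one strand.
    # A real genome scan would also check the reverse complement and,
    # as the paper notes, tolerate incomplete matches.
    return [m.start() for m in KB_CONSENSUS.finditer(seq)]

# Toy sequence with one embedded consensus site (GGGAAATTCC fits the pattern)
toy = "ACGTGGGAAATTCCACGT"
print(find_kb_sites(toy))  # [4]
```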

1b) Abundance of Nucleus-Localized NF-kB Dimers:

An estimate of the abundance of dimers in the nucleus after activation of the system is that about 1.5 × 10^5 molecules can be found, but again that is derived from studies about RelA only. Moreover, the number of molecules and type of dimer can probably vary much according to cell type.

So, the crucial variable, that is the ratio between binding sites and available dimers, which could help understand the degree of site saturation in the nucleus, remains rather undecided, and it seems very likely that it can vary a lot in different circumstances.
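A back-of-envelope calculation with the order-of-magnitude figures quoted above (both RelA-based estimates) shows why the question stays open: depending on which count of sites one accepts, the dimers are either in large apparent excess or far from saturating the genome.

```python
# Order-of-magnitude figures quoted above (both from RelA-based studies)
n_dimers = 1.5e5        # nucleus-localized NF-kB molecules after activation
sites_strict = 1e4      # full consensus kB sites in the genome
sites_loose = 1e6       # including incomplete consensus sites

ratio_strict = n_dimers / sites_strict  # ~15 dimers per site: apparent excess
ratio_loose = n_dimers / sites_loose    # ~0.15: far from saturating all sites
print(ratio_strict, ratio_loose)
```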

But there is another very interesting aspect about the concentration of dimers in the nucleus. According to some studies, NF-kB seems to generate oscillations of its nuclear content in some cell types, and those oscillations can be a way to generate specific transcription patterns:

NF-kB oscillations translate into functionally related patterns of gene expression

For example, this very recent paper :

NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

shows in Fig. 3 the occupancy curve of binding sites at the nuclear level after NF-kB activation in two different cell types.

In fibroblasts, the curve is a periodic oscillation, with a frequency that varies according to various factors, and translates into different transcription scenarios accordingly:

Gene expression dynamics scale with the period (g1) and amplitude (g2) of these oscillations, which are influenced by variables such as signal strength, duration, and receptor identity.


In macrophages, instead, the curve is rather:

a single, strong nuclear translocation event which persists for as long as the stimulus remains and tends to remain above baseline for an extended period of time.

In this case, the type of transcription will probably be regulated by the area under the curve, rather than by the period and amplitude of the oscillations, as happens in fibroblasts.
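The two nuclear time courses described above can be sketched with toy functions: a damped oscillation for the fibroblast-like case (where period and amplitude carry the signal) and a sustained plateau for the macrophage-like case (where the area under the curve carries it). Every parameter here is illustrative, chosen only to mimic the qualitative shapes:

```python
import math

def fibroblast_like(t, period=100.0, amplitude=1.0):
    # Damped oscillation: the signal lives in the period and amplitude.
    return amplitude * math.exp(-t / 500.0) * max(0.0, math.sin(2 * math.pi * t / period))

def macrophage_like(t, plateau=0.6, stimulus_end=400.0):
    # Single sustained translocation: the signal lives in the area under the curve.
    if t < stimulus_end:
        return plateau
    return plateau * math.exp(-(t - stimulus_end) / 100.0)

# Crude area under the curve by the rectangle rule
dt = 1.0
times = [i * dt for i in range(600)]
auc_fib = sum(fibroblast_like(t) for t in times) * dt
auc_mac = sum(macrophage_like(t) for t in times) * dt
print(auc_fib < auc_mac)  # True for these illustrative parameters
```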

Interestingly, while in previous studies it seemed that the concentration of nuclear dimers could be sufficient to saturate most or all binding sites, that has been found not to be the case in more recent studies. Again from the paper about abundance:

in fact, this lack of saturation of the system is necessary to generate stimulus- and cell-type specific gene expression profiles

Moreover, the binding itself seems to be rather short-lived:

Interestingly, it is now thought that most functional NF-kB interactions with chromatin—interactions that lead to a change in transcription—are fleeting… a subsequent study using FRAP in live cells expressing RelA-GFP showed that most RelA-DNA interactions are actually quite dynamic, with half-lives of a few seconds… Indeed, a recent study used single-molecule tracking of individual Halo-tagged RelA molecules in live cells to show that the majority (∼96%) of RelA undergoes short-lived interactions lasting on average ∼0.5 s, while just ∼4% of RelA molecules form more stable complexes with a lifetime of ∼4 s.
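Taking the single-molecule figures in this quote at face value, the average residence time of a RelA molecule on DNA is simply the weighted mean of the two populations:

```python
# Single-molecule figures from the quote above: ~96% of RelA molecules bind
# for ~0.5 s on average, while ~4% form more stable ~4 s complexes.
fraction_short, t_short = 0.96, 0.5
fraction_long, t_long = 0.04, 4.0

mean_residence = fraction_short * t_short + fraction_long * t_long
print(round(mean_residence, 2))  # 0.64 seconds on average
```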

2. Affinity

Affinity of dimers for DNA sequences is not a clear cut matter. From the paper:

Biochemical DNA binding studies of a wide variety of 9–12 base-pair sequences have revealed that different NF-kB dimers bind far more sequences than previously thought, with different dimer species exhibiting specific but overlapping affinities for consensus and non-consensus kB site sequences.

IOWs, we have different dimers (15 different types) binding with varying affinity different DNA sequences (starting from the classical consensus sequence, but including also incomplete sequences). Remember that those sequences are rather short (the consensus sequence is 10 nucleotides long), and that there are thousands of such sequences in the genome.

Moreover, different bindings can affect transcription differently. Again, from the paper:

How might different consensus kB sites modulate the activity of the NF-kB dimers? Structure-function studies have shown that binding to different consensus kB sites can alter the conformation of the bound NF-kB dimers, thus dictating dimer function. When an NF-kB dimer interacts with a DNA sequence, side chains of the amino acids located in the DNA-binding domains of dimers contact the bases exposed in the groove of the DNA. For different consensus kB site sequences different bases are exposed in this groove, and NF-kB seems to alter its conformation to maximize interactions with the DNA and maintain high binding affinity. Changes in conformation may in turn impact NF-kB binding to co-regulators of transcription, whether these are activating or inhibitory, to specify the strength and dynamics of the transcriptional response. These findings again highlight how the huge array of kB binding site sequences must play a key role in modulating the transcription of target genes.

Quite a complex scenario, I would say!

But there is more:

Finally, as an additional layer of dimer and sequence-specific regulation, each of the subunits can be phosphorylated at multiple sites with, depending on the site, effects on nearly every step of NF-kB activation.

IOWs, the 15 dimers we have mentioned can be phosphorylated in many different ways, and that changes their binding affinities and their effects on transcription.
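To get a feel for how fast these layers multiply, here is a purely illustrative count. The number of phosphorylation sites per dimer is hypothetical (the paper only says “multiple sites”), but any such number grows the state space exponentially:

```python
# Purely illustrative combinatorics; n_sites is hypothetical, since the paper
# only says each subunit can be phosphorylated at "multiple sites".
n_sites = 5                      # hypothetical phospho-sites per dimer
states_per_dimer = 2 ** n_sites  # each on/off site doubles the state count
n_dimer_species = 15             # the 15 dimer species mentioned above
total_states = n_dimer_species * states_per_dimer
print(states_per_dimer, total_states)  # 32 480
```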

This section of the paper ends with a very interesting statement:

Overall, when considering the various ways in which NF-kB dimer abundances and their affinity for DNA can be modulated, it becomes clear that with these multiple cascading effects, small differences in consensus kB site sequences and small a priori differences in interaction affinities can ultimately have a large impact on the transcriptional response to NF-kB pathway activation.

Emphasis mine.

This is interesting, because in some way it seems to suggest that the whole system acts like a chaotic system, at least at some basic level. IOWs, small initial differences, maybe even random noise, can potentially have a deep effect on the general working of the whole system.

Unless, of course, there is some higher, powerful level of control.

3. Availability of high affinity kB binding sequences

We have seen that there is a great abundance and variety of binding sequences for NF-kB dimers in the human genome. But, of course, those sequences are not necessarily available. Different cell types will have different scenarios of binding site availability.

Why?

Because, as we know, the genome and chromatin are a very dynamic system, which can exist in many different states, continuously changing in different cell types and, in the same cell type, in different conditions.

We know rather well the many levels of control that affect DNA and chromatin state. In brief, they are essentially:

  1. DNA methylation
  2. Histone modifications (methylation, acetylation, etc)
  3. Chromatin modifications
  4. Higher levels of organization, including nuclear localization and TADs (Topologically Associating Domains)

For example, from the paper:

The promoter regions of early response genes have abundant histone acetylation or trimethylation prior to stimulation [e.g., H3K27ac, (67) and H4K20me3, (66)], a chromatin state “poised” for immediate activation… In contrast, promoters of late genes often have hypo-acetylated histones, requiring conformational changes to the chromatin to become accessible. They are therefore unable to recruit NF-kB for up to several hours after stimulation (68), due to the slow process of chromatin remodeling.

We must remember that each wave of NF-kB activation translates into the modified transcription of a lot of different genes at the genome level. It is therefore extremely important to consider which genes are available (IOWs, their promoters can be reached by the NF-kB signal) in each cell type and cell state.

The paper concludes:

Taken together, chromatin state and chromatin organization strongly influence the selection of DNA binding sites by NF-kB dimers and, most likely, the selection of the target genes that are regulated by these protein-DNA interaction events. Analyses that consider binding events in the context of three-dimensional nuclear organization and chromatin composition will be required to generate a more accurate view of the ways in which NF-kB-DNA binding affects gene transcription.

This is the main scenario. But there are other components, that I have not considered in detail for the sake of brevity, for example competition between NF-kB dimers and the complex role and intervention of other co-regulators of transcription.

Does the system work?

But does the system work?

Of course it does. It is a central regulator, as we have said, of many extremely important biological processes, above all immunity. This is the system that decides how immune cells, T and B lymphocytes, have to behave, in terms of cell destiny and cell state. It is of huge relevance in all inflammatory responses, and in our defense against infections. It works, it works very well.

And what happens if it does not work properly?

Of course, like all very complex systems, errors can happen. Those interested can have a look at this recent paper:

30 years of NF-κB: a blossoming of relevance to human pathobiology

First of all, many serious genetic diseases have been linked to mutations in genes involved in the system. You can find a list in Table 1 of the above paper. Among them, for example, some forms of SCID, Severe combined immunodeficiency, one of the most severe genetic diseases of the immune system.

But, of course, a dysfunction of the NF-kB system has a very important role also in autoimmune diseases and in cancer.

Conclusions.

So, let’s try to sum up what we have seen here in the light of the original statement about biological systems that “are not machines”.

The NF-kB system is a perfect example. Even if we still understand very little of how it works, it is rather obvious that it is not a traditional machine.

A traditional machine would work differently. The signal would be transmitted from the membrane to the nucleus in the simplest possible way, without ambiguities and diversions. The Transcription Factor, once activated, would bind, at the level of the genome, very specific sites, each of them corresponding to a definite cascade of specific genes. The result would be clear cut, almost mechanical. Like a watch.

But that’s not the way things happen. There are myriads of variations, of ambiguities, of stochastic components.

The signal arrives at the membrane in multiple ways, very different from one another: IL1, IL17, TNF, bacterial LPS, and immune activation of the B cell receptor (BCR) or the T cell receptor (TCR) are all possible signals.

The signal is translated to the NF-kB proteins in very different ways: canonical or non canonical activation, involving complex protein structures such as:

The CBM signalosome, intermediate between immune activation of BCR or TCR and canonical activation of the NF-kB. This complex is made of at least three proteins, CARD11, Bcl10 and MALT1.

The IKK complex in canonical activation: this is made of three proteins, IKK alpha, IKK beta, and NEMO. Its purpose is to phosphorylate IkB, the inhibitor of the dimers, so that it can be ubiquitinated and released from the dimer. Then the dimer can relocate to the nucleus.

Non canonical pathway: it involves the following phosphorylation cascade: NIK -> IKK alpha dimer -> RelB-p100 dimer -> RelB-p52 dimer (the final TF). It operates during the development of lymphoid organs and is responsible for the generation of B and T lymphocytes.

Different kinds of activated dimers relocate to the nucleus.

Different dimers, in varying abundance, interact with many different binding sites: complete or incomplete consensus sites, and probably others. The interaction is usually brief, and it can generate an oscillating pattern, or a more stable pattern.

Completely different sets of genes are transcribed in different cell types and in different contexts, because of the interaction of NF-kB TFs with their promoters.

Many other factors and systems contribute to the final result.

The chromatin state of the cell at the moment of the NF-kB activation is essential to determine the accessibility of different binding sites, and therefore the final transcription pattern.

All these events and interactions are quick, unstable, far from equilibrium. A lot of possible random noise is involved.

In spite of that amazing complexity and the potentially stochastic nature of the system, reliable transcription regulation and results are obtained in most cases. Those results are essential to immune cell differentiation, immune response, both innate and adaptive, inflammation, apoptosis, and many other crucial cellular processes.

So, let’s go back to our initial question.

Is this the working of a machine?

Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.

But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.

It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.

It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

It is, from all points of view, amazing.

Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.

And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer must it be for a system like NF-kB? Or, for that matter, for any complex cellular system?

Do you still have any doubts?

Added graphic: The evolutionary history, in terms of human conserved information, of the three proteins in the CBM signalosome.
On the y axis, homologies with the human protein as bits per aminoacid (bpa). On the x axis, approximate time of appearance in millions of years.
The graphic shows the big information jump in vertebrates for all three proteins, especially CARD11.


Added graphic: two very different proteins and their functional history


Added graphic (for Bill Cole). Functional history of Prp8, collagen, p53.
Comments
Did professor Art Hunt patent his brilliant “tornado” argument yet? Does he teach his “tornado” argument at UKY? :)

jawa
September 18, 2019, 03:20 AM PDT
KF: "Now, ponder why in 90 years since Oparin et al, we are still stuck in speculative just so story myth-making like this." Please, be compassionate! It's really difficult to defent a theory which cannot be defended. As we have seen with the "arguments" at PS. :)gpuccio
September 18, 2019
September
09
Sep
18
18
2019
01:16 AM
1
01
16
AM
PDT
OLV: From the second paper you quote at #625:
How broadly expressed repressors regulate gene expression is incompletely understood. To gain insight, we investigated how Suppressor of Hairless—Su(H)—and Runt regulate expression of bone morphogenetic protein (BMP) antagonist short-gastrulation via the sog_Distal enhancer. A live imaging protocol was optimized to capture this enhancer’s spatiotemporal output throughout the early Drosophila embryo, finding in this context that Runt regulates transcription initiation, Su(H) regulates transcription rate, and both factors control spatial expression. Furthermore, whereas Su(H) functions as a dedicated repressor, Runt temporally switches from repressor to activator. Our results demonstrate that broad repressors play temporally distinct roles and contribute to dynamic gene expression. Both Run and Su(H)’s ability to influence the spatiotemporal domains of gene expression may serve to counterbalance activators and function in this manner as important regulators of the maternal-to-zygotic transition in early embryos.
Perfectly appropriate to this OP and thread! :)

gpuccio
September 18, 2019, 01:14 AM PDT
PavelU: :)

gpuccio
September 18, 2019, 01:11 AM PDT
OLV: No, I have not yet discussed translation in detail, especially at the ribosomal level. Maybe in the future. The paper you point to seems very interesting. I will read it with great attention. Thank you! :)

gpuccio
September 18, 2019, 01:08 AM PDT
How to “Run” embryonic development
https://thenode.biologists.com/how-to-run-embryonic-development/research/
Distinct Roles of Broadly Expressed Repressors Support Dynamic Enhancer Action and Change in Time
https://www.sciencedirect.com/science/article/pii/S2211124719308368?via%3Dihub

OLV
September 17, 2019, 07:36 PM PDT
The below paper resolves the FI jumps and the OoL issue. Game over. ID fans should look for other things to be interested in.

Do you really believe this?

bill cole
September 17, 2019, 07:51 AM PDT
PU, lessee:
Abstract We argue for [--> speculative, not a demonstration with solid empirical basis] the existence of an RNA sequence, called the AL (for ALpha) sequence, which may have played a role at the origin of life [--> Having invented the character, you stage the play, and the script]; this role entailed [--> galloping hypothesis, a speculative scenario is now projected as a factual premise implying conclusions] the AL sequence helping generate the first peptide assemblies via a primitive network [--> a speculation now suggested to be historic OoL fact] . These peptide assemblies included “infinite” proteins [--> evading the information and organisation challenge for functional systems using protein nanomachines]. The AL sequence was constructed on an economy principle [--> this is now alleged history] as the smallest RNA ring having one representative of each codon’s synonymy class and capable of adopting a non-functional but nevertheless evolutionarily [--> magic word] stable hairpin form that resisted [--> speculation treated as fact] denaturation due to environmental changes in pH, hydration, temperature, etc. Long subsequences from the AL ring resemble sequences from tRNAs and 5S rRNAs of numerous species like the proteobacterium, Rhodobacter sphaeroides. Pentameric subsequences from the AL are present more frequently than expected in current genomes, in particular, in genes encoding [--> codes and algorithms, empirically, come from what known adequate cause, please? And more broadly aren't codes manifestations of LANGUAGE, a strong sign of intelligent, purposeful action?] some of the proteins associated with ribosomes like tRNA synthetases. Such relics may help explain [--> speculation turned "fact" now "explains" actual observations] the existence of universal sequences like exon/intron frontier regions, Shine-Dalgarno sequence (present in bacterial and archaeal mRNAs), CRISPR and mitochondrial loop sequences.
See the chain of fallacies? Now, ponder why in 90 years since Oparin et al, we are still stuck in speculative just-so story myth-making like this. KF
kairosfocus
September 17, 2019, 04:31 AM PDT
The below paper resolves the FI jumps and the OoL issue. Game over. ID fans should look for other things to be interested in. Here’s the recent paper: Emergence of a “Cyclosome” in a Primitive Network Capable of Building “Infinite” Proteins https://www.mdpi.com/2075-1729/9/2/51
PavelU
September 17, 2019, 03:52 AM PDT
GP, This paper may confirm that the fascinating topic of functional complexity and complex functionality associated with proteins is far from being wrapped up. Apparently there’s much work to be done in this area of research. Or at least that’s my impression. This recent paper may try to say something interesting:
Nervous-Like Circuits in the Ribosome: Facts, Hypotheses and Perspectives
Youri Timsit and Daniel Bennequin
Int. J. Mol. Sci. 2019, 20(12), 2911
https://www.mdpi.com/1422-0067/20/12/2911/htm
In the past few decades, studies on translation have converged towards the metaphor of a “ribosome nanomachine”; they also revealed intriguing ribosome properties challenging this view. Many studies have shown that to perform an accurate protein synthesis in a fluctuating cellular environment, ribosomes sense, transfer information and even make decisions. This complex “behaviour” that goes far beyond the skills of a simple mechanical machine has suggested that the ribosomal protein networks could play a role equivalent to nervous circuits at a molecular scale to enable information transfer and processing during translation. We analyse here the significance of this analogy and establish a preliminary link between two fields: ribosome structure-function studies and the analysis of information processing systems. This cross-disciplinary analysis opens new perspectives about the mechanisms of information transfer and processing in ribosomes and may provide new conceptual frameworks for the understanding of the behaviours of unicellular organisms. “in unicellular organisms, protein-based circuits act in place of a nervous system to control the behaviour” “because of the high degree of interconnection, systems of interacting proteins act as neural networks [...] to respond appropriately to patterns of extracellular stimuli” “the wiring of these networks depends on diffusion-limited encounters between molecules and for this and other reasons, they have unique features not found in conventional computer-based neural network”. The recent analysis of r-protein networks in the ribosomes of the three kingdoms [2] updates and further enhances this intriguing hypothesis. r-protein networks form complex circuits that differ from most known protein networks, in that they remain physically interconnected. these networks displayed some features of communication networks and an intriguing functional analogy with sensory-motor circuits found in simple organisms. 
these networks may play at a molecular scale, a role analogous to a sensory-motor nervous system, to assist and synchronize protein biosynthesis during translation. the nerve circuits do not have exactly the same properties that the ribosomal proteins circuits have

Section headings from the paper:
Facts and Current Paradigms
An Extensive Flow of Information
Ribosome Choreography during Protein Biosynthesis
Ribosome Heterogeneity and Open Questions
Hypotheses
Ribosome Behaviour
The r-Protein–Neuron Equivalence
Sensing the Ribosomal Functional Sites
Transferring Information
Molecular Synapses and Wires
Molecular Communication
A New Type of Allostery in r-Protein Networks
Nervous-Like Circuits in the Ribosome?
Number of Nodes, Connectivity and Evolution
Functional Organization
Perspectives

the r-protein networks may have an equivalent function to nervous systems at a nanoscale.

In conclusion, our study proposes that the r-protein networks may have an equivalent function to nervous systems at a nanoscale. These molecular systems are proposed to transfer and integrate the information flow that circulates between the remote functional sites of the ribosome to synchronize ribosome movements and to regulate the protein biosynthesis. Thus, r-proteins may collectively integrate the information taken from distinct sites and similar to a nervous circuit, may help to synchronize the correct tRNA recognition, the tRNA translocation and the growth of the nascent peptide. This hypothesis opens new perspectives in ribosome function, in the evolution of complex systems and in biomimetic technological research of nanoscale information transfer and processing. Considering a collective role of r-proteins may stimulate a new conceptual framework for both conceiving new antibiotics and better understanding the origin of ribosomopathies [86]. For example, mutations that impede the communication pathways such as the W255C [65] may have a general role in translation defects and pathologies.
Inversely, specifically targeting some pathways in bacterial r-protein networks or sub-networks may help to produce new efficient antibiotics. On the other hand, this study stimulates and further characterizes and compares r-protein networks to understand how they have evolved. This would provide precious insights into the evolution of information processing in living organisms. It may also help to understand the complex behaviours of unicellular organisms that may use similar networks to integrate and respond to external stimuli. Finally, understanding the molecular mechanisms of information transmission and processing would constitute the basis for conceiving new computing nano-devices.
OLV
September 17, 2019, 02:58 AM PDT
GP: At the beginning of this OP you cite another OP you wrote over a year ago on the fascinating transcription topic. Have you done an OP on translation?
OLV
September 17, 2019, 02:30 AM PDT
Alexa global internet traffic ranks

Website ............... Rank ....... Top %
G ........................ 1 ...... 0.0001 // Google
AMZN .................... 11 ...... 0.0001 // Amazon
BS ...................... 24 ...... 0.0001 // Blogspot
B ....................... 29 ...... 0.0001 // Bing
WP ...................... 54 ...... 0.0001 // WordPress
BG ..................... 801 ...... 0.001  // Biblegateway
EN ................. 147,621 ...... 1      // EvolutionNews
UD ................. 669,566 ...... 1      // this website
SW ............... 1,323,657 ...... 2      // Sandwalk
PT ............... 2,199,162 ...... 3      // Panda’s Thumb
TSZ .............. 3,056,322 ...... 4      // The Skeptical Zone
PS ............... 7,102,855 ...... 8      // Peaceful Science

BTW, apparently SW was in the top 1% not long ago. At one point PT apparently was in the top 1% too. Why have they dropped so drastically lately? Apparently TSZ was in the top 6% and has improved substantially (up 2%). Apparently PS was in the top 6% but has dropped dramatically (down 2%).
jawa
September 14, 2019, 02:52 AM PDT
PeterA at #616: Interesting article about ERV functions. In the new OP about the immune system, I will probably discuss briefly another important example of (probably) transposon-derived new fundamental proteins, RAG1 and RAG2. :)
gpuccio
September 14, 2019, 12:37 AM PDT
Bill Cole: “I am glad you stuck to your guns or kept your position on this issue.” Well, JS also kept his position on that issue and PS kept its position at the bottom of the ranking. :)

Alexa global internet traffic ranks, 2019-09-09 vs. today:

................ Rank ....... Top % ....... Today
G ................... 1 ...... 0.0001
AMZN ............... 11 ...... 0.0001
BS ................. 24 ...... 0.0001
B .................. 29 ...... 0.0001
WP ................. 54 ...... 0.0001
BG ................ 801 ...... 0.001
EN ............ 144,044 ...... 1 ....... 146,408
UD ............ 641,186 ...... 1 ....... 654,747
SW .......... 1,341,960 ...... 2 ....... 1,271,258
PT .......... 2,108,860 ...... 3 ....... 2,117,137
TSZ ......... 3,329,139 ...... 4 ....... 3,054,286
PS .......... 7,074,812 ...... 8 ....... 7,099,106

The last few days SW and TSZ had substantial improvements in the ranking. The rest got worse.
jawa
September 13, 2019, 01:36 AM PDT
GP: Here’s an interesting article in the EN website that cites papers related to something you’ve referred to before: https://evolutionnews.org/2019/09/waste-not-research-finds-that-far-from-junk-dna-ervs-perform-critical-cellular-functions/
PeterA
September 13, 2019, 01:28 AM PDT
Very good GP. I am glad you stuck to your guns or kept your position on this issue.
bill cole
September 10, 2019, 04:46 PM PDT
To all here: A few more words about tornadoes and the role of necessity. Dembski in his explanatory filter has explained very well that necessity must be reasonably excluded as a possible cause of what we observe, if we want to make a safe design inference. Speaking from the point of view of FI, I would like to stress again that FI is a measure of the improbability of obtaining a configuration implementing the function we are observing, as a result of the probability distributions that are working in the system. In that sense, necessity can be seen as the negation of FI: if some configuration is generated in the system as a result of well describable necessity laws, it means essentially that the probability of that configuration is 1. IOWs, there are no alternatives to that configuration in the system. Therefore, the FI is zero (target space = 1; search space = 1; -log2 of 1/1 = 0). In a stochastic system, like the weather system on our planet, we can describe in part the evolution of the system by necessity laws, but there are also random configurations that emerge, and that can only be described by probability distributions. The important point is: any computation of FI refers only to the probabilistic component, because, as said, FI is zero when a necessity law acts and can explain what we observe. So, in the case of tornadoes, the point is: how much is necessity, how much is probabilistic? As said, I am not a meteorologist. But I believe that we can safely say that many events in the weather system have a strong necessity component, even if they cannot be explained completely by necessity, because many random variables are at play, too. However, weather forecasting is a good science, and in many cases successful. Rains, winds, pressures, temperatures, can rather well be anticipated, in a certain measure. Tornadoes are more difficult, certainly. They are, essentially, a special kind of order, destructive order, that can be generated in the system.
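The FI arithmetic described in the comment above can be sketched in a few lines (a minimal illustration of the -log2 ratio; the 500-bit case simply uses the threshold discussed throughout this thread):

```python
import math

def functional_information(target_space: int, search_space: int) -> float:
    """FI in bits: -log2 of the fraction of genuinely reachable
    states that implement the observed function."""
    return -math.log2(target_space / search_space)

# Pure necessity: only one configuration is possible, so FI is zero
# (target space = 1; search space = 1; -log2 of 1/1 = 0).
assert functional_information(1, 1) == 0

# A function needing 500 specific bits: one functional target among
# 2**500 equiprobable alternative configurations.
assert functional_information(1, 2 ** 500) == 500
```

Note that this only measures the probabilistic component: wherever a necessity law fixes the outcome, the ratio collapses to 1 and the FI contribution is zero.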
Again, not being a meteorologist, I quote, this time from a page from the National Geographic: https://www.nationalgeographic.com/environment/natural-disasters/tornadoes/
Tornadoes are vertical funnels of rapidly spinning air. ... Also known as twisters, tornadoes are born in thunderstorms and are often accompanied by hail. Giant, persistent thunderstorms called supercells spawn the most destructive tornadoes. These violent storms occur around the world, but the United States is a major hotspot with about a thousand tornadoes every year. ... What causes tornadoes? The most violent tornadoes come from supercells, large thunderstorms that have winds already in rotation. About one in a thousand storms becomes a supercell, and one in five or six supercells spawns off a tornado. ... Although they can occur at any time of the day or night, most tornadoes form in the late afternoon. By this time the sun has heated the ground and the atmosphere enough to produce thunderstorms. Tornadoes form when warm, humid air collides with cold, dry air. The denser cold air is pushed over the warm air, usually producing thunderstorms. The warm air rises through the colder air, causing an updraft. The updraft will begin to rotate if winds vary sharply in speed or direction. As the rotating updraft, called a mesocyclone, draws in more warm air from the moving thunderstorm, its rotation speed increases. Cool air fed by the jet stream, a strong band of wind in the atmosphere, provides even more energy. Water droplets from the mesocyclone's moist air form a funnel cloud. The funnel continues to grow and eventually it descends from the cloud. When it touches the ground, it becomes a tornado.
Well, I would say that is a good explanation. Certainly, it does not explain everything. But it gives a good idea: certain conditions that occur rather often in the system, for example storms, generate tornadoes with a well known probability. "About one in a thousand storms becomes a supercell, and one in five or six supercells spawns off a tornado." Most of this is the result of well understood necessity laws operating in the system. The probability depends of course on some random variables: a storm must be there; temperatures, winds and other variables must form some specific configuration which can generate the tornado. But that specific configuration is not unlikely at all. Indeed, it happens about 1000 times a year in the US. Therefore, any attempt to interpret tornadoes as extremely unlikely events, considering all possible states of water molecules, is simply wrong. Water molecules, in the weather system, are simply constrained by necessity laws, at least for the most part. What we must consider is the probability of the macrostates in the weather that are associated with tornadoes, and those macrostates are not unlikely at all. Therefore, tornadoes have extremely low FI, and require no design inference. IOWs, we need no tornado engineers. Now, am I criticizing Art's analysis of tornadoes, while making the same errors for proteins and other biological objects? Absolutely not. You see, it should be clear at this point that FI is about the probability of the target space linked to the function. In general, we compute it as -log2 of the ratio target space/search space. But the point is: the search space must be a true search space. IOWs, it must be the set of all possible states that are really available to the system, possibly with some grossly comparable probability.
When strong necessity laws largely determine the outcome, as in the case of weather and tornadoes, the search space is not so big, because only a few states are really available to the system: those compatible with the necessity laws operating in its evolution. Water molecules have to follow those laws, to respect those constraints. Again, here it's the macrostates that count, because the microstates are largely constrained. So, a wind is not free to go in any direction. Rain cannot fall if clouds are not there. And so on. Is the system of protein coding genes the same type of system? Not at all. Let's see. Our system, whatever it is, will be essentially a pool of reproducing organisms with a genome. What we call a population. Now, let's say that the population has a definite genome, with its variability, and that the organisms reproduce themselves. What is the first necessity law that works here? It is the simple fact that an organism reproduces itself by duplicating its genome, as precisely as possible. Why? Not because it is a law of nature, but because the organism is programmed to do exactly that thing. But, of course, we observe RV in reproducing organisms. It is the cause of novelty; otherwise genomes would remain essentially the same, with possibly some recombination (HGT, sexual reproduction). RV generates novelty. Now, RV has many forms, but in the end it changes something in the genome. The new genome changes a little. Some difference is generated, because some error takes place in the genome duplication. Now, let's consider protein coding genes. RV can change a nucleotide. That is the simplest case, probably the most common. Let's say that it is a mutation, not an indel, so the only change at protein level is, possibly, that one AA changes, if the mutation is not synonymous. OK? Now, the simple question is: is that variation constrained by strong necessity laws? And the answer is: no.
Of course there are necessity laws at play, and of course some variations are slightly more likely. The rate of variation can be different for different parts of the genome, and so on. But all these considerations do not change the very simple fact: if we observe RV occurring in a protein coding gene, and we assume that no NS occurs, we are considering a random walk that, in the beginning, will explore just the sequence space that is nearer to the original sequence. But, after a few attempts, we are in the ocean of possible sequences, and there is really no known necessity law that can favour some sequences over others. Especially if you consider that the necessity laws that cause the variation are acting on nucleotides, and that the functional result is instead an AA sequence, connected to the gene sequence only by the genetic code, which is certainly not known to the biochemical laws that operate the variations as errors in genome duplications. So, the point is: the space of all possible AA configurations for that sequence is really a search space, completely accessible by the system in all its parts as far as only RV is considered. There is no biochemical law that excludes any possible sequence, and for all practical purposes we can consider all possible sequences that are unrelated to the initial state as similarly probable. So, it is perfectly correct to use the space of all possible sequences as a search space available to the random system we are considering. These are not water molecules, constrained by well known necessity laws and macrostates. The different sequences are not constrained, at a probabilistic level. The only possible constraint here could be NS. Again, as I have always said, NS must be considered separately. But let's see briefly how it can act. If we start from an initial sequence that is functional, NS can only constrain change, and favour the conservation of the sequence. That's exactly what negative NS does in functional proteins.
It is also the foundation of my procedure to estimate FI. In some rare cases, if there is some space for optimization of the existing function, NS can favour that process. A process that, as discussed many times in detail, is severely limited in all known cases, involving only a few AAs at most. However, optimization of an existing function is not generation of a new complex function. So, in general, negative NS operates against evolution. It just preserves what is already there, and is already functional. But to generate a new function, we must leave what already exists. That's why the origin of new protein families or superfamilies is often better conceived as happening in non functional sequences of the genome. Because there negative selection cannot work. What about positive NS? Of course, if we have a new complex function ready and operating, and if it gives a detectable reproductive advantage, it will possibly be positively selected. Fixed. But our new function, in the case we are discussing, is complex. Let's say that at least 500 specific bits must be found to implement that function, even at its simplest level. How can that function ever appear? Positive NS cannot act until the function appears. OK, if we are neo-darwinists (ehm... design deniers) we can dream that each single AA that is necessary to the new function can be positively selected for other reasons: it gives some increase in fitness, or it just gets lucky in the lottery of genetic drift! OK. It's possible. But it is equally possible for all other possible variations, those variations that do not lead to our new protein, those variations that essentially lead nowhere, in the ocean of non functional sequences. Which, of course, are always extremely more likely than any functional one. So, why should we get lucky?
Why should the intermediate bits of our final function be selected, if there is nothing special in them, nothing that makes them different from all other possible bits of variation, except that we know that, when 500 of them will be exactly what is needed, a new useful function will be there? That dream is the dream of getting Excel from Word, continuing to sell a constantly improved Word. One byte at a time. Good luck.
gpuccio
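As a rough sense of scale for the "ocean of possible sequences" described in the comment above (the 150-residue length is a hypothetical, illustrative choice, not a figure from the thread):

```python
import math

AA_ALPHABET = 20   # standard amino acids
length = 150       # hypothetical protein length, for illustration

# Raw sequence space: 20**150 possible sequences of this length.
# Expressed in bits, that is length * log2(20) ≈ 648 bits, already
# beyond the 500-bit threshold used in this discussion.
bits = length * math.log2(AA_ALPHABET)
print(round(bits, 1))  # 648.3
```

This is only the size of the unconstrained search space; the FI of a real protein depends on how large its functional target space is within it.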
September 10, 2019, 03:39 PM PDT
Gpuccio
Can you go from Word to Excel, using just small changes, say one byte at a time, so that each time you can sell the existing software better, because it has become more efficient? That’s exactly what deconstructing a complex function into small naturally selectable steps would be. No surprise they can’t succeed!
Agree. They underestimate the problem that the observed AA sequences bring to selection and drift innovating. The chances are astronomically higher that they will de-innovate. :-)
bill cole
September 10, 2019, 10:00 AM PDT
Bill Cole and others: As I often say: Can you go from Word to Excel, using just small changes, say one byte at a time, so that each time you can sell the existing software better, because it has become more efficient? That's exactly what deconstructing a complex function into small naturally selectable steps would be. No surprise they can't succeed!
gpuccio
September 10, 2019, 08:51 AM PDT
Bill Cole: "I have not seen empirical evidence that these pathways exist across protein families." Neither have I. All known examples of NS are very short optimizations, usually of some degradation of existing complex proteins, as in antibiotic resistance, as well explained by Behe, who seems to be so despised at PS that I could hardly mention his name without being reprimanded! :)
gpuccio
September 10, 2019, 08:47 AM PDT
Gpuccio
If they want to go on dreaming that complex proteins can be deconstructed into naturally selectable steps, they just have to show that this is true. I am not aware of those evolutionary pathways, nor of any reasonable motive why they should exist, if not in the imagination of our interlocutors.
They would have to be designed to exist, IMO, given the size of the sequence space. I have not seen empirical evidence that these pathways exist across protein families.
bill cole
September 10, 2019, 06:49 AM PDT
Mike1962, “Blind Watchmaker Devotees. BWDs for short.” :)
jawa
September 9, 2019, 10:45 PM PDT
Gpuccio: Don’t know well how to name them as a whole: design deniers? Blind Watchmaker Devotees. BWDs for short. Blind Evolution Devotees. BEDs for short. Maybe others can suggest some more candidates.
mike1962
September 9, 2019, 03:32 PM PDT
GP: Here's a list of articles on the topic "Analysis of RNA Polymerase II complexes". Is this related to the current topic in this discussion? Perhaps you cited some of these articles before.
Role of integrative structural biology in understanding transcriptional initiation
Functional assays for transcription mechanisms in high-throughput
Full list.
OLV
September 9, 2019, 02:18 PM PDT
GP @600:
1) How can low but non trivial levels of FI arise in the extremely complex and organized protein engineering system which is the immune system? My new OP is meant to propose some detailed analysis of that question. 2) How can a complex and extremely efficient protein engineering system like the immune system arise in a system (the biological system) which has no tools to engineer it? You, like me, certainly know that there is only one reasonable answer to that question.
I like the two questions and answers. And I agree.
OLV
September 9, 2019, 10:31 AM PDT
Bill Cole @601: I see your point. Let's wait and see. Thanks.
jawa
September 9, 2019, 10:24 AM PDT
Very interesting discussion here.
PaoloV
September 9, 2019, 10:19 AM PDT
Bill Cole: "Joe Felsenstein are arguing for lots of FI being generated by evolution in small increments." Again! Just to clarify for the nth time. FI refers to the improbability of one single function to arise by purely random effects in the system we are considering. So, it is the number of specific bits necessary for that single function to be present. The function is considered as non-deconstructable into simpler selectable steps, because if we introduce selection the evaluation must be different. Complex functions are not, as a rule, deconstructable into simpler increasingly functional steps. Least of all into naturally selectable steps. If they want to go on dreaming that complex proteins can be deconstructed into naturally selectable steps, they just have to show that this is true. I am not aware of those evolutionary pathways, nor of any reasonable motive why they should exist, if not in the imagination of our interlocutors. If some naturally selectable intermediate can be shown to exist, it's not a problem. As I have explained, the FI of a deconstructable protein is more or less equal to the FI of the most complex step in the deconstruction. So, given a real deconstruction, it's rather easy to update the computation of FI. It should be clear that our interlocutors have a simple way to falsify not ID itself, but its application to proteins. And that would certainly be a huge success for them, and a severe argument against ID theory in general, or at least against biological ID. They only need to demonstrate that the proteins that appear to be complex, to have high FI, say more than 500 bits, can as a rule be deconstructed into simpler, naturally selectable steps, well in the range of the probabilistic resources of the biological system where they arise. It's not really necessary that they do that for all proteins. If they can demonstrate that such a thing can be done for a relevant number of complex proteins, that would be enough.
Maybe they could start with one! :) IOWs and very simply: the only case of true FI beyond a relevant threshold, in all the examples we have discussed about safes, is the big safe. That is the only case of an object with 100 bits of FI (which, of course, could be 500, or whatever we like). All the rest is only wishful thinking from our interlocutors, more or less in good faith, but completely wrong. And the simple fact remains that an object with more than 500 bits of FI will never arise in a non biological, non designed system. Tornadoes and starry skies are no counter-examples. And, of course, if we are ready to reason correctly and without dogmas, that means that such an object will never arise in a non designed biological system, too. Once we have clarified that NS being able to do that is only a myth.
gpuccio
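The bookkeeping gpuccio describes for a deconstructable function can be sketched as follows (the step values are hypothetical, purely for illustration of the rule stated in the comment):

```python
def fi_after_deconstruction(step_fis_bits: list[float]) -> float:
    """If a function really can be split into naturally selectable
    steps, the effective FI is roughly the FI of the most complex
    single step: each easier step is assumed to be fixed by
    selection before the next one is attempted."""
    return max(step_fis_bits)

# One undeconstructable 500-bit jump:
assert fi_after_deconstruction([500]) == 500

# A claimed deconstruction into four selectable steps (hypothetical
# numbers): the effective FI collapses to the hardest step.
assert fi_after_deconstruction([120, 80, 150, 95]) == 150
```

This is why demonstrating real selectable intermediates would change the computation: the burden in the comment is to show that such step decompositions actually exist for high-FI proteins.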
September 9, 2019, 09:44 AM PDT
It doesn't matter how much FI is in a protein because it remains that "they" don't have a mechanism capable of producing proteins. All proteins exist in existing life. Nature cannot produce them. Also, ask them how they determined beta-lactamase evolved via blind and mindless processes.
ET
September 9, 2019, 07:05 AM PDT
Jawa
I doubt it. Good luck.
We found common ground in the last debate. They are not ready to throw out evolutionary theory, so they need to keep the plausibility of the current mechanisms alive. What Gpuccio is correctly pointing out is that this type of argument leads to inconsistencies. Steve Schaffer and Joe Felsenstein are arguing for lots of FI being generated by evolution in small increments. Art Hunt and Rumraket have seen the light and realize that 500 bits of FI is very unlikely achievable by evolutionary mechanisms, and so are arguing for low FI in proteins. The problem is that when you put all the arguments together and take their low-ball number, we still end up with 14,000 bits in the spliceosome, as it is made of 200 proteins that work together. As such, the first transition between prokaryotic and eukaryotic evolution by natural selection and neutral mutations fails. Gpuccio's argument created this divide, which they must reconcile. This is perhaps the path forward to common ground.
bill cole
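Bill Cole's spliceosome figure is simple arithmetic (the uniform 70-bits-per-protein value is implied by the comment's numbers, 14,000 bits over 200 proteins, and is an illustrative simplification, not a measured per-protein estimate):

```python
proteins = 200          # proteins in the spliceosome, per the comment
bits_per_protein = 70   # assumed low-ball FI per protein (implied: 14000 / 200)

# Even the conservative per-protein estimate sums to the total
# quoted in the comment.
total_fi = proteins * bits_per_protein
print(total_fi)  # 14000
```

The point of the calculation is that FI claimed to be low for individual proteins still accumulates across a multi-protein complex that must work together.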
September 9, 2019, 06:20 AM PDT