
Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation.


I have recently commented, on another thread, about a paper that (very correctly) describes cells as dynamic, far from equilibrium systems, rather than as “traditional” machines.

That is true. But, of course, the cell implements the same functions as complex machines do, and much more. My simple point is that, to do that, you need much greater functional complexity than you need to realize a conventional machine.

IOWs, dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more so, must certainly be incredibly complex and amazing systems, systems that defy everything else we know and can conceive. They must not only implement their functional purposes, but they must do so by “harnessing” the constantly changing waves of change, of random noise, of improbability. I have commented on those ideas in the mentioned thread, at posts #5 and #8, and I have quoted at posts #11 and #12 a couple of interesting and pertinent papers, introducing the important concept of robustness: the ability to achieve reliable functional results in spite of random noise and disturbing variation.

In this OP, I would like to present in some detail a very interesting system that shows very well what we can understand, at present, of that kind of amazing system.

The system I will discuss here is an old friend: it is the NF-kB system of transcription factors (nuclear factor kappa-light-chain-enhancer of activated B cells). We are speaking, therefore, of transcription regulation, a very complex topic that I have already discussed in some depth here:

I will briefly recall here that transcription regulation is the very complex process that allows cells to be completely different while using the same genomic information: IOWs, each type of cell “reads” the genes in the common genome differently, and that is what allows both differentiation into different cell types and different responses within the same cell type.

Transcription regulation relies on many different levels of control, which are summarized in the OP quoted above, but a key role is certainly played by Transcription Factors (TFs), proteins that bind DNA and act as activators or inhibitors of transcription at specific sites.

TFs are a fascinating class of proteins. There are a lot of them (1600–2000 in humans, almost 10% of all proteins), and they are usually medium-sized proteins, about 500 AAs long, containing at least one highly conserved domain, the DNA binding domain (DBD), and other, often less understood, functional components.

I quote again here a recent review about human TFs:

The Human Transcription Factors

The NF-kB system is a system of TFs. I have discussed it in some detail in the discussion following the Ubiquitin thread, but I will describe it in a more systematic way here.

In general, I will refer a lot to this very recent paper about it:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The NF-kB system relies essentially on 5 different TFs (see Fig. 1 A in the paper):

  1. RelA (551 AAs)
  2. RelB (579 AAs)
  3. c-Rel (619 AAs)
  4. p105/p50 (968 AAs)
  5. p100/p52 (900 AAs)

Those 5 TFs work by forming dimers, homodimers or heterodimers, for a total of 15 possible combinations, all of which have been found to work in the cell, even if some of them are much more common than others.
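The count of 15 follows from choosing unordered pairs, with repetition, from the 5 subunits. A minimal sketch (just combinatorics on the subunit names, nothing more):

```python
from itertools import combinations_with_replacement

# The five NF-kB subunits (using the processed forms p50 and p52)
subunits = ["RelA", "RelB", "c-Rel", "p50", "p52"]

# Unordered pairs with repetition: 5 homodimers + 10 heterodimers = 15 dimers
dimers = list(combinations_with_replacement(subunits, 2))
print(len(dimers))           # 15
print(dimers[0], dimers[1])  # ('RelA', 'RelA') ('RelA', 'RelB')
```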

Then there are at least 4 inhibitor proteins, collectively called IkBs.

The mechanism is apparently simple enough. The dimers are inhibited by IkBs and therefore they remain in the cytoplasm in inactive form.

When an appropriate signal arrives at the cell and is received by a membrane receptor, the inhibitor (the IkB molecule) is phosphorylated, then ubiquitinated and detached from the complex. The phosphorylation is done by a protein complex called IKK. The free dimer can then migrate to the nucleus and localize there, where it acts as a TF, binding DNA.

This is the canonical activation pathway, summarized in Fig. 1. There is also a non canonical activation pathway, that we will not discuss for the moment.


Mechanism of NF-κB action. In this figure, the NF-κB heterodimer consisting of Rel and p50 proteins is used as an example. While in an inactivated state, NF-κB is located in the cytosol complexed with the inhibitory protein IκBα. Through the intermediacy of integral membrane receptors, a variety of extracellular signals can activate the enzyme IκB kinase (IKK). IKK, in turn, phosphorylates the IκBα protein, which results in ubiquitination, dissociation of IκBα from NF-κB, and eventual degradation of IκBα by the proteasome. The activated NF-κB is then translocated into the nucleus where it binds to specific sequences of DNA called response elements (RE). The DNA/NF-κB complex then recruits other proteins such as coactivators and RNA polymerase, which transcribe downstream DNA into mRNA. In turn, mRNA is translated into protein, resulting in a change of cell function.

Attribution: Boghog2 at English Wikipedia [Public domain]

Now, the purpose of this OP is to show, in greater detail, how this mechanism, apparently moderately simple, is indeed extremely complex and dynamic. Let’s see.

The stimuli.

First of all, we must understand what stimuli, arriving at the cell membrane, are capable of activating the NF-kB system. IOWs, what are the signals that work as inputs.

The main concept is: the NF-kB system is a central pathway activated by many stimuli:

  1. Inflammation
  2. Stress
  3. Free radicals
  4. Infections
  5. Radiation
  6. Immune stimulation

IOWs, a wide variety of aggressive stimuli can activate the system.

The extracellular signal usually arrives at the cell through specific cytokines, for example TNF or IL1, or through pathogen-associated molecules, like bacterial lipopolysaccharides (LPS). Of course there are different and specific membrane receptors, in particular IL-1R (for IL1), TNF-R (for TNF), and many TLRs (Toll-like receptors, for pathogen-associated structures). A special kind of activation is implemented, in B and T lymphocytes, by the immune activation of the specific receptors for antigen epitopes (the B cell receptor, BCR, and the T cell receptor, TCR).

The process through which the activated receptor can activate the NF-kB dimer is rather complex: in the canonical pathway, it involves a macromolecular complex called the IKK (IkB kinase) complex, comprising two catalytic kinase subunits (IKKa and IKKb) and a regulatory protein (IKKg/NEMO), and it involves the ubiquitin system in multiple and complex ways. The non canonical pathway is a variation of that. Finally, a specific protein complex (CBM complex or CBM signalosome) mediates the transmission from the immune BCR or TCR to the canonical pathway. See Fig. 2:

From: NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Increased-activity-of-the-CARMA1-BCL10-MALT1-signalosome-drives-constitutive-NF-kB_fig2_324089636 [accessed 10 Jul, 2019]
Figure 3 – NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy
available via license: Creative Commons Attribution 4.0 International

I will not go into further details about this part, but those interested can have a look at this very good paper:

TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

In particular, Figs. 1, 2, and 3.

In the end, as a result of the activation process, the IkB inhibitor is degraded by the ubiquitin system, and the NF-kB dimer is free to migrate to the nucleus.

An important concept is that this is a “rapid-acting” response system, because the dimers are already present, in inactive form, in the cytoplasm, and must not be synthesized de novo: so the system is ready to respond to the activating signal.

The response.

But what is the cellular response?

Again, there are multiple and complex possible responses.

Essentially, this system is a major regulator of innate and adaptive immune responses. As such, it has a central role in the regulation of inflammation, in immunity, in autoimmune processes, and in cancer.

Moreover, the NF-kB system is rather ubiquitous, and is present and active in many different cell types. And, as we have seen, it can be activated by different stimuli, in different ways.

So, the important point is that the response to activation must be (at least):

  1. Lineage-specific
  2. Stimulus-specific

IOWs, different cells must be able to respond differently, and each cell type must respond differently to different stimuli. That gives a wide range of possible gene expression patterns at the transcription level.

The following paper is a good review of the topic:

Selectivity of the NF-κB Response

For example, IL2 is induced by NF-kB activation in T cells, but not in B cells (lineage-specific response). Moreover, specific cell types can undergo specific, and often different, cell destinies after NF-kB activation: for example, NF-kB is strongly involved in the control and regulation of T and B cell development.

From:

30 years of NF-κB: a blossoming of relevance to human pathobiology

“B and T lymphocytes induce NF-κB in adaptive immune responses through the CARD11:Bcl10:MALT1 (CBM) complex (Hayden and Ghosh, 2008). Newly expressed genes promote lymphocyte proliferation and specific immune functions including antibody production by B cells and the generation of cytokines and other anti-pathogen responses by T cells.”

And, in the same cell type, certain promoters regulated by NF-kB require additional signaling (for example, in human dendritic cells the promoters for Il6, Il12b, and MCP-1 require additional p38-dependent histone phosphorylation to be activated), while others can be activated directly (stimulus-specific response).

So, to sum up:

  1. A variety of stimuli can activate the system in different ways
  2. The system itself has its complexity (different dimers)
  3. The response can be widely different, according to the cell type where it happens, and to the type of stimuli that have activated the system, and probably according to other complex variables.
  4. The possible responses include a wide range of regulations of inflammation, of the immune system, of cell specifications or modifications, and so on.

How does it work?

So, what do we know about the working of such a system?

I will ignore, for the moment, the many complexities of the activation pathways, both canonical and non canonical, the role of cytokines and receptors and IKK complexes, and the many facets of NEMO and of the involvement of the ubiquitin system.

For simplicity, we will start with the activated system: the IkB inhibitor has been released from the inactive complex in the cytoplasm, and some form of NF-kB dimer is ready to migrate to the nucleus.

Let’s remember that the purpose of this OP is to show that the system works as a dynamic, far from equilibrium system, rather than as a “traditional” machine. And that such a way to work is an even more amazing example of design and functional complexity.

To do that, I will rely mainly on the recent paper quoted at the beginning:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The paper is essentially about the NF-kB Target Selection Puzzle. IOWs, it tries to analyze what we know about the specificity of the response. How are specific patterns of transcription achieved after the activation of the system? What mechanisms allow the selection of the right genes to be transcribed (the targets) to implement the specific patterns according to cell type, context, and type of stimuli?

A “traditional” view of the system as a machine would try to establish rather fixed connections. For example, some type of dimer is connected to specific stimuli and evokes specific gene patterns. Or some other components modulate the effect of NF-kB, generating diversification and specificity of the response.

Well, those ideas are not completely wrong. In a sense, the system does work also that way. Dimer specificity has a role. Other components have a role. In a sense, but only in a sense, the system works as though it were a traditional machine, and uses some of the mechanisms that we find in the concept of a traditional biological machine.

But that is only a tiny part of the real thing.

The real thing is that the system really works as a dynamic, far from equilibrium system, harnessing huge random/stochastic components to achieve robustness and complexity and flexibility of behavior in spite of all those non finalistic parts.

Let’s see how that happens, at least as far as our limited understanding allows. It is important to consider that this is a system that has been studied a lot, for decades, because of its central role in so many physiological and pathological contexts, and so we know many things. But still, our understanding is very limited, as you will see.

So, let’s go back to the paper. I will try to summarize as simply as possible the main concepts. Anyone who is really interested can refer to the paper itself.

Essentially, the paper analyzes three important and different aspects that contribute to the selection of targets at the genomic level by our TFs (IOWs, our NF-kB dimers, ready to migrate to the nucleus). As the title itself summarizes, they are:

  1. Abundance
  2. Affinity
  3. Binding site availability

1. Abundance

Abundance here refers to two different variables: the abundance of NF-kB binding sites in the genome and the abundance of nucleus-localized NF-kB dimers. Let’s consider them separately.

1a) Abundance of NF-kB Binding Sites in the genome:

It is well known that TFs bind specific sites in the genome. For NF-kB TFs, the following consensus kB site pattern has been found:

 5′-GGGRNWYYCC-3′

where R, W, Y, and N denote purine (A or G), adenine or thymine (A or T), pyrimidine (C or T), and any nucleotide, respectively.

That simply means that any sequence corresponding to that pattern in the genome can, in principle, bind NF-kB dimers.
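As an illustration, here is a minimal sketch of how one might scan a sequence for this consensus on one strand, using the IUPAC codes above (the example sequence is invented, not real genomic data):

```python
import re

# IUPAC codes in the kB consensus: R = A/G, N = any, W = A/T, Y = C/T
KB_CONSENSUS = re.compile(r"GGG[AG][ACGT][AT][CT][CT]CC")

def find_kb_sites(sequence):
    """Return start positions of (non-overlapping) consensus kB sites on one strand."""
    return [m.start() for m in KB_CONSENSUS.finditer(sequence.upper())]

seq = "TTGGGAATTCCCATGGGGACTTCCCGA"  # toy sequence with two embedded sites
print(find_kb_sites(seq))  # [2, 15]
```

A real scan would of course also consider the reverse strand and, as discussed below, incomplete consensus sites.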

So the problem is: how many such sequences exist in the human genome?

Well, a study based on RelA has estimated about 10^4 consensus sequences in the whole genome; but, as NF-kB dimers seem to bind even incomplete consensus sites, the total number of potential binding sites could be nearer to 10^6.

1b) Abundance of Nucleus-Localized NF-kB Dimers:

An estimate of the abundance of dimers in the nucleus after activation of the system is that about 1.5 × 10^5 molecules can be found, but again that is derived from studies about RelA only. Moreover, the number of molecules and the type of dimer can probably vary considerably according to cell type.

So the crucial variable, the ratio between binding sites and available dimers, which could help us understand the degree of site saturation in the nucleus, remains rather undetermined, and it seems very likely that it varies a lot in different circumstances.
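Putting the figures quoted above side by side shows just how undetermined that ratio is (a back-of-envelope sketch using only the estimates mentioned in this section):

```python
dimers = 1.5e5                    # nucleus-localized dimers (RelA-based estimate)
sites_low, sites_high = 1e4, 1e6  # strict consensus vs. including incomplete sites

print(dimers / sites_low)   # 15.0 -> enough dimers to saturate every site
print(dimers / sites_high)  # 0.15 -> most potential sites necessarily unoccupied
```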

But there is another very interesting aspect of the concentration of dimers in the nucleus. According to some studies, NF-kB seems to generate oscillations of its nuclear content in some cell types, and those oscillations can be a way to generate specific transcription patterns:

NF-kB oscillations translate into functionally related patterns of gene expression

For example, this very recent paper:

NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

shows in Fig. 3 the occupancy curve of binding sites at the nuclear level after NF-kB activation in two different cell types.

In fibroblasts, the curve is a periodic oscillation, with a frequency that varies according to various factors, and translates into different transcription scenarios accordingly:

Gene expression dynamics scale with the period (g1) and amplitude (g2) of these oscillations, which are influenced by variables such as signal strength, duration, and receptor identity.


In macrophages, instead, the curve is rather:

a single, strong nuclear translocation event which persists for as long as the stimulus remains and tends to remain above baseline for an extended period of time.

In this case, the type of transcription will probably be regulated by the area under the curve, rather than by the period and amplitude of the oscillations, as happens in fibroblasts.
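To make the two regimes concrete, here is a toy sketch of the two qualitative time courses described above. The functional forms and all parameters are illustrative assumptions, not fitted to any data:

```python
import numpy as np

t = np.linspace(0, 360, 3601)  # minutes after stimulation

# Fibroblast-like response: damped oscillation; the signal is carried
# by the period and amplitude of the nuclear NF-kB level.
period, amplitude = 100.0, 1.0  # assumed values
fibro = amplitude * np.exp(-t / 300.0) * np.clip(np.sin(2 * np.pi * t / period), 0, None)

# Macrophage-like response: a single sustained translocation; the signal
# is carried mainly by the area under the curve (AUC).
macro = (1 - np.exp(-t / 20.0)) * np.exp(-t / 500.0)

print(np.trapz(fibro, t))  # AUC of the oscillatory response
print(np.trapz(macro, t))  # AUC of the sustained response
```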

Interestingly, while in previous studies it seemed that the concentration of nuclear dimers could be sufficient to saturate most or all binding sites, that has been found not to be the case in more recent studies. Again from the paper about abundance:

in fact, this lack of saturation of the system is necessary to generate stimulus- and cell-type specific gene expression profiles

Moreover, the binding itself seems to be rather short-lived:

Interestingly, it is now thought that most functional NF-kB interactions with chromatin—interactions that lead to a change in transcription—are fleeting… a subsequent study using FRAP in live cells expressing RelA-GFP showed that most RelA-DNA interactions are actually quite dynamic, with half-lives of a few seconds… Indeed, a recent study used single-molecule tracking of individual Halo-tagged RelA molecules in live cells to show that the majority (∼96%) of RelA undergoes short-lived interactions lasting on average ∼0.5 s, while just ∼4% of RelA molecules form more stable complexes with a lifetime of ∼4 s.
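A trivial worked number from the figures in that quote, just the weighted mean of the two populations:

```python
# Two RelA-DNA residence-time populations, as reported in the quote
frac_short, t_short = 0.96, 0.5  # ~96% of molecules, ~0.5 s average lifetime
frac_long,  t_long  = 0.04, 4.0  # ~4% of molecules,  ~4 s average lifetime

mean_residence = frac_short * t_short + frac_long * t_long
print(mean_residence)  # ~0.64 s: the average RelA-DNA contact lasts well under a second
```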

2. Affinity

Affinity of dimers for DNA sequences is not a clear cut matter. From the paper:

Biochemical DNA binding studies of a wide variety of 9–12 base-pair sequences have revealed that different NF-kB dimers bind far more sequences than previously thought, with different dimer species exhibiting specific but overlapping affinities for consensus and non-consensus kB site sequences.

IOWs, we have different dimers (15 different types) binding with varying affinity to different DNA sequences (starting from the classical consensus sequence, but also including incomplete sequences). Remember that those sequences are rather short (the consensus sequence is 10 nucleotides long), and that there are thousands of such sequences in the genome.

Moreover, different bindings can affect transcription differently. Again, from the paper:

How might different consensus kB sites modulate the activity of the NF-kB dimers? Structure-function studies have shown that binding to different consensus kB sites can alter the conformation of the bound NF-kB dimers, thus dictating dimer function. When an NF-kB dimer interacts with a DNA sequence, side chains of the amino acids located in the DNA-binding domains of dimers contact the bases exposed in the groove of the DNA. For different consensus kB site sequences different bases are exposed in this groove, and NF-kB seems to alter its conformation to maximize interactions with the DNA and maintain high binding affinity. Changes in conformation may in turn impact NF-kB binding to co-regulators of transcription, whether these are activating or inhibitory, to specify the strength and dynamics of the transcriptional response. These findings again highlight how the huge array of kB binding site sequences must play a key role in modulating the transcription of target genes.

Quite a complex scenario, I would say!

But there is more:

Finally, as an additional layer of dimer and sequence-specific regulation, each of the subunits can be phosphorylated at multiple sites with, depending on the site, effects on nearly every step of NF-kB activation.

IOWs, the 15 dimers we have mentioned can be phosphorylated in many different ways, and that changes their binding affinities and their effects on transcription.

This section of the paper ends with a very interesting statement:

Overall, when considering the various ways in which NF-kB dimer abundances and their affinity for DNA can be modulated, it becomes clear that with these multiple cascading effects, small differences in consensus kB site sequences and small a priori differences in interaction affinities can ultimately have a large impact on the transcriptional response to NF-kB pathway activation.

Emphasis mine.

This is interesting, because in some way it seems to suggest that the whole system acts like a chaotic system, at least at some basic level. IOWs, small initial differences, maybe even random noise, can potentially have deep effects on the general working of the whole system.

Unless, of course, there is some higher, powerful level of control.

3. Availability of high affinity kB binding sequences

We have seen that there is a great abundance and variety of binding sequences for NF-kB dimers in the human genome. But, of course, those sequences are not necessarily available. Different cell types will have different scenarios of binding site availability.

Why?

Because, as we know, the genome and chromatin are a very dynamic system, which can exist in many different states, continuously changing in different cell types and, in the same cell type, in different conditions.

We know rather well the many levels of control that affect DNA and chromatin state. In brief, they are essentially:

  1. DNA methylation
  2. Histone modifications (methylation, acetylation, etc)
  3. Chromatin modifications
  4. Higher levels of organization, including nuclear localization and TADs (Topologically Associating Domains)

For example, from the paper:

The promoter regions of early response genes have abundant histone acetylation or trimethylation prior to stimulation [e.g., H3K27ac, (67) and H4K20me3, (66)], a chromatin state “poised” for immediate activation…  In contrast, promoters of late genes often have hypo-acetylated histones, requiring conformational changes to the chromatin to become accessible. They are therefore unable to recruit NF-kB for up to several hours after stimulation (68), due to the slow process of chromatin remodeling.

We must remember that each wave of NF-kB activation translates into the modified transcription of a lot of different genes at the genome level. It is therefore extremely important to consider which genes are available (IOWs, which promoters can be reached by the NF-kB signal) in each cell type and cell state.

The paper concludes:

Taken together, chromatin state and chromatin organization strongly influence the selection of DNA binding sites by NF-kB dimers and, most likely, the selection of the target genes that are regulated by these protein-DNA interaction events. Analyses that consider binding events in the context of three-dimensional nuclear organization and chromatin composition will be required to generate a more accurate view of the ways in which NF-kB-DNA binding affects gene transcription.

This is the main scenario. But there are other components that I have not considered in detail, for the sake of brevity: for example, competition between NF-kB dimers, and the complex roles and interventions of other co-regulators of transcription.

Does the system work?

But does the system work?

Of course it does. It is a central regulator, as we have said, of many extremely important biological processes, above all immunity. This is the system that decides how immune cells, T and B lymphocytes, have to behave, in terms of cell destiny and cell state. It is of huge relevance in all inflammatory responses, and in our defense against infections. It works, it works very well.

And what happens if it does not work properly?

Of course, as in all very complex systems, errors can happen. Those interested can have a look at this recent paper:

30 years of NF-κB: a blossoming of relevance to human pathobiology

First of all, many serious genetic diseases have been linked to mutations in genes involved in the system. You can find a list in Table 1 of the above paper. Among them, for example, are some forms of SCID (severe combined immunodeficiency), one of the most severe genetic diseases of the immune system.

But, of course, a dysfunction of the NF-kB system has a very important role also in autoimmune diseases and in cancer.

Conclusions.

So, let’s try to sum up what we have seen here in the light of the original statement about biological systems that “are not machines”.

The NF-kB system is a perfect example. Even if we still understand very little of how it works, it is rather obvious that it is not a traditional machine.

A traditional machine would work differently. The signal would be transmitted from the membrane to the nucleus in the simplest possible way, without ambiguities and diversions. The Transcription Factor, once activated, would bind, at the level of the genome, very specific sites, each of them corresponding to a definite cascade of specific genes. The result would be clear cut, almost mechanical. Like a watch.

But that’s not the way things happen. There are myriads of variations, of ambiguities, of stochastic components.

The signal arrives at the membrane in multiple ways, very different from one another: IL1, IL17, TNF, bacterial LPS, and immune activation of the B cell receptor (BCR) or the T cell receptor (TCR) are all possible signals.

The signal is translated to the NF-kB proteins in very different ways: canonical or non canonical activation, involving complex protein structures such as:

The CBM signalosome, intermediate between immune activation of the BCR or TCR and canonical activation of NF-kB. This complex is made of at least three proteins: CARD11, Bcl10, and MALT1.

The IKK complex in canonical activation: this is made of three proteins, IKK alpha, IKK beta, and NEMO. Its purpose is to phosphorylate IkB, the inhibitor of the dimers, so that it can be ubiquitinated and released from the dimer. Then the dimer can relocate to the nucleus.

Non canonical pathway: it involves the following phosphorylation cascade: NIK -> IKK alpha dimer -> RelB-p100 dimer -> RelB-p52 dimer (the final TF). It operates during the development of lymphoid organs and is responsible for the generation of B and T lymphocytes.

Different kinds of activated dimers relocate to the nucleus.

Different dimers, in varying abundance, interact with many different binding sites: complete or incomplete consensus sites, and probably others. The interaction is usually brief, and it can generate an oscillating pattern, or a more stable pattern.

Completely different sets of genes are transcribed in different cell types and in different contexts, because of the interaction of NF-kB TFs with their promoters.

Many other factors and systems contribute to the final result.

The chromatin state of the cell at the moment of the NF-kB activation is essential to determine the accessibility of different binding sites, and therefore the final transcription pattern.

All these events and interactions are quick, unstable, far from equilibrium. A lot of possible random noise is involved.

In spite of the amazing complexity and potentially stochastic nature of the system, reliable transcription regulation and results are obtained in most cases. Those results are essential to immune cell differentiation, the immune response, both innate and adaptive, inflammation, apoptosis, and many other crucial cellular processes.

So, let’s go back to our initial question.

Is this the working of a machine?

Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.

But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.

It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.

It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

It is, from all points of view, amazing.

Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.

And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer must it be for a system like NF-kB? Or, for that matter, for any complex cellular system?

Do you still have any doubts?

Added graphic: The evolutionary history, in terms of human-conserved information, of the three proteins in the CBM signalosome.
On the y axis, homologies with the human protein as bits per amino acid (bpa). On the x axis, approximate time of appearance in millions of years.
The graphic shows the big information jump in vertebrates for all three proteins, especially CARD11.


Added graphic: two very different proteins and their functional history


Added graphic (for Bill Cole). Functional history of Prp8, collagen, p53.
Comments
To all here: Wrong examples of FI:

First of all, I want to clarify why the attacks made on the concept of FI in the last part of the discussion there were completely wrong. It is strange to see that the need to discredit, or simply deny, FI and its importance is, in the end, the main argument used against ID theory. This is surprising in a way, because some certainly understand what FI is and why it is so important, even in the field of our "interlocutors". After all, my concept, and even my definition, of FI is not different from Szostak's. And Arthur Hunt has expressed some interest in the concept, if I understand his comments well.

However, most of them (I have some problem in not using the easy term "neo-darwinists", after the fierce declaration by Swamidass that no such thing exists any more. I don't know well how to name them as a whole: design deniers? :) ), most of "them", I was saying, seem to have as their highest priority to discredit FI in all its forms. This strategy usually takes one of many different ways:

a) To deny that FI exists
b) To deny that it can be measured in any real system
c) To affirm that it exists, but there is not a lot of it in proteins, or in biological objects
d) To affirm that it exists, and there is a lot of it in all biological objects, even those that are relatively simple
e) To affirm that it exists, and there is a lot of it in non designed, non biological objects

All of those ideas are wrong. Of course, for different reasons. But it is interesting to observe how the need to deny something can take so many different, and frankly opposing, pathways.

However, here I want to deal briefly with option e), and in particular with the examples provided by Swamidass in response to my challenge to give even one counter-example of FI higher than 500 bits in non designed, non biological objects. This is an important point, because the simple fact that no example exists of high FI in non designed and non biological objects, in all the known universe, is absolutely true, and is one of the important empirical foundations of ID theory.

But of course, I had underestimated the zeal of my interlocutors in stating the most irrational things to make their wrong points. I want to clarify here that I am not saying that about the tornado example, which is wrong but has some reason to be discussed in this context. I will discuss that special case in a later comment. I am referring here to the examples provided by Swamidass, which are beyond any possible excuse. Now, I am not really convinced that he really believes the things he has stated about this point. Knowing that he is an intelligent and competent person, it is difficult for me to believe that. Maybe it was only strategy. Not good, anyway. However, I have to accept his position as it was expressed. And his position is, as said, beyond excuse.

He has given 4 examples. I have considered only the first, because the other ones are probably the same thing. The starry sky. According to Swamidass, the starry sky is a case of extremely high FI. Why? you may ask. And you are perfectly justified in asking why, because there is apparently no reason to think that. The answer: because it implements many functions, such as navigation, orientation, and myth telling. Yes, exactly that. Now, I have answered that in some detail with the arguments that you can find here at #470 and, more in detail, at #492.

A correct definition of function for the starry sky shows that it has functions, but that they have extremely low FI, because practically any random configuration, except for those highly ordered, can implement them efficiently (navigation etc.).

Not happy with that, Swamidass specifies that his function is not about any configuration that can implement navigation or myths (the only correct way to define that function independently), but that he wants exactly the configuration we observe, the myths we have, and so on. Of course, just after I had cautioned him that, as any reasoning person would understand without any need of being cautioned, using the observed bits to build an ad hoc function is the worst logical fallacy one can imagine. But that is exactly what he has done. Asking "I want exactly this configuration" is meaningless, from the point of view of FI. That is drawing the target after having made the shot, and drawing it where no target at all existed before (does that remind you of something? :) ). With this logic, each deck of cards would be an example of high FI. Ah, but I suppose that is exactly what our interlocutors want to believe. FI everywhere, which is the same, after all, as having FI nowhere.

So, I answered that with a rather hot comment, which you can find at #507, including a confutation of Swamidass' false accusation that I had broken my rules by using the bits to define functions. Something that I have never done. I am not aware of any direct answer to that, or to the final direct questions I offered there.

So, this is the first serious error: FI does not abound in that type of system, which could include winds, clouds, the form of continents, and whatever. It is, indeed, almost absent there. Of course we use the form of continents to navigate. But we would do the same if the form were different. Practically any form of the continents can and must be used to navigate efficiently (except maybe those incompatible with navigation). So, FI almost zero, in all those cases. That's it.gpuccio
September 2, 2019, 05:26 AM PST
GP:
I am also preparing some discussion about the immune system, mainly because the subject deserves an update, and also to answer some very wrong statements made at PS about that. I am not sure, but maybe that will go into a new OP.
Excellent! Thanks! I look forward to reading more OPs from you. Also, Bill Cole (who did a nice job coordinating your teaching mission at PS) just mentioned a possible joint work with professor Behe? Did I get that right? Perhaps a new book is in the oven? :)OLV
September 2, 2019, 05:02 AM PST
OLV: "which to me personally seemed frustrating." It was, from many points of view. But some things were good. I have no regrets. :)gpuccio
September 2, 2019, 04:48 AM PST
GP, Glad to know you rested. Also glad to have you back after your stressful teaching mission at PS, which to me personally sometimes seemed frustrating because the audience did not look very interested in learning. However, your clear message could make some of your interlocutors there think more seriously about what you told them. Perhaps some of them could even be persuaded to take ID more seriously in the near future? :) It’s encouraging to know that you’re planning a new OP. ThanksOLV
September 2, 2019, 04:42 AM PST
To all here: Hi guys, thank you for your interventions here. I have taken a little rest. Deserved, I dare to say! :)

OK, discussing with the people at PS is rather frustrating, especially when the discussion, after an initial appearance of scientific detachment, took the definite tone of desperate denial. My opinion, of course.

So, I think that I will do the following: I will sum up here some of the major differences between my thoughts (and probably those of most people here) and those of most people at PS. As I see them, of course. I am firmly convinced that they are really wrong on those points, and I will explain again briefly the reasons for that. That will include a brief discussion about tornadoes, which is probably the single interesting example made by them (indeed, by Arthur Hunt, who has been, to be frank, probably the best interlocutor there. Always in my opinion).

I am also preparing some discussion about the immune system, mainly because the subject deserves an update, and also to answer some very wrong statements made at PS about that. I am not sure, but maybe that will go into a new OP.

This work will be done here, in relative peace. Then I will see what is the best way to share it with those at PS, too. But, for the moment, I am writing for you, my friends! :)gpuccio
September 2, 2019, 04:25 AM PST
Bill Cole, Thanks for answering my questions.PeterA
September 1, 2019, 04:50 PM PST
Peter
Did professor JS respond to GP’s comment posted here @554? BTW, you did a nice job promoting and encouraging this interesting debate between GP and the folks at PS. Thanks. PS. It’s not your fault that the PS folks approached it in such a fuzzy manner. You did your part well.
- A few back and forth by Steve and Joshua this AM.
- I think Gpuccio's work has some real potential. Mike Behe read the initial exchange and agrees. I think the merger of Gpuccio's ideas and Mike's has interesting possibilities. In the discussion above Gpuccio claims that wait time is about the same as FI, and I agree for an irreducibly complex system.
- Thanks for your kind words. I think the evolutionary position and population genetics are in trouble, as the problem Gpuccio is surfacing is real: he is proposing a real measurement and a test, and the idea of adding bits of different functions to calculate FI violates Hazen's and Szostak's definition of functional information.bill cole
September 1, 2019, 09:19 AM PST
Hi Peter
I would like to commend you for broaching this subject, as it stands in contrast to the approaches used by the ID vanguard. I have long been of the opinion that the relevant metric that ID proponents should be measuring is something akin to informational work, which may be like what you describe here. broaching this subject? What subject? the approaches used by the ID vanguard? Huh? the relevant metric that ID proponents should be measuring? Huh?
I will answer the best I can. 1. The subject is Intelligent Design theory. 2. Not trying to disqualify evolution by a single-protein analysis, as some did with Axe's work: https://pandasthumb.org/archives/2007/01/92-second-st-fa.html 3. Measuring information. He is interested in exploring Gpuccio's method as a possibility.bill cole
September 1, 2019, 09:09 AM PST
Out of one side of his mouth Joshua insists he isn't a Neo-Darwinist. And yet his words and actions say that he is. It is a safe bet he doesn't know what the term means, as Nathan Lents, Lenski, Dawkins, Coyne, et al. are all Neo-Darwinists. It's sad watching him live a lie...ET
September 1, 2019, 08:27 AM PST
GP @554:
why don’t you try to make some analysis of that type, and let’s see the results? I am ready to consider them.
Excellent suggestion. Let’s see how he responds to this. PS. What is EVD?PeterA
September 1, 2019, 06:22 AM PST
Bill Cole, Did professor JS respond to GP’s comment posted here @554? BTW, you did a nice job promoting and encouraging this interesting debate between GP and the folks at PS. Thanks. PS. It’s not your fault that the PS folks approached it in such a fuzzy manner. You did your part well.PeterA
September 1, 2019, 06:19 AM PST
Bill Cole, I didn’t understand what you wrote @552. Perhaps my question wasn’t clear enough. Let me try it differently: Did you understand what Art Hunt meant by the below quoted comment (specially the highlighted text)?
I would like to commend you for broaching this subject, as it stands in contrast to the approaches used by the ID vanguard. I have long been of the opinion that the relevant metric that ID proponents should be measuring is something akin to informational work, which may be like what you describe here.
broaching this subject? What subject? the approaches used by the ID vanguard? Huh? the relevant metric that ID proponents should be measuring? Huh? Emphasis added. Thanks.PeterA
September 1, 2019, 04:51 AM PST
Us lurkers are very much thankful to you, gpuccio. This has been most interesting and clarifying to my own understanding. And thanks to the others involved too. Keep up the great work.mike1962
August 31, 2019, 07:28 PM PST
I was just saying what they already have been saying.ET
August 31, 2019, 05:13 PM PST
ET: There is no Sharp Shooter Fallacy. The functions exist independently and objectively, we are not inventing them. Period.gpuccio
August 31, 2019, 05:08 PM PST
OK, even if you could get a new gene by chance (that is, a gene with a start codon, a stop codon, a nucleotide sequence between them, and a binding site), if it doesn't code for the right sequence of amino acids it won't fold. And even if it does fold, that isn't any guarantee it will be functional. That "sharp shooter fallacy" may not be anything of the kind. The sad part is they think that because there wasn't a target, it somehow, magically, makes it even odds of happening. "Oh, there wasn't any target, you IDiot. It all just happened. And we know that because there it is. The odds of you being you are ginormous and yet here you are- your ancestors could have never figured the odds of you being here and here you are. So your probability arguments are ignorant." That is the PS POV summary of eventsET
August 31, 2019, 04:11 PM PST
Swamidass at PS:
In a non-decomposable system (1 safe, with a 100-bit combination), the wait time is 2^100.
OK, I will try to simplify this point. FI, if correctly understood and applied, is related to the wait time, more or less as 2^FI. The point is, FI is the number of bits necessary to implement one well defined function. Without those bits, the function simply does not exist. That means that the function is treated as non decomposable. Therefore, the wait time is approximately 2^FI. Therefore, FI, used correctly, expresses the probability of finding the function in a purely random system, if no necessity intervention, like NS, intervenes. That is the purpose of FI. That is the reason it is useful.

Now, if the function can be demonstrated to be decomposable, FI must be analyzed taking the decomposition into account. Which, in a biological context, means the effects of NS. It is not true that decomposition of a function has nothing to do with selection. In the case of the small safes, the wait time is very short because the simpler functions are recognized as such (the safe opens, and the thief gets the money). In a biological system, that means that the simpler function must work so that it can be recognized and in some way selected. Otherwise, those simpler functions would not change at all the probability of the final result, or the wait time. If the thief had to try all possible combinations of 0 and 1 for the 100 safes, and become aware that something has happened only when all 100 safes are open, then the problem would be exactly the same as with the big safe. So, intermediate function is always a form of selection, and as such it should be treated. So, any intermediate function that has any influence on the wait time also has the effect of lowering the FI, if correctly taken into consideration.

Moreover, a function must be a function: some definite task that we can accomplish with the object. The simple existence of 10, or 100, simpler functions is not a new function. Not from the point of view of FI as it must be correctly conceived and applied. The correct application of FI is the computation of the bits necessary to implement a function, a function that does not exist without all those bits, and which is not the simple co-existence of simpler functions. IOWs, there must be no evidence that the function can be decomposed into simpler functions. That said, 10 objects having 50 bits of FI do not mean 500 bits of FI. And the wait time for a complex function, if FI is correctly applied, is more or less 2^FI.

If you want to conceive and apply FI differently, and apply it to co-existing and unrelated simpler functions, or to functions that can be proved to be decomposable, you are free to do as you like. But your application of the concept, of course, will not work, and it will be impossible to use it for a design inference. Which is, probably, your purpose. But not mine. So, if you insist that FI is everywhere in tons, in the starry sky, in the clouds, maybe even in the grains of sand of a beach, you are free to think that way. Of course, that FI is useless. But it is your FI, not mine. And if you insist that the 100 safes and the big safe have the same FI, and that therefore FI is not a measure of the probability and of the wait time, you are free to think that way. Of course, that type of FI will be completely useless. But again, it is your FI, not mine. I believe that FI, correctly understood and used, is a precious tool. That's why I try to use it well.

Regarding the EVD, I am not convinced. However, if you think that such an analysis is better than the one performed with the binomial distribution, which seems to me the natural model for binary outcomes of success and failure, why don't you try to make some analysis of that type, and let's see the results? I am ready to consider them.

The objection of parallelism I understand in some measure. But you must remember that I have computed the available attempts of the biological system as the total number of different genomes that can be reached in the whole life of our planet. And it is about 140 bits, after a very generous gross estimate of the higher threshold. So, the simple fact here is: we are dealing (always for a pure random system) with at most, at the very most, 140 bits of possible attempts everywhere, in problems that have, in most cases, values of FI much higher than 500 bits, for proteins for which no decomposition has ever been shown. Why should parallelism be a problem? Considering all the possible parallel attempts in all existing organisms of all time, we are still at about 140 bits.

OK, I am tired now. Again, excuse me, I will probably have to slow down my interventions. I will do what I can. I would like to deal, if possible, with the immune system model, because it is very interesting. Indeed, I dedicated a whole OP to that some time ago: Antibody Affinity Maturation As An Engineering Process (And Other Things). And I think that this too is pertinent: Natural Selection Vs Artificial Selection. And, of course, tornadoes, tornadoes… :)

Ah, and excuse me if I have called you, and your friends, neo-darwinists. I tend to use the expression in a very large sense. I apologize if you don't recognize yourself in those words. From now on, at least here, I will use the clearer term: "believer in a non designed origin of all biological objects". Which, while a little bit long, should designate more unequivocally the persons I have come here to confront myself with. Including, I suppose, you.gpuccio
August 31, 2019, 03:40 PM PST
GP @550: “Oh, it seems that the thread at PS is going to close in one day.” If they don’t do something to improve their Alexa Global Internet Traffic Ranking position, they might have to close the entire website.
UD: 631,311 / 627,114 / 612,722 / 602,627 / 602,965 (UP 191 K; 578 Total Sites Linking In)
PT: 1,732,931 / 1,736,969 / 1,743,372 / 1,592,453 / 1,628,896 (DN 150 K; 950 Total Sites Linking In)
TSZ: 3,215,461 / 3,222,145 / 3,226,071 / 3,228,639 / 3,323,453 (DN 830 K; 37 Total Sites Linking In)
PS: 7,036,059 / 7,051,188 / 7,059,655 / 7,064,442 / 7,067,236 (DN 3.7 M; 12 Total Sites Linking In)jawa
August 31, 2019, 01:58 PM PST
Hi Peter
Do you understand the text quoted @524?
He is raising an issue with Gpuccio's method of calculating FI. There was nothing new gained regarding the known strengths and weaknesses of the method. At the end of the day, since the observed sequences are so long, there is almost no window for RMNS to work. Gpuccio's results cast great doubt on whether that window exists at all.bill cole
August 31, 2019, 01:48 PM PST
GPuccio @550:
Oh, it seems that the thread at PS is going to close in one day.
Time for a break? :)
Maybe they are sure they have reached something final.
Good for them. :)
OK, as I still have many things to say, in case I will continue here.
Welcome back! Thanks! Good for us here!PeterA
August 31, 2019, 01:40 PM PST
To all here: Oh, it seems that the thread at PS is going to close in one day. Maybe they are sure they have reached something final. OK, as I still have many things to say, in that case I will continue here. :)gpuccio
August 31, 2019, 12:06 PM PST
To all here: This is my comment at PS about the question of probabilities. I still have to discuss the specific case of antibody maturation in the immune system. gpuccio (quote): "Let’s state things clearly: 10 objects with 50 bits of FI each are not, in any way, 500 bits of FI." Swamidass:
This is a new one. Probabilities are multiplicative, so information is additive. Information is the log of a probability. So yes, 10 objects with 50 bits of FI each are exactly 500 bits of FI.
OK, let’s clarify this. 10 objects with 50 bits of FI each are not, in any way, 500 bits of FI. 10 objects with 50 bits of FI each are 500 bits of FI only if those 10 exact objects are needed to give some new defined function. Let’s see the difference.

Let’s say that there is a number of possible functions in a genome that have, each of them, 50 bits of FI. Let’s call the acquisition of the necessary information to get one of those functions “a success”. These functions are the small safes in my example. The probability of getting a success in one attempt is, of course, 1:2^50.

How many attempts are necessary to get at least one success? This can be computed using the binomial distribution. The result is that with 2^49 attempts we have a more than decent probability (0.3934693) of getting at least one success.

How many attempts are necessary to have a decent probability of getting at least 10 successes, each of them with that probability of success, each of them with 50 bits of FI? Again, we use the binomial distribution. The result is that with 2^53 attempts (about 16 times, IOWs 4 bits more than, the number of attempts used before) we get more or less the same probability: 0.2833757.

That means that the probability of getting 10 successes is about 4 bits lower than the probability of getting one success. The FI of the combined events is therefore about 54 bits.

Why is that? Why do probabilities not multiply, as you expect? It’s because the 10 events, while having 50 bits of FI each, are not generating a more complex function. They are individual successes, and there is no relationship between them. That’s why the statement “10 objects with 50 bits of FI each are not, in any way, 500 bits of FI” is perfectly correct. Those ten objects have 500 bits of FI only if, together, they, and only they, can generate a new function.

In terms of the safes, solving the 100 keys to the small safes generates 100 objects, each with 1 bit of FI. But finding those 100 objects does not generate in any way 100 bits of FI, because the 100 functional values found by the thief have no relationship at all with the 100-bit sequence that is the solution for the big safe.

I hope that is clear. We can rather easily find a number of functions with lower FI, but their FI cannot be summed, unless those functions are the only components that can generate a new function, a function that needs all of them exactly as they are.

Please give me feedback on this point, before I start examining the example of affinity maturation in the immune system. This is not only for Swamidass, but for all those who have commented on this point.

By the way, I was forgetting: using the binomial distribution, we can easily compute that the number of attempts needed to get at least one success, when the probability of success is 1:2^500 (500 bits of FI), is 2^499, with a global probability of 0.3934693.gpuccio
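These binomial figures are easy to check numerically. A minimal sketch, using the Poisson approximation to the binomial (essentially exact here, since the number of attempts is huge and the per-attempt probability tiny):

```python
import math

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), via the Poisson approximation."""
    lam = n * p
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

p = 2.0**-50  # one "success" = finding one 50-bit function in one attempt

print(p_at_least(1, 2.0**49, p))   # ~0.3935, the 0.3934693 quoted above
print(p_at_least(10, 2.0**53, p))  # ~0.2834, the 0.2833757 quoted above
```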
August 31, 2019, 12:04 PM PST
Case 2. Two independent events, each with probability p, and each event is independently useful, so it can be retained by negative selection when found.
This is too vague to tell what the FI of this event is, IMO.bill cole
August 31, 2019, 10:32 AM PST
All... here is a post by Dr Swamidass for comments.

S. Joshua Swamidass: @glipsnort, that is not a well defined example.

Case 1. Two independent events, each with probability p, and success requires both at the same time, and there is no benefit to one alone.
Case 2. Two independent events, each with probability p, and each event is independently useful, so it can be retained by negative selection when found.
Case 3. One event with probability p^2.

All else being equal, perhaps with some caveats to be clarified: The FI is the same for all three cases (success at all events). Single trial success is identical in all cases: p^2, with FI 2 log p. Evolutionary wait time in Cases 1 and 3 is the same: p^2, with FI 2 log p. Evolutionary wait time in Case 2 is much less: p * 2, with FI 2 log p.

Case 1 is equivalent to the strictest (and known to be false) version of irreducible complexity (IC1). Even Behe acknowledges that this is not how biology works. For very good reason, modern evolutionary theory works like Case 2, which has far lower wait times than Cases 1 and 3. FI does not correlate with wait time! The decomposability of the system breaks this relationship. This result does not depend on fitness landscapes at all, just random sampling (tornado in a junkyard) plus NEGATIVE selection, not Darwinistic positive selection.bill cole
August 31, 2019, 09:12 AM PST
Joshua wrote:
We have read Behe’s three books @gpuccio. We have assessed them carefully. Have you read our response to Darwin Devolves?
I have, and all three of you struck out. Swamidass et al. just blindly accept any narrative against Dr Behe, even when the narrative is devoid of science. Dr. Behe didn't take their review seriously. Only the willfully ignorant did.ET
August 30, 2019, 07:27 PM PST
Bill Cole, Do you understand the text quoted @524?PeterA
August 30, 2019, 06:38 PM PST
This discussion is starting to remind me of the "Methinks it is like a weasel" problem. It's one thing to get that sentence. But in reality that sentence only has a function in one and only one literary work. It could never work in any of Mark Twain's books. It could never work in a Hemingway novel. Could you imagine Mark Antony saying "Friends, Romans, countrymen. Methinks it is like a weasel."? HT "Disinherit the Wind"- I highly recommend anyone and everyone read that playET
August 30, 2019, 05:08 PM PST
To break this down against Hazen and Szostak's definition, given Steve's example we need to know:
- What is the defined function? An effective antibody.
- What is the functional information contained, in bits? 60 bits, or a 1e-18 chance that a random sequence will solve the problem.
As I see it, all Steve is generating with new sequences are additional tries at hitting the target. The functional information pertaining to this antibody remains fixed at 60 bits, IMO. Thoughts? If Steve had to generate two different antibodies with FI = 60 bits that bound to two different lethal hosts, and without success the animal would die, then I think his math works.bill cole
August 30, 2019, 04:58 PM PST
Steve's latest post: Gpuccio: "So, you see, 100 objects with one bit of FI each do not make 100 bits of FI. One object with 100 bits of FI is the real thing. The rest is simply an error of reasoning." Steve:
You are quite correct. My mistake was in treating the 60 bits as representing the probability of finding a particular antibody per infection rather than per B cell. In the former case, my calculation would be correct. (In the 100 safes scenario, the correct analogy would be the probability of unlocking all 100 safes by flipping a coin once as the thief encounters each safe. That probability is indeed the same as that for guessing the 100-bit combination by flipping 100 coins.) But since the 60 bits is per B cell, the probability per infection is much higher. So let’s ballpark some numbers for the real case. We’re assuming the probability of hitting on the correct antibody is ~1e-18, which is 60 bits worth. How many tries do the B cells get at mutating to hit the right antibody? Good question. There seem to be about 1e11 naive B cells in an adult human. Only a fraction of these are going to proliferate in most infections. Let’s say 10% of naive B cells each proliferate 100-fold. That gives 1e12 tries at a 1e-18 target, for a probability of randomly hitting the target of 1 in a million per infection. That corresponds to ~20 bits. So each week in this scenario only contributes 20 bits of probability, not 60, and the time to reach 500 bits is 25 weeks, not 8. (Note: this 500 bits represents the same probability as hitting a 500 bit target in a single try.) If my guess of the proliferation is off by an order of magnitude, knock off a few more bits. It still takes less than a year to get to 500 bits, and a lot less than 1e38.
bill cole
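The arithmetic in Steve's ballpark is easy to reproduce (a sketch; all input numbers are the assumptions stated in the quote):

```python
import math

naive_b_cells = 1e11       # assumed naive B cells in an adult human
fraction_responding = 0.1  # assumed fraction that proliferates per infection
expansion = 100            # assumed fold-expansion of each responding cell

tries = naive_b_cells * fraction_responding * expansion  # 1e12 mutated cells
p_per_cell = 1e-18         # probability one B cell hits the antibody (60 bits)

p_per_infection = tries * p_per_cell       # ~1e-6
print(-math.log2(p_per_infection))         # ~19.9, i.e. the "~20 bits"
print(500 / -math.log2(p_per_infection))   # ~25 infections to reach 500 bits
```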
August 30, 2019, 04:44 PM PST
glipsnort at PS:
Yes, your math is off by 37 orders of magnitude.
Wow, you guys in the anti-ID field seem to be really fond of this error. Let’s state things clearly: 10 objects with 50 bits of FI each are not, in any way, 500 bits of FI. Which is what Rumracket (and maybe you) seems to believe when he says: If natural selection can add 60 bits of FI in a few weeks, why can’t it add 500 bits of FI over the course of (say) 20 million years?

To make things more clear, I will briefly propose again here my example of the thief and the safes, which I used some time ago to make the same point with Joe Felsenstein. It goes this way. A thief enters a building, where he finds the following objects: a) One set of 100 small safes. b) One big safe. The 100 small safes contain, each, 1/100 of the sum in the big safe. Each small safe is protected by one electronic key of one bit: it opens either with 0 or with 1. The big safe is protected by a 100-bit long electronic key. The thief does not know the keys, any of them. He can do two different things: a) Try to open the 100 small safes. b) Try to open the big safe. What would you do, if you were him?

Rumracket, maybe, would say that there is no difference: the total sum is the same, and according to his reasoning (or your reasoning, maybe) we have 100 bits of FI in both cases. My compliments to your reasoning! If the thief reasoned that way, he could choose to go for the big safe, and maybe spend his whole life without succeeding. He has to find one functional combination out of 2^100 (about 10^30). Not a good perspective. On the other hand, if he goes for the small safes, he can open one in, what? one minute? Probably less. Even allowing one more minute to take the cash, he would probably be out and rich after a few hours of honest work! :)

So, you see, 100 objects with one bit of FI each do not make 100 bits of FI. One object with 100 bits of FI is the real thing. The rest is simply an error of reasoning.gpuccio
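A toy comparison of the thief's two strategies, using the counts in the comment (a sketch; random guessing in both cases):

```python
import random

def open_small_safes(n=100):
    """Each safe has a 1-bit key, so at most 2 guesses per safe."""
    tries = 0
    for _ in range(n):
        key = random.randint(0, 1)
        tries += 1      # first guess: 0
        if key != 0:
            tries += 1  # second guess: 1, necessarily right
    return tries

print(open_small_safes())  # ~150 tries on average for all 100 small safes
print(2**99)               # average-order number of guesses for the one 100-bit safe
```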
August 30, 2019, 11:17 AM PST