Uncommon Descent Serving The Intelligent Design Community

Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation.

Categories: Intelligent Design

I have recently commented, on another thread, about a paper that (very correctly) describes cells as dynamic, far from equilibrium systems, rather than as “traditional” machines.

That is true. But, of course, the cell implements the same functions as complex machines do, and much more. My simple point is that, to do that, you need much greater functional complexity than you need to realize a conventional machine.

IOWs, dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. I have commented on those ideas in the mentioned thread, at posts #5 and #8, and I have quoted at posts #11 and #12 a couple of interesting and pertinent papers, introducing the important concept of robustness: the ability to achieve reliable functional results in spite of random noise and disturbing variation.

In this OP, I would like to present in some detail a very interesting system that shows very well what we can understand, at present, of this kind of amazing system.

The system I will discuss here is an old friend: it is the NF-kB system of transcription factors (nuclear factor kappa-light-chain-enhancer of activated B cells). We are speaking, therefore, of transcription regulation, a very complex topic that I have already discussed in some depth here:

I will briefly recall here that transcription regulation is the very complex process that allows cells to be completely different while using the same genomic information: IOWs, each type of cell “reads” the genes in the common genome differently, and that allows for the different types of cell differentiation and the different cell responses in the same cell type.

Transcription regulation relies on many different levels of control, which are summarized in the OP quoted above, but a key role is certainly played by Transcription Factors (TFs), proteins that bind DNA and act as activators or inhibitors of transcription at specific sites.

TFs are a fascinating class of proteins. There are a lot of them (1600–2000 in humans, almost 10% of all proteins), and they are usually medium-sized proteins, about 500 AAs long, containing at least one highly conserved domain, the DNA-binding domain (DBD), and other, often less understood, functional components.
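As a quick arithmetic check of that “almost 10%” figure, one can divide the TF count by the usual estimate of about 20,000 human protein-coding genes (that denominator is my assumption, not a number from the text):

```python
# Rough sanity check on the proportion quoted above. The figure of
# ~20,000 human protein-coding genes is a common estimate, assumed here.
protein_coding_genes = 20_000
tf_low, tf_high = 1_600, 2_000

share_low = tf_low / protein_coding_genes
share_high = tf_high / protein_coding_genes
print(f"TFs are ~{share_low:.0%}-{share_high:.0%} of protein-coding genes")
```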

I quote again here a recent review about human TFs:

The Human Transcription Factors

The NF-kB system is a system of TFs. I have discussed it in some detail in the discussion following the Ubiquitin thread, but I will describe it in a more systematic way here.

In general, I will refer a lot to this very recent paper about it:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The NF-kB system relies essentially on 5 different TFs (see Fig. 1 A in the paper):

  1. RelA (551 AAs)
  2. RelB (579 AAs)
  3. c-Rel (619 AAs)
  4. p105/p50 (968 AAs)
  5. p100/p52 (900 AAs)

Those 5 TFs work by forming dimers, homodimers or heterodimers, for a total of 15 possible combinations, all of which have been found to work in the cell, even if some of them are much more common.
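The count of 15 follows from simple combinatorics: unordered pairs of 5 subunits, with repetition allowed for the homodimers. A minimal sketch:

```python
from itertools import combinations_with_replacement

# The five DNA-binding subunits listed above (p50 and p52 are the
# processed forms of p105 and p100).
subunits = ["RelA", "RelB", "c-Rel", "p50", "p52"]

# Unordered pairs with repetition: 5 homodimers + C(5,2) = 10 heterodimers.
dimers = list(combinations_with_replacement(subunits, 2))

print(len(dimers))  # 15
for a, b in dimers:
    print(f"{a}:{b}", "(homodimer)" if a == b else "(heterodimer)")
```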

Then there are at least 4 inhibitor proteins, collectively called IkBs.

The mechanism is apparently simple enough. The dimers are inhibited by IkBs and therefore they remain in the cytoplasm in inactive form.

When an appropriate signal arrives at the cell and is received by a membrane receptor, the inhibitor (the IkB molecule) is phosphorylated and then ubiquitinated and detached from the complex. The phosphorylation is done by a protein complex called IKK. The free dimer can then migrate to the nucleus and localize there, where it can act as a TF, binding DNA.

This is the canonical activation pathway, summarized in Fig. 1. There is also a non canonical activation pathway, that we will not discuss for the moment.


Mechanism of NF-κB action. In this figure, the NF-κB heterodimer consisting of Rel and p50 proteins is used as an example. While in an inactivated state, NF-κB is located in the cytosol complexed with the inhibitory protein IκBα. Through the intermediacy of integral membrane receptors, a variety of extracellular signals can activate the enzyme IκB kinase (IKK). IKK, in turn, phosphorylates the IκBα protein, which results in ubiquitination, dissociation of IκBα from NF-κB, and eventual degradation of IκBα by the proteasome. The activated NF-κB is then translocated into the nucleus where it binds to specific sequences of DNA called response elements (RE). The DNA/NF-κB complex then recruits other proteins such as coactivators and RNA polymerase, which transcribe downstream DNA into mRNA. In turn, mRNA is translated into protein, resulting in a change of cell function.

Attribution: Boghog2 at English Wikipedia [Public domain]

Now, the purpose of this OP is to show, in greater detail, how this mechanism, apparently moderately simple, is indeed extremely complex and dynamic. Let’s see.

The stimuli.

First of all, we must understand what stimuli, arriving at the cell membrane, are capable of activating the NF-kB system. IOWs, what the signals are that work as inputs.

The main concept is: the NF-kB system is a central pathway activated by many stimuli:

  1. Inflammation
  2. Stress
  3. Free radicals
  4. Infections
  5. Radiation
  6. Immune stimulation

IOWs, a wide variety of aggressive stimuli can activate the system.

The extracellular signal usually reaches the cell through specific cytokines, for example TNF or IL1, or through pathogen-associated molecules, like bacterial lipopolysaccharides (LPS). Of course there are different and specific membrane receptors, in particular IL-1R (for IL1), TNF-R (for TNF), and many TLRs (Toll-like receptors, for pathogen-associated structures). A special kind of activation is implemented, in B and T lymphocytes, by the immune activation of the specific receptors for antigen epitopes (the B cell receptor, BCR, and the T cell receptor, TCR).

The process through which the activated receptor can activate the NF-kB dimer is rather complex: it involves, in the canonical pathway, a macromolecular complex called IKK (IkB kinase) complex, comprising two catalytic kinase subunits (IKKa and IKKb) and a regulatory protein (IKKg/NEMO), and involving in multiple and complex ways the ubiquitin system. The non canonical pathway is a variation of that. Finally, a specific protein complex (CBM complex or CBM signalosome) mediates the transmission from the immune BCR or TCR to the canonical pathway. See Fig. 2:

From: NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Increased-activity-of-the-CARMA1-BCL10-MALT1-signalosome-drives-constitutive-NF-kB_fig2_324089636 [accessed 10 Jul, 2019]
Figure 3 – NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy
available via license: Creative Commons Attribution 4.0 International

I will not go into further details about this part, but those interested can have a look at this very good paper:

TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

In particular, Figs. 1, 2, and 3.

In the end, as a result of the activation process, the IkB inhibitor is degraded by the ubiquitin system, and the NF-kB dimer is free to migrate to the nucleus.

An important concept is that this is a “rapid-acting” response system: the dimers are already present, in inactive form, in the cytoplasm, and need not be synthesized de novo, so the system is ready to respond to the activating signal.

The response.

But what is the cellular response?

Again, there are multiple and complex possible responses.

Essentially, this system is a major regulator of innate and adaptive immune responses. As such, it has a central role in the regulation of inflammation, in immunity, in autoimmune processes, and in cancer.

Moreover, the NF-kB system is rather ubiquitous, and is present and active in many different cell types. And, as we have seen, it can be activated by different stimuli, in different ways.

So, the important point is that the response to activation must be (at least):

  1. Lineage-specific
  2. Stimulus-specific

IOWs, different cells must be able to respond differently, and each cell type must respond differently to different stimuli. That gives a wide range of possible gene expression patterns at the transcription level.

The following paper is a good review of the topic:

Selectivity of the NF-κB Response

For example, IL2 is induced by NF-kB activation in T cells, but not in B cells (lineage-specific response). Moreover, specific cell types can undergo specific, and often different, cell destinies after NF-kB activation: for example, NF-kB is strongly involved in the control and regulation of T and B cell development.

From:

30 years of NF-κB: a blossoming of relevance to human pathobiology

“B and T lymphocytes induce NF-κB in adaptive immune responses through the CARD11:Bcl10:MALT1 (CBM) complex (Hayden and Ghosh, 2008). Newly expressed genes promote lymphocyte proliferation and specific immune functions including antibody production by B cells and the generation of cytokines and other anti-pathogen responses by T cells.”

And, in the same cell type, certain promoters regulated by NF-kB require additional signaling (for example, in human dendritic cells the promoters for Il6, Il12b, and MCP-1 require additional p38-dependent histone phosphorylation to be activated), while others can be activated directly (stimulus-specific response).

So, to sum up:

  1. A variety of stimuli can activate the system in different ways
  2. The system itself has its complexity (different dimers)
  3. The response can be widely different, according to the cell type where it happens, and to the type of stimuli that have activated the system, and probably according to other complex variables.
  4. The possible responses include a wide range of regulations of inflammation, of the immune system, of cell specifications or modifications, and so on.

How does it work?

So, what do we know about the working of such a system?

I will ignore, for the moment, the many complexities of the activation pathways, both canonical and non canonical, the role of cytokines and receptors and IKK complexes, the many facets of NEMO and of the involvement of the ubiquitin system.

For simplicity, we will start with the activated system: the IkB inhibitor has been released from the inactive complex in the cytoplasm, and some form of NF-kB dimer is ready to migrate to the nucleus.

Let’s remember that the purpose of this OP is to show that the system works as a dynamic, far from equilibrium system, rather than as a “traditional” machine. And that such a way to work is an even more amazing example of design and functional complexity.

To do that, I will rely mainly on the recent paper quoted at the beginning:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The paper is essentially about the NF-kB Target Selection Puzzle. IOWs, it tries to analyze what we know about the specificity of the response. How are specific patterns of transcription achieved after the activation of the system? What mechanisms allow the selection of the right genes to be transcribed (the targets) to implement the specific patterns according to cell type, context, and type of stimuli?

A “traditional” view of the system as a machine would try to establish rather fixed connections. For example, some type of dimer would be connected to specific stimuli and evoke specific gene patterns. Or some other components would modulate the effect of NF-kB, generating diversification and specificity of the response.

Well, those ideas are not completely wrong. In a sense, the system does work also that way. Dimer specificity has a role. Other components have a role. In a sense, but only in a sense, the system works as though it were a traditional machine, and uses some of the mechanisms that we find in the concept of a traditional biological machine.

But that is only a tiny part of the real thing.

The real thing is that the system really works as a dynamic, far from equilibrium system, harnessing huge random/stochastic components to achieve robustness and complexity and flexibility of behavior in spite of all those non finalistic parts.

Let’s see how that happens, at least for the limited understanding we have of it. It is important to consider that this is a system that has been studied a lot, for decades, because of its central role in so many physiological and pathological contexts, and so we know many things. But still, our understanding is very limited, as you will see.

So, let’s go back to the paper. I will try to summarize as simply as possible the main concepts. Anyone who is really interested can refer to the paper itself.

Essentially, the paper analyzes three important and different aspects that contribute to the selection of targets at the genomic level by our TFs (IOWs, our NF-kB dimers, ready to migrate to the nucleus). As the title itself summarizes, they are:

  1. Abundance
  2. Affinity
  3. Binding site availability

1. Abundance

Abundance here refers to two different variables: abundance of NF-kB binding sites in the genome and abundance of nucleus-localized NF-kB dimers. Let’s consider them separately.

1a) Abundance of NF-kB Binding Sites in the genome:

It is well known that TFs bind specific sites in the genome. For NF-kB TFs, the following consensus kB site pattern has been found:

 5′-GGGRNWYYCC-3′

where R, W, Y, and N, respectively denote purine, adenine or thymine, pyrimidine, and any nucleotide.

That simply means that any sequence corresponding to that pattern in the genome can, in principle, bind NF-kB dimers.

So the problem is: how many such sequences do exist in the human genome?

Well, a study based on RelA has estimated about 10^4 consensus sequences in the whole genome, but as NF-kB dimers seem to bind even incomplete consensus sites, the total number of potential binding sites could be nearer to 10^6.
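For illustration, the consensus pattern can be translated into a regular expression and used to scan one strand of a sequence. The toy random genome below only shows the expected background frequency of such a short, degenerate motif; it is not meant to reproduce the paper's genome-wide estimate:

```python
import random
import re

# IUPAC codes from the consensus 5'-GGGRNWYYCC-3' quoted above:
# R = purine (A/G), N = any base, W = A/T, Y = pyrimidine (C/T).
# The lookahead lets us count overlapping start positions.
CONSENSUS = re.compile(r"(?=GGG[AG][ACGT][AT][CT][CT]CC)")

def count_kb_sites(seq: str) -> int:
    """Count all positions where a consensus kB site starts (one strand)."""
    return sum(1 for _ in CONSENSUS.finditer(seq))

# The classic kB site of the immunoglobulin kappa enhancer fits the pattern:
print(count_kb_sites("GGGACTTTCC"))  # 1

# On uniform random DNA the expected hit rate is (1/4)^5 * (1/2)^4,
# i.e. about 1 in 16,384 positions per strand.
random.seed(0)
toy_genome = "".join(random.choice("ACGT") for _ in range(1_000_000))
print(count_kb_sites(toy_genome))  # roughly 1e6 / 16384, about 61 on average
```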

1b) Abundance of Nucleus-Localized NF-kB Dimers:

An estimate of the abundance of dimers in the nucleus after activation of the system is about 1.5 × 10^5 molecules, but again that is derived from studies of RelA only. Moreover, the number of molecules and the type of dimer can probably vary a lot according to cell type.

So the crucial variable, the ratio between binding sites and available dimers, which could help us understand the rate of site saturation in the nucleus, remains rather undetermined, and it seems very likely that it varies a lot in different circumstances.
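A back-of-envelope division of the two figures quoted above shows why the saturation question stays open:

```python
# Figures quoted above, both derived from RelA-based studies.
dimers_in_nucleus = 1.5e5             # nucleus-localized molecules after activation
sites_strict, sites_loose = 1e4, 1e6  # strict consensus vs. including partial sites

ratio_strict = dimers_in_nucleus / sites_strict
ratio_loose = dimers_in_nucleus / sites_loose
print(ratio_strict)  # 15 dimers per site: saturation looks plausible
print(ratio_loose)   # 0.15 dimers per site: far from saturation
```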

But there is another very interesting aspect about the concentration of dimers in the nucleus. According to some studies, NF-kB seems to generate oscillations of its nuclear content in some cell types, and those oscillations can be a way to generate specific transcription patterns:

NF-kB oscillations translate into functionally related patterns of gene expression

For example, this very recent paper :

NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

shows at Fig. 3 the occupancy curve of binding sites at nuclear level after NF-kB activation in two different cell types.

In fibroblasts, the curve is a periodic oscillation, with a frequency that varies according to various factors, and translates into different transcription scenarios accordingly:

Gene expression dynamics scale with the period (g1) and amplitude (g2) of these oscillations, which are influenced by variables such as signal strength, duration, and receptor identity.


In macrophages, instead, the curve is rather:

a single, strong nuclear translocation event which persists for as long as the stimulus remains and tends to remain above baseline for an extended period of time.

In this case, the type of transcription will probably be regulated by the area under the curve, rather than by the period and amplitude of the oscillations, as happens in fibroblasts.
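The two regimes can be caricatured with toy time courses (shapes, scales, and parameters are all assumptions for illustration, not fits to data):

```python
import math

dt = 0.1
t = [i * dt for i in range(1000)]  # 100 arbitrary time units

# Fibroblast-like: damped oscillation of nuclear NF-kB (only the nuclear
# excess above baseline is kept, hence the max with 0).
fibroblast = [math.exp(-x / 50) * max(0.0, math.sin(2 * math.pi * x / 10)) for x in t]

# Macrophage-like: a single translocation that persists while the stimulus lasts.
macrophage = [1.0 if x > 5 else 0.0 for x in t]

def auc(y):
    """Area under the curve (rectangle rule) -- the proposed macrophage readout."""
    return sum(y) * dt

print(f"fibroblast AUC = {auc(fibroblast):.1f}")
print(f"macrophage AUC = {auc(macrophage):.1f}")
```

In the fibroblast-like case the relevant readouts would instead be the period and amplitude of the oscillation, which the review says scale with signal strength, duration, and receptor identity.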

Interestingly, while in previous studies it seemed that the concentration of nuclear dimers could be sufficient to saturate most or all binding sites, that has been found not to be the case in more recent studies. Again from the paper about abundance:

in fact, this lack of saturation of the system is necessary to generate stimulus- and cell-type specific gene expression profiles

Moreover, the binding itself seems to be rather short-lived:

Interestingly, it is now thought that most functional NF-kB interactions with chromatin—interactions that lead to a change in transcription—are fleeting… a subsequent study using FRAP in live cells expressing RelA-GFP showed that most RelA-DNA interactions are actually quite dynamic, with half-lives of a few seconds… Indeed, a recent study used single-molecule tracking of individual Halo-tagged RelA molecules in live cells to show that the majority (∼96%) of RelA undergoes short-lived interactions lasting on average ∼0.5 s, while just ∼4% of RelA molecules form more stable complexes with a lifetime of ∼4 s.
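The quoted single-molecule numbers describe a two-population mixture of residence times; here is a minimal simulation, where the exponentially distributed lifetimes are my modeling assumption:

```python
import random

random.seed(42)

# Two-population sketch of the single-molecule numbers quoted above:
# ~96% of RelA-DNA interactions last ~0.5 s on average, ~4% last ~4 s.
def residence_time():
    if random.random() < 0.96:
        return random.expovariate(1 / 0.5)  # short-lived interaction
    return random.expovariate(1 / 4.0)      # stable complex

samples = [residence_time() for _ in range(100_000)]
mean = sum(samples) / len(samples)

# Analytical mean of the mixture: 0.96 * 0.5 + 0.04 * 4.0 = 0.64 s
print(f"mean residence time ~ {mean:.2f} s")
```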

2. Affinity

Affinity of dimers for DNA sequences is not a clear cut matter. From the paper:

Biochemical DNA binding studies of a wide variety of 9–12 base-pair sequences have revealed that different NF-kB dimers bind far more sequences than previously thought, with different dimer species exhibiting specific but overlapping affinities for consensus and non-consensus kB site sequences.

IOWs, we have different dimers (15 different types) binding with varying affinity different DNA sequences (starting from the classical consensus sequence, but including also incomplete sequences). Remember that those sequences are rather short (the consensus sequence is 10 nucleotides long), and that there are thousands of such sequences in the genome.

Moreover, different bindings can affect transcription differently. Again, from the paper:

How might different consensus kB sites modulate the activity of the NF-kB dimers? Structure-function studies have shown that binding to different consensus kB sites can alter the conformation of the bound NF-kB dimers, thus dictating dimer function. When an NF-kB dimer interacts with a DNA sequence, side chains of the amino acids located in the DNA-binding domains of dimers contact the bases exposed in the groove of the DNA. For different consensus kB site sequences different bases are exposed in this groove, and NF-kB seems to alter its conformation to maximize interactions with the DNA and maintain high binding affinity. Changes in conformation may in turn impact NF-kB binding to co-regulators of transcription, whether these are activating or inhibitory, to specify the strength and dynamics of the transcriptional response. These findings again highlight how the huge array of kB binding site sequences must play a key role in modulating the transcription of target genes.

Quite a complex scenario, I would say!

But there is more:

Finally, as an additional layer of dimer and sequence-specific regulation, each of the subunits can be phosphorylated at multiple sites with, depending on the site, effects on nearly every step of NF-kB activation.

IOWs, the 15 dimers we have mentioned can be phosphorylated in many different ways, and that changes their binding affinities and their effects on transcription.

This section of the paper ends with a very interesting statement:

Overall, when considering the various ways in which NF-kB dimer abundances and their affinity for DNA can be modulated, it becomes clear that with these multiple cascading effects, small differences in consensus kB site sequences and small a priori differences in interaction affinities can ultimately have a large impact on the transcriptional response to NF-kB pathway activation.

Emphasis mine.

This is interesting, because in some way it seems to suggest that the whole system acts like a chaotic system, at least at some basic level. IOWs, small initial differences, maybe even random noise, can potentially have deep effects on the general working of the whole system.

Unless, of course, there is some higher, powerful level of control.
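One way to see how small affinity differences get amplified: if relative site occupancy follows Boltzmann weighting of the binding free energies (a standard equilibrium simplification, and an assumption here, not a claim from the paper), then a difference of just 1 kcal/mol, roughly one hydrogen bond, already shifts relative occupancy about five-fold:

```python
import math

RT = 0.593  # kcal/mol at ~25 degrees C

def occupancies(binding_dg):
    """Relative equilibrium occupancies from binding free energies (kcal/mol)."""
    weights = [math.exp(-dg / RT) for dg in binding_dg]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical kB sites; the first binds 1 kcal/mol more tightly.
occ = occupancies([-9.0, -8.0, -8.0])
print([round(x, 3) for x in occ])
print(round(occ[0] / occ[1], 2))  # ~5.4-fold preference from 1 kcal/mol
```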

3. Availability of high affinity kB binding sequences

We have seen that there is a great abundance and variety of binding sequences for NF-kB dimers in the human genome. But, of course, those sequences are not necessarily available. Different cell types will have a different scenario of binding sites availability.

Why?

Because, as we know, the genome and chromatin are a very dynamic system that can exist in many different states, continuously changing in different cell types and, in the same cell type, in different conditions.

We know rather well the many levels of control that affect DNA and chromatin state. In brief, they are essentially:

  1. DNA methylation
  2. Histone modifications (methylation, acetylation, etc)
  3. Chromatin modifications
  4. Higher levels of organization, including nuclear localization and TADs (Topologically Associating Domains)

For example, from the paper:

The promoter regions of early response genes have abundant histone acetylation or trimethylation prior to stimulation [e.g., H3K27ac, (67) and H4K20me3, (66)], a chromatin state “poised” for immediate activation…  In contrast, promoters of late genes often have hypo-acetylated histones, requiring conformational changes to the chromatin to become accessible. They are therefore unable to recruit NF-kB for up to several hours after stimulation (68), due to the slow process of chromatin remodeling.

We must remember that each wave of NF-kB activation translates into the modified transcription of a lot of different genes at the genome level. It is therefore extremely important to consider which genes are available (IOWs, which promoters can be reached by the NF-kB signal) in each cell type and cell state.
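The availability idea can be sketched as a simple filter: the same candidate sites, read through different chromatin accessibility maps, yield cell-type-specific target lists. All names and maps below are hypothetical illustrations, not real data:

```python
# Candidate kB sites found in the genome (hypothetical names).
candidate_sites = ["IL6_promoter", "TNF_promoter", "late_gene_X"]

# Per-cell-type open-chromatin maps (hypothetical): a site can recruit
# NF-kB only if its promoter region is accessible in that cell type.
accessible = {
    "macrophage": {"IL6_promoter", "TNF_promoter"},
    "fibroblast": {"TNF_promoter", "late_gene_X"},
}

def available_targets(cell_type):
    return [site for site in candidate_sites if site in accessible[cell_type]]

# Same dimer, same genome, different transcriptional outcomes:
print(available_targets("macrophage"))
print(available_targets("fibroblast"))
```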

The paper concludes:

Taken together, chromatin state and chromatin organization strongly influence the selection of DNA binding sites by NF-kB dimers and, most likely, the selection of the target genes that are regulated by these protein-DNA interaction events. Analyses that consider binding events in the context of three-dimensional nuclear organization and chromatin composition will be required to generate a more accurate view of the ways in which NF-kB-DNA binding affects gene transcription.

This is the main scenario. But there are other components that I have not considered in detail, for the sake of brevity: for example, competition between NF-kB dimers, and the complex role and intervention of other co-regulators of transcription.

Does the system work?

But does the system work?

Of course it does. It is a central regulator, as we have said, of many extremely important biological processes, above all immunity. This is the system that decides how immune cells, T and B lymphocytes, have to behave, in terms of cell destiny and cell state. It is of huge relevance in all inflammatory responses, and in our defense against infections. It works, it works very well.

And what happens if it does not work properly?

Of course, like all very complex systems, errors can happen. Those interested can have a look at this recent paper:

30 years of NF-κB: a blossoming of relevance to human pathobiology

First of all, many serious genetic diseases have been linked to mutations in genes involved in the system. You can find a list in Table 1 of the above paper. Among them, for example, some forms of SCID, Severe combined immunodeficiency, one of the most severe genetic diseases of the immune system.

But, of course, a dysfunction of the NF-kB system has a very important role also in autoimmune diseases and in cancer.

Conclusions.

So, let’s try to sum up what we have seen here in the light of the original statement about biological systems that “are not machines”.

The NF-kB system is a perfect example. Even if we still understand very little of how it works, it is rather obvious that it is not a traditional machine.

A traditional machine would work differently. The signal would be transmitted from the membrane to the nucleus in the simplest possible way, without ambiguities and diversions. The Transcription Factor, once activated, would bind, at the level of the genome, very specific sites, each of them corresponding to a definite cascade of specific genes. The result would be clear cut, almost mechanical. Like a watch.

But that’s not the way things happen. There are myriads of variations, of ambiguities, of stochastic components.

The signal arrives at the membrane in multiple ways, very different from one another: IL1, IL17, TNF, bacterial LPS, and immune activation of the B cell receptor (BCR) or the T cell receptor (TCR) are all possible signals.

The signal is translated to the NF-kB proteins in very different ways: canonical or non canonical activation, involving complex protein structures such as:

The CBM signalosome, intermediate between immune activation of BCR or TCR and canonical activation of the NF-kB. This complex is made of at least three proteins, CARD11, Bcl10 and MALT1.

The IKK complex in canonical activation: this is made of three proteins, IKK alpha, IKK beta, and NEMO. Its purpose is to phosphorylate the IkB, the inhibitor of the dimers, so that it can be ubiquitinated and released from the dimer. Then the dimer can relocate to the nucleus.

Non canonical pathway: it involves the following phosphorylation cascade: NIK -> IKK alpha dimer -> RelB–p100 dimer -> RelB–p52 dimer (the final TF). It operates during the development of lymphoid organs and is responsible for the generation of B and T lymphocytes.

Different kinds of activated dimers relocate to the nucleus.

Different dimers, in varying abundance, interact with many different binding sites: complete or incomplete consensus sites, and probably others. The interaction is usually brief, and it can generate an oscillating pattern, or a more stable one.

Completely different sets of genes are transcribed in different cell types and in different contexts, because of the interaction of NF-kB TFs with their promoters.

Many other factors and systems contribute to the final result.

The chromatin state of the cell at the moment of the NF-kB activation is essential to determine the accessibility of different binding sites, and therefore the final transcription pattern.

All these events and interactions are quick, unstable, far from equilibrium. A lot of possible random noise is involved.

In spite of that amazing complexity and the potentially stochastic nature of the system, reliable transcription regulation and results are obtained in most cases. Those results are essential to immune cell differentiation, the immune response, both innate and adaptive, inflammation, apoptosis, and many other crucial cellular processes.

So, let’s go back to our initial question.

Is this the working of a machine?

Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.

But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.

It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.

It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

It is, from all points of view, amazing.

Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.

And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer must it be for a system like NF-kB? Or, for that matter, for any complex cellular system?

Do you still have any doubts?

Added graphic: the evolutionary history, in terms of human-conserved information, of the three proteins in the CBM signalosome.
On the y axis, homologies with the human protein as bits per aminoacid (bpa). On the x axis, approximate time of appearance in millions of years.
The graphic shows the big information jump in vertebrates for all three proteins, especially CARD11.


Added graphic: two very different proteins and their functional history


Added graphic (for Bill Cole). Functional history of Prp8, collagen, p53.
Comments
These people @ PS will NEVER put up any methodology that shows blind and mindless processes can produce 500 bits of FI. All they will do is continue to baldly claim that it can. Joe Felsenstein has chimed in and it has already been proven that he doesn't understand the argument. And people are taking cues from him. Sad, really...

— ET
August 27, 2019 at 01:36 PM PST
It just gets worse:

Over millions of consecutive generations, this protein has incrementally grown larger by adding hundreds of amino acids.

What? If you add amino acids to an existing protein the chances are you are going to bury the active site. And most likely change the structure. When it gets too long it will no longer fold properly without the aid of chaperones. There isn't any evidence that proteins can grow as Rumraket suggests. https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/110

— ET
August 27, 2019 at 01:33 PM PST
Faizal Ali,
The problem is the inclusion of the latter under the category “semantic” information is the point in question. The large majority of experts who do not accept the creationist argument also do not accept the creationist claim that the physical and chemical interactions of the molecules involved in biological processes are directly analogous to sort of semantic information involved in, say, a written novel or a computer program.
No, sorry, it is not in question. The physics of symbol systems remains the same regardless of the medium, or its origin, and your attempt to dismiss the issue as a “creationist claim” displays an embarrassing lack of knowledge regarding the recorded history of the issue. You are likely unaware of this because you have not educated yourself on the literature. Additionally, empirical science is not established by consensus; it is established by what can be demonstrated and repeated. I would think you might have been aware of this, but I could be mistaken. In any case, if you find someone who has shown that the genetic material is not rate-independent, or that the process is not irreversible, or that no distinctions need be made between laws and initial conditions, or perhaps if you find someone who has solved the measurement problem, or any of the other physical observations recorded in the physics literature regarding symbol systems over the past half century, then be sure to let us know. Until then, I plan to stick with the science. You are free to continue otherwise.

— Upright BiPed
August 27, 2019 at 12:27 PM PST
@UB 467 Letting you know I copied your comment and posted it at PS: https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/97

— equate65
August 27, 2019 at 11:47 AM PST
GP @474: If Dr. JS has problems understanding such a basic but fundamental concept, then your discussion at PS is really going to slow down considerably. That's why it looks like there is not much progress, except the comment you quoted at 466, which I don't understand. Even that quoted comment could be a product of misunderstanding rather than a sign of progress in the discussion. At least that's my perception until you explain what Art Hunt meant by what he wrote in that comment you quoted @466. No need to respond to this now. I can wait until you're done with your discussion at PS. Thanks.PeterA
August 27, 2019 at 10:17 AM PST
GP @466: What did Art Hunt mean by this?
Thanks again, @gpuccio. I would like to commend you for broaching this subject, as it stands in contrast to the approaches used by the ID vanguard. I have long been of the opinion that the relevant metric that ID proponents should be measuring is something akin to informational work, which may be like what you describe here. I suspect that there are serious issues with the approaches one may take to estimate this property, but the concept seems to make sense to me.
Emphasis added. No need to respond to this now. I can wait until you're done with your discussion at PS. Thanks.PeterA
August 27, 2019 at 10:08 AM PST
Swamidass at PS:
A new configuration would not be equally precious for telling the stories we have now. We would have different constellations, and therefore different myths about these constellations. My function is to tell these specific (specified!) stories, not any old stories you might want to come up with in place of them. So no, a new configuration would break the storytelling function. Remember also that some configurations (e.g. a regular grid or a repeating pattern) are useless for navigation or time-telling. Very quickly, we would get over 500 bits with a careful treatment, well into the thousands if not millions of bits.
But you are doing exactly what I cautioned about. You are defining the function as a consequence of an already observed configuration. If the configuration were different, we would be telling different stories. Are you really so confused about the meaning of FI? The function must be defined independently. You can define the function as “telling stories about the stars”. You cannot define the function as “telling stories about the stars in this specific configuration”. How can you not understand that this is conceptually wrong?gpuccio
August 27, 2019 at 10:03 AM PST
GP @ 469:
It is becoming a little difficult to paste here all the relevant posts at PS. I apologize in advance if something becomes lost in translation, and maybe there are some obscure passages in the discussion. I am doing my best!
I can see how difficult that double posting must be. I suggest that those of us who are interested in following your discussion at PS just go there and read it directly, so you can have more time to concentrate on the discussion. Another option could be that someone from here does the copying/pasting of your comments from PS to here.PeterA
August 27, 2019 at 10:02 AM PST
Swamidass at PS:
That last paragraph is key. Your estimate of FI seems to be, actually, FI + NE (neutral evolution), where NE is expected to be a very large number. So the real FI is some number much lower than what you calculated.
I really don’t understand. Can you please explain why neutral evolution would be part of the FI I measure? This is a complete mystery to me. Neutral evolution explains the conservation of sequences? Why? I really don’t understand.gpuccio
August 27, 2019 at 09:56 AM PST
For those readers interested in following GP’s discussion with PS, here are the associated post numbers: 343 Bill Cole, 351 GP to Bill Cole, 354 Bill Cole, 356 GP to Bill Cole and PS, 357 Bill Cole, 360 Bill Cole, 368 GP to PS, 369 GP to Davecarlson, 370 GP to JS, 374 GP to UD, 375 GP to JS, 381 GP to JS, 387 GP to JS, 388 GP to JS, 395 GP to JS, 398 GP to JS, 401 GP to Art Hunt, 402 GP to Rumraket, 406 GP to JS, 408 GP to JS, 411 GP to PS, 416 GP to sfmatheson and JS, 431 GP to JS, 432 GP to JS, 433 GP to JS, 434 GP to glipsnort, 438 GP to Art Hunt, 445 GP to sfmatheson, 446 GP to sfmatheson and JS, 449 GP to all, 451 GP to JS, 461 GP to sfmatheson, 462 GP to glipsnort, 465 GP to JS, 466 GP to Art Hunt, 468 GP to JS, 469 GP to all, 470 GP to JS, 472 GP to JS, 474 GP to JS, to be continued...PeterA
August 27, 2019 at 09:55 AM PST
Swamidass at PS:
gpuccio: What is the object? The starry sky? You mean our galaxy, or at least the part we can observe from our planet? What is the function? Swamidass: I said the information is the positions of visible stars in the sky. The function of this information, for many thousands of years, was navigation (latitude and direction), time-telling (seasons), and storytelling (constellations). Any change that would impact navigation, time-telling, or storytelling, or create a visual difference would impact one or all these things. There are about 9,000 visible stars in the sky (low estimate). Keeping things like visual acuity in mind (Naked eye - Wikipedia), we can compute the information. However, even if there are just two possible locations in the sky for every star (absurd) and only half the stars are important (absurd), we are still at 4,500 bits of information in the position of stars in the sky. That does not even tell us the region of sky we are looking at (determined by season and latitude), but we can neglect this for now.
I will briefly answer this, and then for the moment I must go. What makes the current configuration of the stars specific to help navigation, time-telling or storytelling? If the configuration were a different random configuration, wouldn’t it be equally precious for navigation, time-telling and storytelling? There is no specific functional information in the configuration we observe. Most other configurations generated by cosmic events would satisfy the same functions you have defined.gpuccio
August 27, 2019 at 09:52 AM PST
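For what it's worth, the arithmetic behind the "4,500 bits" figure quoted in the comment above can be reproduced directly; every number here comes from Swamidass's own (deliberately absurd, as he says) simplifying assumptions, not from any real astronomical count.

```python
from math import log2

# Swamidass's deliberately crude assumptions, as quoted above:
visible_stars = 9000      # low estimate of naked-eye visible stars
positions_per_star = 2    # "just two possible locations in the sky" (absurd)
important_fraction = 0.5  # "only half the stars are important" (absurd)

# Each important star contributes log2(positions_per_star) bits.
bits = visible_stars * important_fraction * log2(positions_per_star)
print(bits)  # 4500.0
```

Whether that quantity counts as *functional* information is exactly the point gpuccio disputes in his reply.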
To all here: It is becoming a little difficult to paste here all the relevant posts at PS. I apologize in advance if something becomes lost in translation, and maybe there are some obscure passages in the discussion. I am doing my best! :)gpuccio
August 27, 2019 at 09:43 AM PST
Swamidass at PS:
But evolution is not a random walk!! It is guided by many things, including natural selection. If you neglect selection, you are not even modeling the most fundamental basics. There are other deviations from the random walk model too. Evolution is also not all or none, but demonstrably can accumulate FI gradually in a steady process. I could go on, but you need a better model of evolution.
You are anticipating too much. Have patience. I am only saying that the correct model for the RV part of the neo-darwinian model is a random walk. For the moment, I have not considered NS or other aspects. By the way, the random walk model is also valid for neutral drift because, as said, it is part of the RV aspect.
First, there is a difference between FI estimates by your procedure and the true FI.
As said, my estimate is a good lower threshold. For design inference, that is fine.
For your argument to work, as merely a starting point, you have to demonstrate the FI you compute is a reasonable approximation of the true FI, not confused by neutral evolution and correctly calling negative controls.
I have discussed that. Why do you doubt that it is a reasonable approximation? It is not confused by neutral evolution; why should it be? The measurement itself is based on the existence of neutral evolution. Why should that generate any confusion? I have said that my procedure cannot evaluate functional divergence as separate from neutral divergence. Therefore, what I get is a lower threshold. And so? What is the problem? As a lower threshold I declare it, and as a lower threshold I use it in my reasoning. Where is the problem?
You also have to use a better model of evolution than random trials or walks.
Of course, as said, I am not considering NS. Yet. I will. But I have already pointed to two big OPs of mine, one for RV and one for NS. You can find a lot of material there, if you have the time. However, I will come to that. And to the role of NS in generating FI. Just give me time. But RV is a random system of events. It must be treated and analyzed as such.gpuccio
August 27, 2019 at 09:41 AM PST
Note to the "experts" at PS: You are equivocating on the vast (and very well-documented) difference between what physicists refer to as "physical" or "structural" information, and the semantic information contained in the gene system (the source of specification and control over protein synthesis). Joshua Swamidass, biological information is semantic and rate-independent, requiring a coordinated set of non-integrable constraints, to be actualized in a non-reversible process. The process itself requires complementary descriptions in order to be understood. Structural or physical "information", on the other hand, is purely dynamic and reversible. Clearly, you should know these things, and should not present yourself as an expert on the subject while casually equivocating between these diametrically opposed meanings. A protein is the product of an encoded description; the positions of the stars in the night sky are not. Neither are the locations of islands on the open sea. Neither are weather patterns and tornadoes.Upright BiPed
August 27, 2019 at 09:39 AM PST
Art at PS:
Thanks again, @gpuccio. I would like to commend you for broaching this subject, as it stands in contrast to the approaches used by the ID vanguard. I have long been of the opinion that the relevant metric that ID proponents should be measuring is something akin to informational work, which may be like what you describe here. I suspect that there are serious issues with the approaches one may take to estimate this property, but the concept seems to make sense to me.
Thank you! :) I will come to your tornado as soon as possible. In the meantime, the discussion with Swamidass can maybe help clarify some points. I will come back to the discussion later.gpuccio
August 27, 2019 at 09:38 AM PST
Swamidass at PS:
One example is the configuration of stars in the sky. Far more than 500 bits. Another example is the location of islands in the sea. Another example is the weather patterns across the globe for the last century. And yes, all these objects can be defined by a functional specification. This is all functional information.
I start with you, because at least I do not have to show meteorologic abilities that I do not possess! Art’s tornadoes will be more of a challenge. :) I am not sure if the problem here is a big misunderstanding of what FI is. Maybe, let’s see. According to my definition, FI can be measured for any possible function. Any observer is free to define a function as he likes, but the definition must be explicit and include a level to assess the function as present. Then, FI can be measured for the function, and objects can be categorized as expressing that function or not. An important point is that FI can be generated in non-design systems, but only at very low levels. The 500 bit threshold is indeed very high, and it is appropriate to really exclude any possible false positive in the design inference. I think that I must also mention a couple of criteria that could be important in the following discussion. I understand that I have not clarified them before, but believe me, it’s only because the discussion has been too rushed. Those ideas are an integral part of all ID thinking, and you can find long discussions made by me at UD in the past trying to explain them to other interlocutors. The first idea you should be familiar with, if you have considered Dembski’s explanatory filter: before making a design inference, we should always ascertain that the configurations we observe are not the simple result of known necessity laws. For the moment, I will not go deeper on this point. The second point is about specification, not only functional specification, but any kind of specification. IOWs, any type of rule that generates a binary partition in the search space, defining the target space. The rule is simple enough. If we are dealing with pre-specifications, everything can work. IOWs, let’s take the simple example of a deck of cards.
If I declare in advance a specific sequence of them, and then I shuffle the cards and I get the sequence, something strange is happening. A design inference (some trick) is certainly allowed. But if we are dealing with post-specifications, IOWs we give the rule after the object has come into existence and after we have observed it, then the rule must be independent from the specific configuration of bits observed in the object. Another way to say that is that I cannot use the knowledge of the individual bits observed in the object to build the rule. In that case, I am only using already existing generic information to build a function. So, going back to our deck of cards, observing a sequence that shows the cards in perfect order is always a strange result, but I cannot say: well, my function is that the cards must have the following order, and then just read the order of a sequence that has already been obtained and observed. This seems very trivial, but I want to make it clear because a lot of people are confused about these things. So, I can take a random sequence of 100 bits and then set it as the electronic key to a safe. Of course, there is nothing surprising in that: the random series was a random series, maybe obtained by tossing a fair coin, and it had no special FI. But, when I set it as a key, the functional information in that sequence becomes 100 bits. Of course, it will be almost impossible to get that sequence by a new series of coin tossing. Another way to say these things is that FI is about configurations of configurable switches, each of which can in principle exist in at least two different states, so that the specific configuration is the one that can implement a function. This concept is due to Abel. OK, let’s go back to your examples. Let’s take the first one; the other will probably be solved automatically. The configuration of stars in the sky. OK, it is a complex configuration. As is the configuration of grains of sand on a beach.
So, what is the function? You have to define a function, and a level of it that can define it as present or absent in the object we are observing. What is the object? The starry sky? You mean our galaxy, or at least the part we can observe from our planet? What is the function? You have to specify all these things. Frankly, I cannot see any relevant FI in the configuration of stars. Maybe we can define some function for which a few bits could be computed, but no more than that. So, as it is your example, please clarify better. For me, it is rather obvious that none of your examples shows any big value of FI for any possible function. And that includes Art’s tornado, which of course I will discuss separately with him. Looking forward to your input about that.gpuccio
August 27, 2019 at 09:36 AM PST
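The safe-key example in the comment above can be made concrete. This is only a toy sketch of the definition gpuccio is using (FI as the negative log2 of the ratio of target space to search space); the function name and code are mine, not part of any published procedure.

```python
from math import log2

def functional_information(target_states: int, search_space_states: int) -> float:
    """FI in bits: -log2(target space / search space)."""
    return -log2(target_states / search_space_states)

# Before the random 100-bit string is set as the safe's key, no function is
# defined for it, so no FI is assessed. Once it is the key, exactly 1
# configuration out of 2**100 opens the safe:
fi_key = functional_information(1, 2**100)
print(fi_key)  # 100.0
```

The post-specification caveat in the comment still applies: the 100 bits are only meaningful because the rule ("this string opens the safe") was not read off an already-observed configuration.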
And then we have the totally untestable imagination for "evidence" against gpuccio:
There could have been pre-cursor genes that had functional tertiary structures that were then modified through subsequent mutations. That pre-cursor gene could have been lost to history. If there is billions of years of evolution that led to a gene that was one mutation away from finding a novel function, then you would need to factor in those billions of years of evolution into your calculations.
Just say anything because you don't have to support it.
Nature is full of species that found different strategies for adapting to environmental challenges, and I would suggest this extends to the molecular level.
Because they were intelligently designed with the ability to do so.
For example, intron splicing is just one possible function that could have arisen. A completely different method for dealing with misfolded proteins could have emerged.
Just like that- magic! For example, there isn't any evidence that blind and mindless processes can put together a system of components for intron splicing. And blind and mindless processes obviously cannot tell nor do they care about misfolded proteins. They don't care about proteins. The problem is they expect us to blindly accept that blind and mindless processes did all of that without trying to nor wanting to. That is why they get so frenzied when someone like gpuccio comes up with a methodology that threatens their core beliefs. https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/89
ET
August 27, 2019 at 09:09 AM PST
And moar entertainment from Joshua:
It is guided by many things, including natural selection. If you neglect selection, you are not even modeling the most fundamental basics.
Natural selection is a process of elimination. It is NOT a process of selection. It is blind and mindless. It doesn't guide, it culls. And it is not some magical feedback. It boils down to nothing more than contingent serendipity. If you neglect that, Joshua, you wind up making bald assertions that you will never support. And Joshua is still hung up on his cancer strawman. I doubt anything will ever get him off of that. EricMH just gave up. https://discourse.peacefulscience.org/t/gpuccio-functional-information-methodology/7549/116
ET
August 27, 2019 at 09:01 AM PST
glipsnort at PS:
I’ll offer the same example I gave in the other thread: the human immune system.
Are you saying that the human immune system is a non-biological object? Interesting. You have some serious misconceptions about the immune system, but just now I have other things to do. If you have one single counter-example of a non-biological object that exhibits more than 500 bits of FI and is not a designed human artifact, please present it. That was the request.gpuccio
August 27, 2019 at 07:48 AM PST
sfmatheson at PS:
If I have misunderstood the metric, please let me know
Yes, you have. Definitely.
I know you are also calculating some kind of probability, but that probability is completely meaningless without very important additional information.
What additional information? I can get no sense from your discourse. You are certainly in good faith, and you also admit that you are not an expert, if I understand well. Maybe that’s why your objections are not clear at all. I say that with no bad intentions, but because I really don’t understand what you mean. So please, explain what is the missing information without which probability would be meaningless here. But please, clarify of what probability you are speaking. The bitscore is linked to a probability in the Blast algorithm itself. It is given as an E value, and for values small enough it is practically the same thing as a p value. It expresses the expected number of similar homologies in a similar search of the database if the two sequences were unrelated. For many Blast results showing high similarity, that value is given as 0. The probability which I mention in ID theory is a different thing. It is the probability linked to the FI value, and expresses the probability of finding the target in a random search or walk, in one attempt. I think these concepts are rather precise. What is the additional information you need?
I know I’m not missing anything about the probability calculations, because those can’t be meaningful by themselves. (That’s old, old hat in “design” debates.)
That’s simply a meaningless statement. What do you mean?
The point about neutral evolution is about applying a method to a negative control.
The point about neutral evolution is that it happens. What do you mean?
When you read phylogenetics papers, you should notice this.
I have not made a phylogenetic analysis, nor can I see any reason why I should do that. I have used common phylogenetic knowledge, to the best of my understanding, to make very simple assumptions: that vertebrates derive from some common chordate ancestor, that cartilaginous fishes split from bony fishes rather early in the natural history of vertebrates, that humans derive from bony fishes and not from cartilaginous fishes. Am I wrong? The times I have given in my graphs are only a gross approximation. They are not important in themselves.
Without good answers, you can only say that you used BLAST to “measure” sequence conservation in a few hand-picked evolutionary trajectories. And that, my friend, is not informative. My opinion is that it isn’t even a good start.
You are free to think as you like. But I have not analyzed a few hand-picked trajectories. Now that I have access to my database, I can give you more precise data (but you could find them in my linked OPs). For example, I have evaluated the information jump at the vertebrate transition (IOWs the human-conserved similarity in cartilaginous fishes minus the human-conserved similarity in pre-vertebrates) for all human proteins, using all protein sequences of non-vertebrate deuterostomes and chordates and of cartilaginous fishes in the NCBI database. Here are the results, expressed both as absolute bitscore difference and as bits per aminoacid (baa):
Absolute difference in bitscore: Mean = 189.58 bits; SD = 355.6 bits; Median = 99 bits
Difference in baa: Mean = 0.28796 baa; SD = 0.3153 baa; Median = 0.264275 baa
As you can see from the values of the medians, half of human proteins have an information difference at the vertebrate transition that is lower than 99 bits and 0.26 baa. Is that a good negative control, if compared to the 1250 bits of CARD11? Remember, these are logarithmic values! The 75th centile is 246 bits and 0.47 baa. That means that 25% of human proteins have values higher than that. And I could show you that values are very significantly higher in proteins involved in the immune system and in brain maturation. I don’t know if that means anything for you. However, for me these are very interesting data. About FI in evolutionary history.gpuccio
August 27, 2019 at 07:46 AM PST
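The summary statistics gpuccio reports above (mean, SD, median, 75th centile of the per-protein bitscore jump) are straightforward to compute once the jumps are in hand. The numbers below are invented placeholders, not his actual dataset; only the procedure is illustrated.

```python
import statistics

# Hypothetical per-protein information jumps at the vertebrate transition,
# in bits (human-conserved bitscore in cartilaginous fishes minus the
# bitscore in pre-vertebrates). Placeholder values, not real data.
jumps = [12.0, 45.0, 99.0, 99.0, 140.0, 246.0, 310.0, 520.0, 1250.0]

mean_jump = statistics.mean(jumps)
median_jump = statistics.median(jumps)     # half the proteins fall below this
sd_jump = statistics.stdev(jumps)          # sample standard deviation
q75 = statistics.quantiles(jumps, n=4)[2]  # 75th centile

print(median_jump)  # 140.0
```

As in the real data, a long right tail (the 1250-bit outlier here) pulls the mean well above the median, which is why gpuccio reports the median as his central figure.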
ET at #459: Did Shaffner really write that? "coding for hundreds of specific antibodies that are highly functional, each precisely tuned to a protein on a particular pathogen or pathogen strain" This is just ignorance. However, I do not have the time now to explain why. I will do it later.gpuccio
August 27, 2019 at 07:09 AM PST
Steve Shaffner doubles-down:
I’ll offer the same example I gave in the other thread: the human immune system. Your body contains DNA with more than 500 bits of FI, coding for hundreds of specific antibodies that are highly functional, each precisely tuned to a protein on a particular pathogen or pathogen strain. You were not born with DNA that had that information in it; it was generated by a process of random mutation and selection.
Question-begging nonsense. And according to ID we were born with the information required to produce the immune system, and the immune system was the product of intelligent design. There isn't any evidence nor a way to test the claim that any immune system evolved via blind and mindless processes. If those existed, ID would have been a non-starter and this discussion would not be taking place. https://discourse.peacefulscience.org/t/gpuccio-functional-information-methodology/7549/95
ET
August 27, 2019 at 06:58 AM PST
Apparently GPuccio's time and effort are highly appreciated by a number of readers who most probably are following GP's discussion with his objectors at PS. Note that GP's current OP remains in the most popular list: Popular Posts (Last 30 Days)
Controlling the waves of dynamic, far from… (2,020) Darwinist Jeffrey Shallit asks, why can’t… (1,424) Are extinctions evidence of a divine purpose in life? (1,316) UD Newswatch: Epstein Suicide (971) “Descartes’ mind-body problem” makes nonsense of materialism (970)
Visited 3,961 times since posted July 10, 816 visits today!jawa
August 27, 2019 at 06:14 AM PST
And Timothy Horton continues the question-begging and stupidity:
That also fails because biological systems show such high FI values without being designed.
No evidence provided.
Objects produced with evolutionary algorithms also show the process can increase FI with the amount only being limited by how long the algorithm is allowed to run.
Evolutionary algorithms are examples of evolution by means of telic processes. Evos will just say anything without caring how ignorant it makes them appear.ET
August 27, 2019 at 06:05 AM PST
They exclude evolution by means of intelligent design. To them all evolution has to be blind and mindless and evolution by design can never even exist. And yet that is what ID says- that organisms were intelligently designed with the ability to adapt and evolve. As Dr. Spetner wrote:
He [the Designer] indeed seems to have “carefully crafted” information in His species giving them the ability to respond to environmental stimuli to alter their own genome to adapt to new environments. He then evidently let them wander where they will with the ability to adapt.- Dr. Lee Spetner “the Evolution Revolution” p 108
ET
August 27, 2019 at 05:58 AM PST
Yes, gpuccio. As I have said you are facing a steep, uphill climb with that group. But it should allow you to refine your arguments, anyway. They might not listen but others will.ET
August 27, 2019 at 05:55 AM PST
ET at #452: So, for Rumraket it is no issue at all? Just iterations? Good to know! :)gpuccio
August 27, 2019 at 05:54 AM PST
ET at #450: "The immune system was intelligently designed with the ability to do that, Steve. Producing the immune system via blind and mindless processes is what is impossible." Of course. Are they really offering that kind of argument?gpuccio
August 27, 2019 at 05:52 AM PST
Rumraket:
Another issue is that no attempt is made at evaluating the probability of the design event.
LoL! That's because the odds of a designing agency being able to design what they do is exactly 1 to 1. The probability of being dealt a hand of cards in a poker game is 1. The probability of someone rolling dice in a craps game is 1.
Whether you look at the evolutionary history of individual proteins, or the complete genomes in which the genes encoding these proteins are encoded, you can see how through many iterations, a sequence that looks like it is very unlikely and has high FI could have evolved. It’s really no issue at all.
Especially if you don't care about science. However, science requires your claims to be testable and, as of today, they are not.ET
August 27, 2019 at 05:50 AM PST
Swamidass et al. (at PS): So, here is your question #1:
What empirical evidence do you have that demonstrates FI increases are unique to design?
I have explained that the connection is empirical, even if with a good rationale. I quote myself: Leaving aside biological objects (for the moment), there is not one single example in the whole known universe where FI higher than 500 bits arises without any intervention of design. On the contrary, FI higher than 500 bits (often much higher than that) abounds in designed objects. I mean human artifacts here. This is the empirical connection. Based on observed facts. Of course, you are not convinced. You ask for more, and you raise objections and promises of counter-examples. That’s very good. So, let’s go to my two statements. I will try to support them both. But in reverse order. My second statement is: “FI higher than 500 bits (often much higher than that) abounds in designed objects. I mean human artifacts here.” Your objection:
As a technical point, without clarifying precisely how FI is defined, this is not at all clearly the case.
But I have given a very precise definition of FI. What is the problem here? To be more clear, I will describe here the three main classes of human artifacts, designed objects, where “FI higher than 500 bits (often much higher than that) abounds”. They are: a) Language b) Software c) Machines The first two are in digital form, so I will use one of them as an example, in particular language. I have shown in detail how FI can be indirectly computed, as a lower threshold, for a piece of language. I link here my OP about that: An Attempt At Computing DFSCI For English Language https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/ A clarification: dFSCI is the acronym I used for some time in the past to point to the specific type of information I was discussing. It means digital Functionally Specified Complex Information. It was probably too complicated, so later I started to use just Functional Information, specifying when it is in digital form. The piece of language I analyze in the OP is a Shakespeare sonnet (one of my favourites, I must say). My simple conclusion is that a reliable lower threshold for the FI of such a sonnet is more than 800 bits. The true FI is certainly much more than that. There has been a lot of discussion about that OP, but nobody, even on the other side, has really questioned my procedure. So, this is a good example of how to compute FI in language, and of one object that has much more than 500 bits of FI. And is designed. Of course, Hamlet or any other Shakespeare drama certainly has a much higher FI than that. The same point can easily be made for software, and for machines (which are usually analog, so in that case the procedure is less simple). So, I think that I have explained and supported my second point. If you still do not have a clear understanding of my definition of FI, and how to apply it to that kind of artifacts, please explain why. So, let’s go to my first statement.
“Leaving aside biological objects (for the moment), there is not one single example in the whole known universe where FI higher than 500 bits arises without any intervention of design.” I maintain that. Absolutely. Your objection:
Why should we agree with this? It seems obviously false. What evidence can you present to support this assumption? There are examples of non-designed processes we can directly observe producing FI. We can observe high amounts of FI in cancer evolution too, which you agree is not designed. We also see high amounts of FI in viruses, which you also agree are not designed. All these, and more, are counter-examples to your assumption.
OK, I invite you and anybody else to present and defend one single counter-example. Please do it. You mention two things that you have offered before. a) Cancer b) Viruses. I have already declared that cancer is not an example of a design system, and I maintain it. Technically, it is a biological example, but as I have agreed that it is not a design system, I am ready to discuss it to show that you are wrong in this case. I want, however, to clarify that I stick to my declared principle to avoid any theological reference in my discussions. I absolutely agree that cancer is not designed, but the reason has nothing to do with the idea that “God would not do it”. It is not designed because facts show no indication of design there. I am ready to discuss that, referring to your posts here about the issue. For viruses, I was, if you remember, more cautious. And I still am. The reason is that I do not understand well your point. Are you referring to the existence of viruses, or to their ability to quickly adapt? For the second point, I would think that it is usually a non design scenario, fully in the range of what RV + NS can do. I must state again, however, that I am not very confident in that field, so I could be wrong in what I say. For the first point, I am rather confident that viruses have high levels of FI in their small genomes, and proteins. They are not extremely complex, but still the genes and proteins, IMO, are certainly designed. So, are viruses designed? Probably. My only doubt is that I don’t understand well what are the current theories about the origin of viruses. My impression is that there is still great uncertainty about that issue. I would be happy to hear what you think. In a sense, viruses could be derived from bacteria or other organisms. Their FI could originate elsewhere. But again, I have not dealt in depth with those issues, and I am ready to accept any ideas or suggestions. Again, I have no problem with the idea that viruses may be designed. 
If they are, they are. So, my support of my first statement is very simple. I maintain that empirically there is no known example of non-biological objects exhibiting more than 500 bits of FI that are not designed human artifacts. I invite everyone, including you, to present one counter-example and defend it. I am also ready to discuss your biological example of cancer. That requires, of course, a separate discussion in a later comment. For viruses, please explain better what your point is. The information in their genes and proteins is of course complex, and designed. Their adaptations, instead, as far as I can understand, do not generate any complex FI.gpuccio
August 27, 2019 at 05:49 AM PST
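As a purely illustrative sketch of how a lower bound like the "more than 800 bits" figure for a sonnet could arise, one can subtract an estimate of the target space (meaningful English strings) from the size of the search space (all strings of that length). To be clear: this is not the actual procedure from the linked OP, and both the alphabet size and the cap on meaningful strings below are made-up assumptions.

```python
from math import log2

# Made-up illustrative assumptions (not gpuccio's actual method):
sonnet_length = 600   # rough character count of a Shakespeare sonnet
alphabet_size = 27    # 26 letters plus space

# Search space: all strings of that length over the alphabet.
search_space_bits = sonnet_length * log2(alphabet_size)  # about 2853 bits

# Target space: a deliberately generous, invented cap on the number of
# meaningful English strings of that length, say 2**2000 of them.
target_space_bits = 2000

fi_lower_bound = search_space_bits - target_space_bits
print(round(fi_lower_bound))  # comfortably above 800 bits
```

The looser (more generous) the cap on the target space, the lower and therefore safer the resulting bound, which is the spirit of treating such estimates as lower thresholds.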