Uncommon Descent Serving The Intelligent Design Community

Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation.

Categories: Intelligent Design

I have recently commented on another thread:

about a paper that (very correctly) describes cells as dynamic, far from equilibrium systems, rather than as “traditional” machines.

That is true. But, of course, the cell implements the same functions as complex machines do, and much more. My simple point is that, to do that, you need much greater functional complexity than you need to realize a conventional machine.

IOWs, dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. I have commented on those ideas in the mentioned thread, at posts #5 and #8, and I have quoted at posts #11 and #12 a couple of interesting and pertinent papers, introducing the important concept of robustness: the ability to achieve reliable functional results in spite of random noise and disturbing variation.

In this OP, I would like to present in some detail a very interesting system that shows very well what we can understand, at present, about this kind of amazing system.

The system I will discuss here is an old friend: it is the NF-kB system of transcription factors (nuclear factor kappa-light-chain-enhancer of activated B cells). We are speaking, therefore, of transcription regulation, a very complex topic that I have already discussed in some depth here:

I will briefly recall here that transcription regulation is the very complex process that allows cells to be completely different while using the same genomic information: IOWs, each type of cell “reads” the genes in the common genome differently, and that allows both the different types of cell differentiation and the different cell responses within the same cell type.

Transcription regulation relies on many different levels of control, that are summarized in the above quoted OP, but a key role is certainly played by Transcription Factors (TFs), proteins that bind DNA and act as activators or inhibitors of transcription at specific sites.

TFs are a fascinating class of proteins. There are a lot of them (1600 – 2000 in humans, almost 10% of all proteins), and they are usually medium sized proteins, about 500 AA long, containing at least one highly conserved domain, the DNA binding domain (DBD), and other, often less understood, functional components.

I quote again here a recent review about human TFs:

The Human Transcription Factors

The NF-kB system is a system of TFs. I have discussed it in some detail in the discussion following the Ubiquitin thread, but I will describe it in a more systematic way here.

In general, I will refer a lot to this very recent paper about it:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The NF-kB system relies essentially on 5 different TFs (see Fig. 1 A in the paper):

  1. RelA (551 AAs)
  2. RelB (579 AAs)
  3. c-Rel (619 AAs)
  4. p105/p50 (968 AAs)
  5. p100/p52 (900 AAs)

Those 5 TFs work by forming dimers, homodimers or heterodimers, for a total of 15 possible combinations, all of which have been found to work in the cell, even if some of them are much more common.
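
Just to make the combinatorics explicit, here is a minimal Python sketch (my own illustration; subunit names as in the list above) that enumerates the unordered pairs:

```python
from itertools import combinations_with_replacement

# The five NF-kB subunits listed above (p105 and p100 are processed
# to their DNA-binding forms p50 and p52).
subunits = ["RelA", "RelB", "c-Rel", "p50", "p52"]

# Unordered pairs: 5 homodimers + C(5,2) = 10 heterodimers = 15 dimers.
dimers = list(combinations_with_replacement(subunits, 2))
print(len(dimers))  # 15
```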

Then there are at least 4 inhibitor proteins, collectively called IkBs.

The mechanism is apparently simple enough. The dimers are inhibited by IkBs and therefore they remain in the cytoplasm in inactive form.

When an appropriate signal arrives at the cell and is received by a membrane receptor, the inhibitor (the IkB molecule) is phosphorylated by a protein complex called IKK, and then ubiquitinated and detached from the complex. The free dimer can then migrate to the nucleus and localize there, where it can act as a TF, binding DNA.

This is the canonical activation pathway, summarized in Fig. 1. There is also a non canonical activation pathway, that we will not discuss for the moment.


Mechanism of NF-κB action. In this figure, the NF-κB heterodimer consisting of Rel and p50 proteins is used as an example. While in an inactivated state, NF-κB is located in the cytosol complexed with the inhibitory protein IκBα. Through the intermediacy of integral membrane receptors, a variety of extracellular signals can activate the enzyme IκB kinase (IKK). IKK, in turn, phosphorylates the IκBα protein, which results in ubiquitination, dissociation of IκBα from NF-κB, and eventual degradation of IκBα by the proteasome. The activated NF-κB is then translocated into the nucleus where it binds to specific sequences of DNA called response elements (RE). The DNA/NF-κB complex then recruits other proteins such as coactivators and RNA polymerase, which transcribe downstream DNA into mRNA. In turn, mRNA is translated into protein, resulting in a change of cell function.

Attribution: Boghog2 at English Wikipedia [Public domain]
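
As a rough illustration of the logic in the caption above, the canonical activation can be thought of as a simple state change. This is a minimal sketch of my own, not a quantitative model; the class and method names are placeholders:

```python
# Minimal state sketch of canonical NF-kB activation (illustrative only).
class NFkBDimer:
    def __init__(self):
        self.location = "cytosol"
        self.bound_to_IkB = True       # held inactive by the inhibitor

    def receive_signal(self):
        # Signal -> IKK phosphorylates IkB -> IkB is ubiquitinated
        # and degraded by the proteasome, freeing the dimer.
        self.bound_to_IkB = False

    def translocate(self):
        if not self.bound_to_IkB:
            self.location = "nucleus"  # free dimer can now bind kB sites

dimer = NFkBDimer()
dimer.receive_signal()
dimer.translocate()
print(dimer.location)  # nucleus
```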

Now, the purpose of this OP is to show, in greater detail, how this mechanism, apparently moderately simple, is indeed extremely complex and dynamic. Let’s see.

The stimuli.

First of all, we must understand what stimuli, arriving at the cell membrane, are capable of activating the NF-kB system. IOWs, what signals work as inputs.

The main concept is: the NF-kB system is a central pathway activated by many stimuli:

  1. Inflammation
  2. Stress
  3. Free radicals
  4. Infections
  5. Radiation
  6. Immune stimulation

IOWs, a wide variety of aggressive stimuli can activate the system.

The extracellular signal usually arrives at the cell through specific cytokines, for example TNF or IL1, or through pathogen-associated molecules, like bacterial lipopolysaccharides (LPS). Of course there are different and specific membrane receptors, in particular IL-1R (for IL1), TNF-R (for TNF), and many TLRs (Toll-like receptors, for pathogen-associated structures). A special kind of activation is implemented, in B and T lymphocytes, by the immune activation of the specific receptors for antigen epitopes (the B cell receptor, BCR, and the T cell receptor, TCR).

The process through which the activated receptor can activate the NF-kB dimer is rather complex: it involves, in the canonical pathway, a macromolecular complex called IKK (IkB kinase) complex, comprising two catalytic kinase subunits (IKKa and IKKb) and a regulatory protein (IKKg/NEMO), and involving in multiple and complex ways the ubiquitin system. The non canonical pathway is a variation of that. Finally, a specific protein complex (CBM complex or CBM signalosome) mediates the transmission from the immune BCR or TCR to the canonical pathway. See Fig. 2:

From: NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Increased-activity-of-the-CARMA1-BCL10-MALT1-signalosome-drives-constitutive-NF-kB_fig2_324089636 [accessed 10 Jul, 2019]
Figure 3 – NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy
available via license: Creative Commons Attribution 4.0 International

I will not go into further details about this part, but those interested can have a look at this very good paper:

TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

In particular, Figs. 1, 2, and 3.

In the end, as a result of the activation process, the IkB inhibitor is degraded by the ubiquitin system, and the NF-kB dimer is free to migrate to the nucleus.

An important concept is that this is a “rapid-acting” response system: the dimers are already present, in inactive form, in the cytoplasm, and need not be synthesized de novo, so the system is ready to respond to the activating signal.

The response.

But what is the cellular response?

Again, there are multiple and complex possible responses.

Essentially, this system is a major regulator of innate and adaptive immune responses. As such, it has a central role in the regulation of inflammation, in immunity, in autoimmune processes, and in cancer.

Moreover, the NF-kB system is rather ubiquitous, and is present and active in many different cell types. And, as we have seen, it can be activated by different stimuli, in different ways.

So, the important point is that the response to activation must be (at least):

  1. Lineage-specific
  2. Stimulus-specific

IOWs, different cells must be able to respond differently, and each cell type must respond differently to different stimuli. That gives a wide range of possible gene expression patterns at the transcription level.

The following paper is a good review of the topic:

Selectivity of the NF-κB Response

For example, IL2 is induced by NF-kB activation in T cells, but not in B cells (lineage-specific response). Moreover, specific cell types can undergo specific, and often different, cell destinies after NF-kB activation: for example, NF-kB is strongly involved in the control and regulation of T and B cell development.

From:

30 years of NF-κB: a blossoming of relevance to human pathobiology

“B and T lymphocytes induce NF-κB in adaptive immune responses through the CARD11:Bcl10:MALT1 (CBM) complex (Hayden and Ghosh, 2008). Newly expressed genes promote lymphocyte proliferation and specific immune functions including antibody production by B cells and the generation of cytokines and other anti-pathogen responses by T cells.”

And, in the same cell type, certain promoters regulated by NF-kB require additional signaling (for example, in human dendritic cells the promoters for Il6, Il12b, and MCP-1 require additional p38-dependent histone phosphorylation to be activated), while others can be activated directly (stimulus-specific response).

So, to sum up:

  1. A variety of stimuli can activate the system in different ways
  2. The system itself has its complexity (different dimers)
  3. The response can be widely different, according to the cell type where it happens, and to the type of stimuli that have activated the system, and probably according to other complex variables.
  4. The possible responses include a wide range of regulations of inflammation, of the immune system, of cell specifications or modifications, and so on.

How does it work?

So, what do we know about the working of such a system?

I will ignore, for the moment, the many complexities of the activation pathways, both canonical and non canonical, the role of cytokines and receptors and IKK complexes, the many facets of NEMO and of the involvement of the ubiquitin system.

For simplicity, we will start with the activated system: the IkB inhibitor has been released from the inactive complex in the cytoplasm, and some form of NF-kB dimer is ready to migrate to the nucleus.

Let’s remember that the purpose of this OP is to show that the system works as a dynamic, far from equilibrium system, rather than as a “traditional” machine. And that such a way to work is an even more amazing example of design and functional complexity.

To do that, I will rely mainly on the recent paper quoted at the beginning:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The paper is essentially about the NF-kB Target Selection Puzzle. IOWs, it tries to analyze what we know about the specificity of the response. How are specific patterns of transcription achieved after the activation of the system? What mechanisms allow the selection of the right genes to be transcribed (the targets) to implement the specific patterns according to cell type, context, and type of stimuli?

A “traditional” view of the system as a machine would try to establish rather fixed connections. For example, some type of dimer is connected to specific stimuli, and evokes specific gene patterns. Or some other component modulates the effect of NF-kB, generating diversification and specificity of the response.

Well, those ideas are not completely wrong. In a sense, the system does work also that way. Dimer specificity has a role. Other components have a role. In a sense, but only in a sense, the system works as though it were a traditional machine, and uses some of the mechanisms that we find in the concept of a traditional biological machine.

But that is only a tiny part of the real thing.

The real thing is that the system really works as a dynamic, far from equilibrium system, harnessing huge random/stochastic components to achieve robustness, complexity and flexibility of behavior in spite of all those non-finalistic parts.

Let’s see how that happens, at least for the limited understanding we have of it. It is important to consider that this is a system that has been studied a lot, for decades, because of its central role in so many physiological and pathological contexts, and so we know many things. But still, our understanding is very limited, as you will see.

So, let’s go back to the paper. I will try to summarize as simply as possible the main concepts. Anyone who is really interested can refer to the paper itself.

Essentially, the paper analyzes three important and different aspects that contribute to the selection of targets at the genomic level by our TFs (IOWs, our NF-kB dimers, ready to migrate to the nucleus). As the title itself summarizes, they are:

  1. Abundance
  2. Affinity
  3. Binding site availability

1. Abundance

Abundance here refers to two different variables: the abundance of NF-kB binding sites in the genome and the abundance of nucleus-localized NF-kB dimers. Let’s consider them separately.

1a) Abundance of NF-kB Binding Sites in the genome:

It is well known that TFs bind specific sites in the genome. For NF-kB TFs, the following consensus kB site pattern has been found:

 5′-GGGRNWYYCC-3′

where R, W, Y, and N, respectively denote purine, adenine or thymine, pyrimidine, and any nucleotide.

That simply means that any sequence corresponding to that pattern in the genome can, in principle, bind NF-kB dimers.
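
To make the notation concrete, here is a minimal Python sketch (the toy sequence and code are my own illustration, not from the paper) that expands the IUPAC codes into a regular expression and scans a sequence for candidate kB sites:

```python
import re

# IUPAC ambiguity codes used in the consensus.
IUPAC = {"R": "[AG]", "W": "[AT]", "Y": "[CT]", "N": "[ACGT]",
         "A": "A", "C": "C", "G": "G", "T": "T"}

consensus = "GGGRNWYYCC"
pattern = re.compile("".join(IUPAC[c] for c in consensus))

# Toy sequence containing two kB-like sites.
seq = "TTGGGACTTTCCAGGGAATTCCCA"
print([m.group() for m in pattern.finditer(seq)])
# ['GGGACTTTCC', 'GGGAATTCCC']
```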

So the problem is: how many such sequences exist in the human genome?

Well, a study based on RelA has estimated about 10^4 consensus sequences in the whole genome; but as NF-kB dimers seem to bind even incomplete consensus sites, the total number of potential binding sites could be nearer to 10^6.
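
For scale, a back-of-envelope count of exact consensus matches (my own arithmetic, assuming uniform base frequencies, which real genomes do not have) lands between those two figures:

```python
# Probability that a random position matches GGGRNWYYCC:
# 5 fixed bases (G,G,G,C,C), 4 two-fold codes (R,W,Y,Y), 1 four-fold (N).
p = (1 / 4) ** 5 * (1 / 2) ** 4   # ~6.1e-05
expected = 2 * 3.2e9 * p          # both strands of a ~3.2 Gb genome
print(expected)                   # ~3.9e+05 exact-consensus matches
```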

1b) Abundance of Nucleus-Localized NF-kB Dimers:

An estimate of the abundance of dimers in the nucleus after activation of the system is that about 1.5 × 10^5 molecules can be found, but again that is derived from studies about RelA only. Moreover, the number of molecules and type of dimer can probably vary much according to cell type.

So, the crucial variable, that is the ratio between binding sites and available dimers, which could help understand the degree of site saturation in the nucleus, remains rather undetermined, and it seems very likely that it can vary a lot in different circumstances.
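
To see how wide that uncertainty is, compare the two extremes (my own arithmetic, using the figures quoted above):

```python
dimers = 1.5e5               # nucleus-localized dimers (RelA-based estimate)
for sites in (1e4, 1e6):     # strict consensus vs. degenerate sites
    print(sites / dimers)    # ~0.07 and ~6.7 sites per dimer
```

Depending on which count is closer to the truth, dimers either outnumber sites roughly 15 to 1 or are outnumbered almost 7 to 1, with opposite implications for saturation.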

But there is another very interesting aspect about the concentration of dimers in the nucleus. According to some studies, NF-kB seems to generate oscillations of its nuclear content in some cell types, and those oscillations can be a way to generate specific transcription patterns:

NF-kB oscillations translate into functionally related patterns of gene expression

For example, this very recent paper:

NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

shows at Fig. 3 the occupancy curve of binding sites at nuclear level after NF-kB activation in two different cell types.

In fibroblasts, the curve is a periodic oscillation, with a frequency that varies according to various factors, and translates into different transcription scenarios accordingly:

Gene expression dynamics scale with the period (g1) and amplitude (g2) of these oscillations, which are influenced by variables such as signal strength, duration, and receptor identity.


In macrophages, instead, the curve is rather:

a single, strong nuclear translocation event which persists for as long as the stimulus remains and tends to remain above baseline for an extended period of time.

In this case, the type of transcription will probably be regulated by the area under the curve, rather than by the period and amplitude of the oscillations, as happens in fibroblasts.
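
To visualize the contrast between the two regimes, here is a deliberately cartoonish simulation (my own illustration; the shapes and numbers are placeholders, not fitted to any dataset):

```python
import numpy as np

t = np.linspace(0, 10, 1001)   # time, arbitrary units

# Fibroblast-like: damped, rectified oscillation of nuclear NF-kB.
fibroblast = np.exp(-0.15 * t) * np.maximum(np.cos(2 * np.pi * t / 1.5), 0)

# Macrophage-like: single translocation that persists with the stimulus.
macrophage = 1 - np.exp(-2 * t)

# The readouts the text contrasts: period/amplitude vs. area under the curve.
print("fibroblast AUC:", np.trapz(fibroblast, t))
print("macrophage AUC:", np.trapz(macrophage, t))
```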

Interestingly, while in previous studies it seemed that the concentration of nuclear dimers could be sufficient to saturate most or all binding sites, that has been found not to be the case in more recent studies. Again from the paper about abundance:

in fact, this lack of saturation of the system is necessary to generate stimulus- and cell-type specific gene expression profiles

Moreover, the binding itself seems to be rather short-lived:

Interestingly, it is now thought that most functional NF-kB interactions with chromatin—interactions that lead to a change in transcription—are fleeting… a subsequent study using FRAP in live cells expressing RelA-GFP showed that most RelA-DNA interactions are actually quite dynamic, with half-lives of a few seconds… Indeed, a recent study used single-molecule tracking of individual Halo-tagged RelA molecules in live cells to show that the majority (∼96%) of RelA undergoes short-lived interactions lasting on average ∼0.5 s, while just ∼4% of RelA molecules form more stable complexes with a lifetime of ∼4 s.
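
Taking the quoted figures at face value, the average residence time is well under a second (my own arithmetic):

```python
# ~96% of RelA molecules dwell ~0.5 s, ~4% dwell ~4 s.
mean_dwell = 0.96 * 0.5 + 0.04 * 4.0
print(mean_dwell)  # 0.64 seconds on average
```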

2. Affinity

Affinity of dimers for DNA sequences is not a clear-cut matter. From the paper:

Biochemical DNA binding studies of a wide variety of 9–12 base-pair sequences have revealed that different NF-kB dimers bind far more sequences than previously thought, with different dimer species exhibiting specific but overlapping affinities for consensus and non-consensus kB site sequences.

IOWs, we have different dimers (15 different types) binding different DNA sequences with varying affinity (starting from the classical consensus sequence, but including incomplete sequences too). Remember that those sequences are rather short (the consensus sequence is 10 nucleotides long), and that there are thousands of such sequences in the genome.
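
How abundance and affinity combine can be sketched with a simple single-site binding equilibrium (the concentrations and Kd values below are hypothetical, chosen only to illustrate the overlap):

```python
def occupancy(conc_nM: float, kd_nM: float) -> float:
    """Fractional occupancy of one site at equilibrium."""
    return conc_nM / (conc_nM + kd_nM)

# Two dimer species at different abundances, two site classes.
for dimer, conc in [("RelA:p50", 100.0), ("RelB:p52", 10.0)]:
    for site, kd in [("consensus", 20.0), ("non-consensus", 200.0)]:
        print(dimer, site, round(occupancy(conc, kd), 2))
```

Note how an abundant dimer on a weak site can reach the same occupancy as a scarce dimer on a strong site (0.33 in both cases here), which is exactly why abundance and affinity cannot be considered separately.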

Moreover, different bindings can affect transcription differently. Again, from the paper:

How might different consensus kB sites modulate the activity of the NF-kB dimers? Structure-function studies have shown that binding to different consensus kB sites can alter the conformation of the bound NF-kB dimers, thus dictating dimer function. When an NF-kB dimer interacts with a DNA sequence, side chains of the amino acids located in the DNA-binding domains of dimers contact the bases exposed in the groove of the DNA. For different consensus kB site sequences different bases are exposed in this groove, and NF-kB seems to alter its conformation to maximize interactions with the DNA and maintain high binding affinity. Changes in conformation may in turn impact NF-kB binding to co-regulators of transcription, whether these are activating or inhibitory, to specify the strength and dynamics of the transcriptional response. These findings again highlight how the huge array of kB binding site sequences must play a key role in modulating the transcription of target genes.

Quite a complex scenario, I would say!

But there is more:

Finally, as an additional layer of dimer and sequence-specific regulation, each of the subunits can be phosphorylated at multiple sites with, depending on the site, effects on nearly every step of NF-kB activation.

IOWs, the 15 dimers we have mentioned can be phosphorylated in many different ways, and that changes their binding affinities and their effects on transcription.

This section of the paper ends with a very interesting statement:

Overall, when considering the various ways in which NF-kB dimer abundances and their affinity for DNA can be modulated, it becomes clear that with these multiple cascading effects, small differences in consensus kB site sequences and small a priori differences in interaction affinities can ultimately have a large impact on the transcriptional response to NF-kB pathway activation.

Emphasis mine.

This is interesting, because in some way it seems to suggest that the whole system acts like a chaotic system, at least at some basic level. IOWs, small initial differences, maybe even random noise, can potentially have a deep effect on the general working of the whole system.

Unless, of course, there is some higher, powerful level of control.

3. Availability of high affinity kB binding sequences

We have seen that there is a great abundance and variety of binding sequences for NF-kB dimers in the human genome. But, of course, those sequences are not necessarily available. Different cell types will have a different scenario of binding sites availability.

Why?

Because, as we know, the genome and chromatin are a very dynamic system, which can exist in many different states, continuously changing in different cell types and, in the same cell type, in different conditions.

We know rather well the many levels of control that affect DNA and chromatin state. In brief, they are essentially:

  1. DNA methylation
  2. Histone modifications (methylation, acetylation, etc)
  3. Chromatin modifications
  4. Higher levels of organization, including nuclear localization and TADs (Topologically Associating Domains)

For example, from the paper:

The promoter regions of early response genes have abundant histone acetylation or trimethylation prior to stimulation [e.g., H3K27ac, (67) and H4K20me3, (66)], a chromatin state “poised” for immediate activation…  In contrast, promoters of late genes often have hypo-acetylated histones, requiring conformational changes to the chromatin to become accessible. They are therefore unable to recruit NF-kB for up to several hours after stimulation (68), due to the slow process of chromatin remodeling.

We must remember that each wave of NF-kB activation translates into the modified transcription of a lot of different genes at the genome level. It is therefore extremely important to consider what genes are available (IOWs, which promoters can be reached by the NF-kB signal) in each cell type and cell state.
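
In code terms, chromatin state acts as a mask over the candidate sites. A minimal sketch with hypothetical entries; the names are placeholders for the promoter classes described in the quote above:

```python
# True = accessible ("poised") chromatin; False = needs remodeling first.
site_accessible = {
    "early_response_promoter": True,   # e.g., H3K27ac / H4K20me3 marked
    "late_gene_promoter": False,       # hypo-acetylated, hours to open
    "distal_enhancer": True,
}

reachable = [s for s, is_open in site_accessible.items() if is_open]
print(reachable)  # only these can recruit NF-kB immediately
```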

The paper concludes:

Taken together, chromatin state and chromatin organization strongly influence the selection of DNA binding sites by NF-kB dimers and, most likely, the selection of the target genes that are regulated by these protein-DNA interaction events. Analyses that consider binding events in the context of three-dimensional nuclear organization and chromatin composition will be required to generate a more accurate view of the ways in which NF-kB–DNA binding affects gene transcription.

This is the main scenario. But there are other components that I have not considered in detail, for the sake of brevity: for example, competition between NF-kB dimers, and the complex role and intervention of other co-regulators of transcription.

Does the system work?

But does the system work?

Of course it does. It is a central regulator, as we have said, of many extremely important biological processes, above all immunity. This is the system that decides how immune cells, T and B lymphocytes, have to behave, in terms of cell destiny and cell state. It is of huge relevance in all inflammatory responses, and in our defense against infections. It works, it works very well.

And what happens if it does not work properly?

Of course, like all very complex systems, errors can happen. Those interested can have a look at this recent paper:

30 years of NF-κB: a blossoming of relevance to human pathobiology

First of all, many serious genetic diseases have been linked to mutations in genes involved in the system. You can find a list in Table 1 of the above paper. Among them, for example, are some forms of SCID (severe combined immunodeficiency), one of the most severe genetic diseases of the immune system.

But, of course, a dysfunction of the NF-kB system has a very important role also in autoimmune diseases and in cancer.

Conclusions.

So, let’s try to sum up what we have seen here in the light of the original statement about biological systems that “are not machines”.

The NF-kB system is a perfect example. Even if we still understand very little of how it works, it is rather obvious that it is not a traditional machine.

A traditional machine would work differently. The signal would be transmitted from the membrane to the nucleus in the simplest possible way, without ambiguities and diversions. The Transcription Factor, once activated, would bind, at the level of the genome, very specific sites, each of them corresponding to a definite cascade of specific genes. The result would be clear cut, almost mechanical. Like a watch.

But that’s not the way things happen. There are myriads of variations, of ambiguities, of stochastic components.

The signal arrives at the membrane in multiple ways, each very different from the others: IL1, IL17, TNF, bacterial LPS, and immune activation of the B cell receptor (BCR) or the T cell receptor (TCR) are all possible signals.

The signal is translated to the NF-kB proteins in very different ways: canonical or non canonical activation, involving complex protein structures such as:

The CBM signalosome, intermediate between immune activation of BCR or TCR and canonical activation of the NF-kB. This complex is made of at least three proteins, CARD11, Bcl10 and MALT1.

The IKK complex in canonical activation: this is made of three proteins, IKK alpha, IKK beta, and NEMO. Its purpose is to phosphorylate IkB, the inhibitor of the dimers, so that it can be ubiquitinated and released from the dimer. Then the dimer can relocate to the nucleus.

Non canonical pathway: it involves the following phosphorylation cascade: NIK -> IKK alpha dimer -> RelB–p100 dimer -> RelB–p52 dimer (the final TF, after processing of p100 to p52). It operates during the development of lymphoid organs and is responsible for the generation of B and T lymphocytes.

Different kinds of activated dimers relocate to the nucleus.

Different dimers, in varying abundance, interact with many different binding sites: complete or incomplete consensus sites, and probably others. The interaction is usually brief, and it can generate an oscillating pattern, or a more stable one.

Completely different sets of genes are transcribed in different cell types and in different contexts, because of the interaction of NF-kB TFs with their promoters.

Many other factors and systems contribute to the final result.

The chromatin state of the cell at the moment of the NF-kB activation is essential to determine the accessibility of different binding sites, and therefore the final transcription pattern.

All these events and interactions are quick, unstable, far from equilibrium. A lot of possible random noise is involved.

In spite of that amazing complexity and potential stochastic nature of the system, reliable transcription regulation and results are obtained in most cases. Those results are essential to immune cell differentiation, immune response, both innate and adaptive, inflammation, apoptosis, and many other crucial cellular processes.

So, let’s go back to our initial question.

Is this the working of a machine?

Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.

But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.

It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.

It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

It is, from all points of view, amazing.

Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.

And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer must it be for a system like NF-kB? Or, for that matter, for any complex cellular system?

Do you still have any doubts?

Added graphic: The evolutionary history, in terms of human conserved information, of the three proteins in the CBM signalosome.
On the y axis, homologies with the human protein as bits per amino acid (bpa). On the x axis, approximate time of appearance in millions of years.
The graphic shows the big information jump in vertebrates for all three proteins, especially CARD11.


Added graphic: two very different proteins and their functional history


Added graphic (for Bill Cole). Functional history of Prp8, collagen, p53.
Comments
EugeneS: The statement was: "“Is the designer a biological organism? Is the designer a physical entity?” I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us." What I mean is that the continuing presence of one or more physical designers, with some physical body, should have left some trace, reasonably. A physical designer has to be physically present at all design interventions. And physical agents, usually, leave some trace of themselves. I mean, beyond the design itself. Of course the design itself is evidence of a designer. But in the case of a non physical designer, we don't expect to find further physical evidence, beyond the design itself. In the case of a physical designer, I would expect something, especially considering the many acts of design in natural history. This is what I meant.
gpuccio
July 16, 2019 at 09:13 AM PST
GP (101) "...we should have some evidence of that. But there is none." This is where you lost me. Isn't what you so painstakingly analyse here and in other OPs something that constitutes the said evidence? Maybe I am wrong and I have missed out part of the conversation. But it is exactly what we observe that strongly suggests design. It is precisely that. All the rest is immaterial. Consequently, it must be the evidence that you are saying does not exist. I hope I am just misinterpreting what you said there.
EugeneS
July 16, 2019 at 08:36 AM PST
PeterA and all: An interesting example of complexity is the CBM signalosome. As said briefly in the OP, it is a protein complex made of three proteins:
CARD11 (Q9BXL7): 1154 AAs in the human form. Also known as CARMA1.
BCL10 (O95999): 233 AAs in the human form.
MALT1 (Q9UDY8): 824 AAs in the human form.
These three proteins have the central role in transferring the signal from the specific immune receptors in B cells (BCR) and T cells (TCR) to the NF-kB activation system (see Fig. 3 in the OP). IOWs, they signal the recognition of an antigen by the specific receptors on B or T cells, and start the adaptive immune response. A very big task. The interesting part is that those proteins practically appear in vertebrates, because the adaptive immune system starts in jawed fishes. So, I have made the usual analysis for the information jump in vertebrates of these three proteins. Here are the results, which are rather impressive, especially for CARD11:
CARD11: absolute jump in bits: 1280; in bits per amino acid (bpa): 1.109185
BCL10: absolute jump in bits: 165.1; in bits per amino acid (bpa): 0.7085837
MALT1: absolute jump in bits: 554; in bits per amino acid (bpa): 0.6723301
I am adding to the OP a graphic that shows the evolutionary history of those three proteins, in terms of human conserved information.
gpuccio
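
(A quick arithmetic check, mine, of the bits-per-amino-acid figures quoted above: bpa is simply the absolute jump divided by the human protein length.)

```python
# (jump in bits, human protein length in AAs) from the comment above.
proteins = {"CARD11": (1280.0, 1154), "BCL10": (165.1, 233), "MALT1": (554.0, 824)}
for name, (jump, length) in proteins.items():
    print(name, round(jump / length, 4))
# CARD11 1.1092, BCL10 0.7086, MALT1 0.6723 -- matching the quoted bpa values
```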
July 16, 2019 at 07:22 AM PST
ET at #113:
I don’t see any issues with it.
Well, I do. Let's say that we have different ideas about that.
And for every genetic disease there are probably thousands of changes that do not cause one.
Of course. And they are called neutral or quasi-neutral random mutations. When they are present in more than 1% of the whole population, they are called polymorphisms.
gpuccio
July 16, 2019 at 07:00 AM PST
Silver Asiatic at #111
I think we have to say that we do not know. As previously, you stated that we do not know how complex the designer is. An algorithm is method of calculation which would be resident in the mind of the designer. The level of complexity of that algorithm, for a designer capable of creating life on earth, does not seem to be a problem.
I disagree. Algorithms, as I have already explained, are configurations of material objects. We were discussing algorithms on our planet, not imaginary algorithms in the mind of a conscious agent of whom we know almost nothing. My statement was about a real algorithm really implemented in material objects. To compute ATP synthase, that algorithm would certainly be much more complex than ATP synthase itself. But all these reasonings are silly. We have no example of algorithms in nature, even in the biological world, which do compute new complex functional objects. Must we still waste our time with fairy tales?
The algorithm could be computed by an immaterial entity. The designer, I think you’re saying, created immaterial consciousnesses (human) so could create immaterial algorithms that programmed life from the beginning. So, there would be one single Design act, and then everything after that is an output.
OK, I hope it's clear that this is the theory I am criticizing. Certainly not mine. And I have never said, or discussed, that "The designer created immaterial consciousnesses (human)". As said, ID can say nothing about the nature of consciousness. ID just says that functional information derives from consciousness. And the designer need not have "created" anything. Design is not creation. The designer designs biological information. Not human consciousness, or any other consciousness. Not "immaterial algorithms". Design is the configuration of material objects, starting from conscious representations of the designer. As said so many times.
If the computing agent is immaterial then you could have no scientific evidence of it.
Not true, as said. Immaterial realities that cause observable facts can be inferred from those facts. Instead, a physical algorithm existing on our planet should leave some trace of its physical existence. This was my simple point.
You propose an immaterial designer — it is subject to conditions of space and time? In any case, that proposal can have no scientific value. Science cannot directly investigate immaterial entities. Science can look at effects of entities, but cannot evaluate them.
Not having a physical body does not necessarily mean that an entity is not subject to space and time. The interventions of the designer on matter are certainly subject to those things. About science, I have already answered. Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.
gpuccio
July 16, 2019 at 06:54 AM PST
Gp states, "I think that at present universality seems more likely, but I am not really sure. I think the question remains open." Thank you very much for at least admitting that degree of humility on your part.
bornagain77
July 16, 2019 at 06:39 AM PST
Silver Asiatic at #110:
I think what science can show is that 1.7 million bits of FI appear. What is not shown is how they appeared there.
Well, when you have facts, science has to propose hypotheses to explain them. Neo-darwinism is one hypothesis, and it does not explain what it should explain. Design is another hypothesis. You can't just say: it happened, and not try to explain it. That's not science.
Regarding a complex algorithm, the designing-mind itself had to have immense capabilities. Algorithms are programmatic functions which start in a mind. Organisms could have been programmed to trigger innovations over time.
Everything is possible. But my points are: a) There is no trace of those algorithms. They are just figments of the imagination. b) There are severe limits to what an algorithm can do. An algorithm cannot find solutions to problems for which it has not been programmed to find solutions. An algorithm just computes. Only consciousness has cognitive representations, understanding and purpose. Regarding innovations, I am afraid they are limited to what Behe describes, plus maybe some limited cases of simple computational adaptation. Innovations exist, but they are always simple.
Here is where it starts to get difficult. On the same basis that we say that there is no evidence of physical designers, we have to say there is no evidence if immaterial designers. Science cannot evaluate immaterial entities. So, our speculations here take us outside of science. I don’t think we can say that we have empirical evidence of immaterial entities or beings. The absence of evidence (in this case of physical designers) does not mean that we have direct evidence of immaterial designers.
I strongly disagree. Here you are indeed assuming methodological naturalism, something that I consider truly bad philosophy of science (even if I have been recently accused of doing exactly that). Science can investigate anything that produces observable facts. In no way is it limited to "matter". Indeed, many of the most important concepts in science have nothing to do with matter. And science does debate ideas and realities about which we still have no clear understanding; see dark matter and especially dark energy. Why? Because those things, whatever they may be, seem to have detectable effects, to generate facts. Moreover, consciousness is in itself a fact. It is subjectively perceived by each of us (you too, I suppose). Therefore it can and must be investigated by science, even if, at present, science has no clear theory about what consciousness is. Design is an effect of consciousness. There is no evidence that consciousness needs to be physical. Indeed, there is good evidence of the contrary, but I will not discuss it now. However, design, functional information and consciousness are certainly facts that need to be investigated by science. Even if the best explanation, maybe the only one, is the intervention of some non physical conscious agent.
That is good. We do not know if there is one or multiple designers, or if the designer of life is the same as the one who developed and changed life. But some designing intelligence existed before life on earth did.
Correct.
That designer would not be a terrestrial, biological entity.
Not physical, therefore not biological. Terrestrial? I don't know. A non physical entity could well, in principle, be specially connected to our planet. Or not, of course. If we don't know, we don't know.
I don’t think we have any direct, scientific experience with an immaterial, pre-biological conscious entity. Additionally, we do not see that human consciousness can create life, for example, or that it could mutate bacteria to create plants, birds, fish, animals and other humans. We don’t see that human consciousness can intervene and create other consciousnesses. We might say that the entire population of human beings has affected the earth – would this suggest that there is a huge population of designers affecting mutations?
You seem to be confusing three different concepts: functional information, life and consciousness. ID is about the origin of functional information, in particular the functional information we observe in living organisms. It can say nothing about what life and consciousness are, least of all about how to generate those things. Functional information is a configuration of material objects to implement some function in the world we observe. Nothing else. Complex functional information originates only from conscious agents (we know that empirically), but it tells us nothing about what consciousness is or how it is generated. And life itself cannot easily be defined, and it is probably more than the information it needs to exist. As humans, we can design functional information. We can also design biological functional information, even rather complex. OK, we are not really very good. We cannot design anything like ATP synthase. But, in time, we can improve. Designers can design complex functional information. More or less complex, good or bad. But they can do it. But human designers, at present, cannot generate life. Indeed, we don't even know what life is. Even more, that is true of consciousness. And again, I don't think we can say how many designers have contributed to biological design. Period.
Even if it is only cells where there were innovations that seems to be quite a lot of intervention.
It is a lot of intervention. And so?
I think this cuts against your concern about complex algorithms. The designer may be very complex. Algorithms created by the designer may be complex also.
He could also be very simple.
I do not think that science has established that human consciousness is a non-physical entity, or that human consciousness can exist separated from a body.
Science has established practically nothing about the nature of consciousness. But there is time. Certainly, it has not established that consciousness derives from the physical body.
The options I see for this introduction of information are: 1. Direct creation of vertibrates 2. Guided or tweaked mutations 3. Pre-programmed innovations that were triggered by various criteria 4. Mutation rates are not constant but can be accelerated at times 5. We don’t know
5 is true enough, but after that 2 is the only reasonable hypothesis. Intelligent selection can have a role too, of course, like in human protein engineering. But I think that transposons act as a form of guided mutation.
gpuccio
July 16, 2019 at 06:37 AM PST
gpuccio:
Do you really believe that any generic algorithm would design such a thing, if the designer does not code the idea and a lot of details in the algorithm itsef?
I don't see any issues with it. There is a Scientific American article from over a decade ago titled "Evolving Inventions". One invention had a transistor in it that did not have its output connected to anything. The point being that the only details required are what is needed to get the job done, i.e., connecting a "P" to ADP.
But there are a lot of facts that tell us that most mutations are random, neutral or deleterious. I have mentioned the many human diseases caused by mutations that follow no specific pattern, both normal mutations and transposon-associated ones.
And for every genetic disease there are probably thousands of changes that do not cause one.
ET
July 16, 2019 at 06:35 AM PST
ET at #109:
The Designer is never seen.
Correct. But, as I have said, the designer need not be physical. I believe that consciousness can exist without being necessarily connected to a physical body. I have explained at #101 (to SilverAsiatic). I quote myself: “Is the designer a biological organism? Is the designer a physical entity?” I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us. An algorithm, instead, needs to be physically instantiated. An algorithm is not a conscious agent. It works like a machine. It needs a physical "body" to exist and work.
The way ATP synthase works, by squeezing the added “P” onto ADP and not by some chemical reaction, is a clue- for me, anyway. It just seems like something an algorithm would tease out- and that comes from knowledge of many GA’s that have created human inventions.
ATP synthase squeezes the P using mechanical force from a proton gradient. It works like a water mill. Do you really believe that any generic algorithm would design such a thing, if the designer does not code the idea and a lot of details in the algorithm itself? Algorithms compute, and do nothing else. They are sophisticated abacuses, nothing more. The amazing things that they do are simply due to the specific configurations designed for them by conscious intelligent beings. Maybe the designer needed some algorithm to do the computations, if his computing ability is limited, like ours. Maybe not. But, if he used some algorithm, it seems not to have happened on this planet, or he accurately destroyed any trace of it. Don't you think that these are just ad hoc reasonings?
I would love to see how you made that determination, especially in the light of the following:
I am not aware that what Spetner says is true by default. Again, I don't know his thought in detail, and I don't want to judge. But there are a lot of facts that tell us that most mutations are random, neutral or deleterious. I have mentioned the many human diseases caused by mutations that follow no specific pattern, both normal mutations and transposon-associated. See comments #64 and #96. The always precious Behe has clearly shown that differentiation at low level (let's say inside families) is just a matter of adaptation through loss of information, never a generation of new functional information. To be clear, the loss of information is random, due to deleterious mutations, and the adaptation is favoured by an occasional advantage gained in specific environments, therefore by NS. This is the level where the neo-darwinian model works. But without generating any new functional information. Just by losing part of it. This is Behe's model (see polar bears). And it is mine, too. For the rest, actual design is always needed.
gpuccio
July 16, 2019 at 05:58 AM PST
GP
So, I think we can agree that any algorithm that can compute the sequences for ATP synthase would be, by far, more complex than ATP synthase itself.
I think we have to say that we do not know. As previously, you stated that we do not know how complex the designer is. An algorithm is method of calculation which would be resident in the mind of the designer. The level of complexity of that algorithm, for a designer capable of creating life on earth, does not seem to be a problem.
So, he designs the algorithm. But, of course, he must implement it in a material object. A material object that can do the computations and then build the computed outcome (IOWs, ATP synthase).
The algorithm could be computed by an immaterial entity. The designer, I think you're saying, created immaterial consciousnesses (human) so could create immaterial algorithms that programmed life from the beginning. So, there would be one single Design act, and then everything after that is an output.
OK, so my simple question is: where is, or was, that object? The computing object? I am aware of nothing like that in the known universe.
If the computing agent is immaterial then you could have no scientific evidence of it.
So, what are we hypothesizing, that 4 billion years ago the designer designed and implemented some extremely complex organism capable of computing ATP synthase, only to compute ATP synthase for bacteria, and that such a complex organism then disappeared without leaving any trace of itself?
I think we are saying that science cannot know this. Additionally, you refer to "the designer" but there could be millions of designers. Again, science cannot make a statement on that.
What’s the sense of such a scenario? What scientific value has it? The answer is simple: none.
You propose an immaterial designer -- it is subject to conditions of space and time? In any case, that proposal can have no scientific value. Science cannot directly investigate immaterial entities. Science can look at effects of entities, but cannot evaluate them.
Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.
I don't think that conclusion is obvious. Why did the design have to occur when needed and not before? And again, the algorithm could have been administered by an immaterial agent, which we never could observe scientifically. There's no way for science to know this.
Silver Asiatic
July 16, 2019 at 05:48 AM PST
Gpuccio, Thank you for your detailed replies on some complex questions. You explained your thoughts very clearly and well.
Of course, there is instead no evidence at all that the organisms that existed before included any complex algorithm capable of computing those 1.7 million bits of functional information.
I think what science can show is that 1.7 million bits of FI appear. What is not shown is how they appeared there. Regarding a complex algorithm, the designing-mind itself had to have immense capabilities. Algorithms are programmatic functions which start in a mind. Organisms could have been programmed to trigger innovations over time.
While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.
Here is where it starts to get difficult. On the same basis that we say that there is no evidence of physical designers, we have to say there is no evidence of immaterial designers. Science cannot evaluate immaterial entities. So, our speculations here take us outside of science. I don't think we can say that we have empirical evidence of immaterial entities or beings. The absence of evidence (in this case of physical designers) does not mean that we have direct evidence of immaterial designers.
This is easy. A designer was certainly responsible for OOL on our planet. OOL is of course one of the events that scream design with the highest strength. So the answer is yes: the designer, or at least the designer who designed life on our planet, certainly existed before.
That is good. We do not know if there is one or multiple designers, or if the designer of life is the same as the one who developed and changed life. But some designing intelligence existed before life on earth did. That designer would not be a terrestrial, biological entity.
Well, we humans, as conscious beings, are entities capable of intervening inside the functions of most cells in our brain or nervous system, and many in our bodies. That's how our consciousness is interfaced to our body. Why shouldn't some other conscious entity be able to do something similar with biological organisms?
I don't think we have any direct, scientific experience with an immaterial, pre-biological conscious entity. Additionally, we do not see that human consciousness can create life, for example, or that it could mutate bacteria to create plants, birds, fish, animals and other humans. We don't see that human consciousness can intervene and create other consciousnesses. We might say that the entire population of human beings has affected the earth - would this suggest that there is a huge population of designers affecting mutations?
And again, there is no need that the interface reach all cells of all organisms. The strict requirement is for those organisms where the design takes place.
I'd think that the activity of mutations within organisms is such that a continual monitoring would be required in order to achieve designed effects, but perhaps not. Even if it is only cells where there were innovations that seems to be quite a lot of intervention.
We don’t know. How complex is our consciousness, if separated from our body? We don’t know how complex non physical entities need to be. Maybe the designer is very simple. Or not.
I think this cuts against your concern about complex algorithms. The designer may be very complex. Algorithms created by the designer may be complex also. Additionally, I do not think that science has established that human consciousness is a non-physical entity, or that human consciousness can exist separated from a body.
I don’t know when or how the designer decides things. But I know when he does things. For example, he introduced the functional information for vertebrates, all those 1.7 million bits, in some pre-existing organism (probably the first chordates), approximately in those 20 million years when vertebrates appear on earth.
The options I see for this introduction of information are: 1. Direct creation of vertebrates 2. Guided or tweaked mutations 3. Pre-programmed innovations that were triggered by various criteria 4. Mutation rates are not constant but can be accelerated at times 5. We don't know
Silver Asiatic
July 16, 2019 at 05:38 AM PST
gpuccio:
Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.
The Designer is never seen. The point of the algorithm was to address the "how" the Intelligent Designer designed living organisms and their complex parts and systems. The way ATP synthase works, by squeezing the added "P" onto ADP and not by some chemical reaction, is a clue- for me, anyway. It just seems like something an algorithm would tease out- and that comes from knowledge of many GA's that have created human inventions.
That would still leave 99.7% of all mutations that could be random. Indeed, that are random.
I would love to see how you made that determination, especially in the light of the following:
He [the Designer] indeed seems to have “carefully crafted” information in His species giving them the ability to respond to environmental stimuli to alter their own genome to adapt to new environments. He then evidently let them wander where they will with the ability to adapt.- Dr. Lee Spetner “the Evolution Revolution” p 108
ET
July 16, 2019 at 05:28 AM PST
PeterA: Maybe you can look at this more detailed figure for the different stimuli, receptors and receptor connections to the activation pathway: https://rockland-inc.com/nfkb-signaling-pathway.aspx
gpuccio
July 16, 2019 at 05:17 AM PST
Pw: "Could the answer include the following issues?" Yes, of course. "You have pointed to the intentional insertion of transposable elements into the genetic code as another empirical evidence. I think you've also mentioned the splicing mechanisms. Perhaps any of the complex functional mechanisms that appeared at some points could be attributed to conscious intentional design?" All of them, if they are functionally complex. That's the theory. That's ID. The procedure, if correctly applied, should have no false positives. "Does CD stand for common design or common descent with designed modifications?" CD stands just for "common descent". I suppose that each person can add his personal connotations. Possibly making them explicit in the discussion. I have explained that for me common descent just means a physical continuity between organisms, but that all new complex functional information is certainly designed. Without exceptions. So, I suppose that "common descent with designed modifications" is a good way to put it. Just a note about the universality. Facts are very strong in supporting common descent (in the sense I have specified). It remains open, IMO, if it is really universal: IOWs, if all forms of life have some continuity with a single original event of OOL, or if more than one event of OOL took place. I think that at present universality seems more likely, but I am not really sure. I think the question remains open. For example, some differences between bacteria and archaea are rather amazing. "Does "common" relate to the observed similarities?" Common, in my version of CD, refers to the physical derivation (for existing information) from one common ancestor. So, let's say that at some time there was in the ocean a common ancestor of vertebrates: maybe some form of chordate. And at some time, vertebrates are already split into cartilaginous fish and bony fish. If both cartilaginous fish and bony fish physically reuse the same old information from a common ancestor, that is common descent, even if, of course, all the new information is added by specific design. I really don't understand how that could be explained without any form of physical descent. Do they really believe that cartilaginous fish were designed from scratch, from inanimate matter, and that bony fish were too designed from scratch, from inanimate matter, but separately? And that the supposed ancestor, the first chordates, were also designed from scratch? And the first eukaryotes? And so on?
gpuccio
July 16, 2019 at 04:55 AM PST
PeterA: "Is it possible that such an explicit graphic illustration, which includes so many details, is a simplification of the real thing?" Of course it is. A gross simplification. many important details are missing. For example: Only two kinds of generic signals and receptors are shown. As we have seen, there are a lot of different specific receptors. The pathways that connect each specific type of receptor to IKK are not shown (they are shown as simple arrows). But they are very complex and specific. I have given some limited information in the OP and in the discussion. Only the canonical pathway is shown. Only the most common type of dimer is shown. Coactivators and interactions with other pathways are not shown or barely mentioned. Of course, lncRNAs are not shown. And so on. Of course, the figure is there just to give a first general idea of the system.gpuccio
July 16, 2019 at 04:29 AM PST
Upright BiPed: "Illuminating thread otherwise." Thank you! :)
gpuccio
July 16, 2019 at 04:16 AM PST
GP,

Fascinating topic and interesting discussion, though sometimes unnecessarily personal. Scientific discussions should remain calm, focused on details, unbiased. In the end we want to understand more. Undoubtedly biology today is not easy to understand well in all details, and it doesn't look like it could get easier anytime soon.

Someone asked: "What evidence do we have of a designer directly intervening into biology?" Could the answer include the following issues? OOL, prokaryotes, eukaryotes, and, according to Dr. Behe (who said that at one point he would have pointed to the class level, but now would place it at least at the family level), the point where the Darwinian paradigm lacks explanatory power for the physiological differences between cats and dogs allegedly proceeding from a common ancestor. You have pointed to the intentional insertion of transposable elements into the genetic code as another empirical evidence. I think you've also mentioned the splicing mechanisms. Perhaps any of the complex functional mechanisms that appeared at some point could be attributed to conscious intentional design?

Does CD stand for common design or common descent with designed modifications? Does "common" relate to the observed similarities? For example, in the case of cats and dogs, does "common" relate to their observed anatomical and/or physiological similarities, which were mostly designed too?

pw
July 16, 2019 at 03:30 AM PST
.
Luckily, some friends are ready to be fiercely antagonistic!
Yes, I see that. Illuminating thread otherwise.
Upright BiPed
July 15, 2019 at 08:41 PM PST
GP,

The first graphic illustration shows the mechanism of NF-kB action, which you associated with the canonical activation pathway "summarized" in figure 1. Figure 1, even without breaking it down into more detail, could qualify as a complex mechanism.

Is it possible that such an explicit graphic illustration, which includes so many details, is a simplification of the real thing? Aren't all the control procedures associated with this mechanism shown in the figure? Are any important details missing, or just irrelevant details? Well, you answered those questions when you elaborated on those details in the OP.

In this particular example, we first see the "signals" shown in figure 1 under the OP section "The stimuli". Thus, what in figure 1 appears as a few colored objects and arrows is described in more detail, showing the tremendous complexity of each step of the graphic, especially the receptors in the cell membrane. Can the same be said about every step within the figure?

PeterA
July 15, 2019 at 04:11 PM PST
Silver Asiatic:

You also say: "What evidence do we have of a designer directly intervening into biology?"

That's rather simple: the many well-known examples of the sudden appearance in natural history of new biological objects full of tons of new complex functional information, information that did not exist at all before. For example, I have analyzed quantitatively the transition to vertebrates, which happened more than 400 million years ago, in a time window of probably 20 million years, and which involved the appearance, for the first time in natural history, of about 1.7 million bits of new functional information. Information that, after that time, has been conserved up to now. This is evidence of a design intervention, specifically localized in time. Of course, there is instead no evidence at all that the organisms that existed before included any complex algorithm capable of computing those 1.7 million bits of functional information.

You say: "Well, I think we could try to infer more than that – or not? Is the designer a biological organism? Or did the designer exist before life on earth existed? Is the designer a physical entity? What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth? How complex is the designer? More complex than the algorithms you mentioned? Does the designer move from one cell present in Southern California, USA and then travel to intervene in another cell in Tanzania? Or does the designer do such interventions simultaneously? In either answer, are there some facts that show what the designer does in these cases? If simultaneously, how big is the designer and what mechanisms are used to intervene simultaneously into billions of cells at various points of time? Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned? Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?"

These are good questions. To many of them, we cannot at present give answers. But not to all.

"Is the designer a biological organism? Is the designer a physical entity?"

I will answer these two together. While we cannot say who or what the designer (or designers) is, I find it very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my "confutation" of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.

"Did the designer exist before life on earth existed?"

This is easy. A designer was certainly responsible for OOL on our planet. OOL is of course one of the events that scream design with the highest strength. So the answer is yes: the designer, or at least the designer who designed life on our planet, certainly existed before.

"What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth?"

Well, we humans, as conscious beings, are entities capable of intervening inside the functions of most cells in our brain or nervous system, and many in our bodies. That's how our consciousness is interfaced to our body. Why shouldn't some other conscious entity be able to do something similar with biological organisms? And again, there is no need for the interface to reach all cells of all organisms. The strict requirement is only for those organisms where the design takes place.

"How complex is the designer?"

We don't know. How complex is our consciousness, if separated from our body? We don't know how complex non-physical entities need to be. Maybe the designer is very simple. Or not. This answer is valid for many other questions: we don't understand, at present, how consciousness can work outside of a physical body. Maybe we will understand more in the future.

"Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned?"

I don't know when or how the designer decides things. But I know when he does things. For example, he introduced the functional information for vertebrates, all those 1.7 million bits, into some pre-existing organisms (probably the first chordates), approximately in those 20 million years when vertebrates appeared on earth.

"Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?"

Most likely he uses tools. Of course the designer's consciousness needs to interface with matter, otherwise no design could be possible. That is exactly what we do when our consciousness interfaces with our brain. So, no big problem here. The interface is probably at the quantum level, as it probably is in our brains. There are many events in cells that could be more easily tweaked at the quantum level in a consciousness-related way. Penrose believes that a strict relationship exists in our brain between consciousness and microtubules in neurons. Maybe. I think, as I have said many times, that the most likely tool of design that we can identify at present is transposons. The insertions of transposons, usually random (see my previous posts), could easily be tweaked at the quantum level by some conscious intervention. And there is some good evidence that transposons are involved in the generation of new functional genes, even in primates.

That's the best I can do to answer your questions. Just a note: my answers here are highly tentative, but they are strictly empirical. They do not derive from any theological agenda. These are the ideas that, in my opinion, are more reasonable to explain the facts as we know them. Right or wrong as they may be, this is the spirit in which I express them.

gpuccio
July 15, 2019 at 03:13 PM PST
Silver Asiatic:

It's not really a question of knowing what is in the mind of the designer. The problem is: what is in material objects?

Let's go back to ATP synthase. Please, read my comment #74. So, I think we can agree that any algorithm that could compute the sequences for ATP synthase would be, by far, more complex than ATP synthase itself. So, let's say, just for a moment, that the designer does not design ATP synthase directly. Let's say that the designer designs the algorithm. After all, he is clever enough. So, he designs the algorithm. But, of course, he must implement it in a material object. A material object that can do the computations and then build the computed outcome (IOWs, ATP synthase).

OK, so my simple question is: where is, or was, that object? The computing object? I am aware of nothing like that in the known universe. Maybe it existed 4 billion years ago, and now it is lost? Well, everything is possible, but what facts support such an idea? None at all. Have we any traces of that algorithm, any indications of how it worked? Have we any idea of the object where it was implemented? It seems reasonable that it was some biological object, probably an organism. So, what are we hypothesizing? That 4 billion years ago the designer designed and implemented some extremely complex organism capable of computing ATP synthase, only to compute ATP synthase for bacteria, and that such a complex organism then disappeared without leaving any trace of itself? What's the sense of such a scenario? What scientific value has it? The answer is simple: none. Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.

And there is more: such a complex algorithm, made to compute ATP synthase, could certainly not compute another, completely different, protein system, like for example the spliceosome. Because that's another function, another plan. A completely different computation would be needed, a different purpose, a different context. So, what do we believe? That the designer designed, later, another complex organism with another complex algorithm to compute and realize the spliceosome? And the immune system? And our brain? Or that, in the beginning, there was one organism so complex that it could compute the sequences of all future necessary proteins, protein systems, lncRNAs, and so on? A monster of which no trace has remained?

OK, I hope that's enough.

gpuccio
July 15, 2019 at 02:41 PM PST
Gp states, "That would still leave 99.7% of all mutations that could be random. Indeed, they are random." LOL, just can't accept the obvious, can he? Bigger men than you have gone to their deaths defending their false theories, Gp. :)
“It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant non-random patterns” James Shapiro – Evolution: A View From The 21st Century – (Page 82)
To presuppose that the intricate molecular machinery in the cell is just willy-nilly moving stuff around on the genome is absurd on its face. And yet that is ultimately what Gp is trying to argue for. Of note: it is not on me to prove a system is completely deterministic in order to falsify Gp's model. I only have to prove that it is not completely random in order to falsify his model. And that threshold has been met. Perhaps Gp would also now like to keep defending the notion that most (90%+) of the genome is junk?
bornagain77
July 15, 2019 at 12:19 PM PST
GP, good answer, thank you.

"But maybe the software makes computations whose results were not previously known to the designer. That does not change anything. The computation process has been designed anyway. And computations are algorithmic; they do not increase the Kolmogorov complexity of the system. And that complexity is the functional complexity."

Yes, but I think this answers your question about a Designer who created algorithms. Software can be programmed to create information that was not known to the designer. That information actually causes other things to happen. I would think that it fits the definition of complex, specified, functional information. We observe the software creating that information, and rightly infer that the information network (process) was designed. But do we, or can we, know that the designer was unaware of what the software produced? I don't think so. We do not have access to the designer's mind. We only see the software and what it produces. We know it is the product of design. But we do not know if the functional information was designed for any specific instance, or if it is the output of a previous design farther back, invisible to us.

This, I think, is the case in biology. I believe you are saying that the design occurs at various discrete moments where a designer intervenes, and not that the design occurred at some distant time in the past and is merely being worked out by "software". What we observe shows functional information, but this information may either be created directly by the designer at the moment, or it may be an output of a designed system. I do not see how we could distinguish between the two options. With software, we can observe the inputs and calculations, and we can determine that the software created something "new". It is all the output of design, but we can trace what the software is doing and therefore infer where the "design implementation" took place.

It's that term that is the issue here, really. It is "design implementation". Where and when was the design (in the mind of the designer) put into biology? I do not believe that is a question that ID proposes an answer for, and I also do not believe it is a scientific question.

Silver Asiatic
July 15, 2019 at 11:07 AM PST
Silver Asiatic: "I don’t quite follow that. We create software that evaluates data and then produces functional information (visualizations). So, the design of that software happened when the visualization occurred? I think we normally say that the design occurred first in the mind of the software designer – it’s an idea (Mozart wrote symphonies entirely in his mind before putting on paper). Then, the designer creates algorithms that produce functional information. But the software is not the designer. It is the output of a designed process." I perfectly agree. The designed object here is the software. The design happens when the designer writes the software, from his mind. I see your problem. let's be clear. The software never designs anything, because it is not conscious. Design, by definition, is the output of form from consiousness to a materila object. But you seem to believe that the siftware creates new functional information. Well, it does in a measure, but it is not new complex functional information. this is a point that is often misunderstood. Let's say that the software produces visualizations exactly as programmed to do. In that case, it is easy. All the functional information that we get has been designed when the software was designed. But maybe the software makes computation whose result was not previously known to the designer. that deos not change anything, The computation process has been designed anyway. And computations are algorithmic, they do not increase the Kolmogorov complexity of the system. And that complexity is the functional complexity. Finally, maybe the software uses new information from the environment. In that case, there will be some increse in functional information, but it will be very low, if the environment does not contain complex functional information. IOWs, the environment cannot teach a system how to build ATP synthase, except when the sequence of ATP syntghase (or, for that, of a Shakespeare sonnet in the case of language) is provided externally to the system. Now I must go. More in next post.gpuccio
July 15, 2019 at 10:53 AM PST
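The point in the comment above, that running an algorithm does not increase the Kolmogorov complexity of a system, corresponds to a standard inequality from algorithmic information theory. Here is a minimal sketch of that inequality, added purely as an illustration (the notation K(.) for Kolmogorov complexity is not part of the comment itself):

```latex
% For any computable function f and any input x, the Kolmogorov
% complexity of the output exceeds that of the input by at most a
% constant that depends only on f (roughly, the length of a shortest
% program computing f):
\[
  K\bigl(f(x)\bigr) \;\le\; K(x) + K(f) + O(1)
\]
% So a fixed algorithm, however long it runs, adds at most a bounded
% amount of complexity to its input: it cannot be the source of an
% unbounded amount of new complex information.
```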
To all:

OK, now let's talk briefly of transposons. It's really strange that transposons have been mentioned here as a confutation of my ideas. But life is strange, as we all know. The simple fact is: I have been arguing here for years that transposons are probably the most important tool of intelligent design in biology. I remember that an interlocutor, some time ago, even accused me of inventing the "God of transposons".

The simple fact is: there are many facts that do suggest that transposon activity is responsible for generating new functional genes, new functional proteins. And I think that the best interpretation is that transposon activity can be intelligently directed, in some cases. IOWs, if biological design is, at least in part, implemented by guided mutations, those guided mutations are probably the result of guided transposon activity. We have no certainty of that, but it is a very reasonable scenario, according to known facts.

OK, but let's put that into perspective, especially in relation to the confused and confounding statements that have been made or reported here about "random mutations". I will refer to the following interesting article:

The Impact of Transposable Elements in Genome Evolution and Genetic Instability and Their Implications in Various Diseases
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4196381/

So, the first question that we need to answer is:

a) How frequent are transposon-dependent mutations in relation to all other mutations?

There is an answer to that in the paper:
Recent studies have revealed the implications of TEs in genomic instability and human genome evolution [44]. Mutations associated with TE insertions are well studied, and approximately 0.3% of all mutations are caused by retrotransposon insertions [27].
0.3% of all mutations. So, let's admit for a moment that transposon-derived mutations are not random, as has been suggested in this thread. That would still leave 99.7% of all mutations that could be random. Indeed, they are random. But let's go on. I have already stated that I believe that transposons are an important tool of design. Therefore, at least some transposon activity must be intelligently guided. But does that mean that all transposon activity is guided? Of course not. I do believe that most transposon activity is random, and is not guided. Let's read again from the paper:
Such insertions can be deleterious by disrupting the regulatory sequences of a gene. When a TE inserts within an exon, it may change the ORF, such that it codes for an aberrant peptide, or it may even cause missense or nonsense mutations. On the other hand, if it is inserted into an intronic region, it may cause an alternative splicing event by introducing novel splice sites, disrupting the canonical splice site, or introducing a polyadenylation signal [8, 9, 10, 11, 42, 43]. In some instances, TE insertion into intronic regions can cause mRNA destabilization, thereby reducing gene expression [45]. Similarly, some studies have suggested that TE insertion into the 5' or 3' region of a gene may alter its expression [46, 47, 48]. Thus, such a change in gene expression may, in turn, change the equilibrium of regulatory networks and result in disease conditions (reviewed in Konkel and Batzer [43]). The currently active non-LTR transposons, L1, SVA, and Alu, are reported to be the causative factors of many genetic disorders, such as hemophilia, Apert syndrome, familial hypercholesterolemia, and colon and breast cancer (Table 1) [8, 10, 11, 27]. Among the reported TE-mediated genetic disorders, X-linked diseases are more abundant than autosomal diseases [11, 27, 45], most of which are caused by L1 insertions. However, the phenomenon behind L1 and X-linked genetic disorders has not yet been revealed. The breast cancer 2 (BRCA2) gene, associated with breast and ovarian cancers, has been reported to be disrupted by multiple non-LTR TE insertions [9, 18, 49]. There are some reports that the same location of a gene may undergo multiple insertions (e.g., Alu and L1 insertions in the adenomatous polyposis coli gene) (Table 1).
And so on. Have we any reason to believe that that kind of transposon activity is guided? Not at all. It just behaves like all other random mutations, which are often the cause of genetic diseases. Moreover, we know that deleterious mutations are only a fraction of all mutations. Most mutations, indeed, are neutral or quasi-neutral. Therefore, it is absolutely reasonable that most transposon-induced mutations are neutral too.

And the design? The important point, which can be connected to Abel's important ideas, is that functional design happens when an intelligent agent acts to give a functional (and absolutely unlikely) form to a number of "configurable switches". Now, the key idea here is that the switches must be configurable. IOWs, if they are not set by the designer, their individual configuration is in some measure indifferent, and the global configuration can therefore be described as random. The important point here is that functional sequences are more similar to random sequences than to ordered sequences. Ordered sequences cannot convey the functional information for a complex function, because they are constrained by their order. Functional sequences, instead, are pseudo-random (not completely, of course: some order can be detected, as we know well). That relative freedom of variation is a very good foundation for using them in a designed way.

So, the idea is: transposon activity is probably random in most cases. In some cases, it is guided, probably through some quantum interface. That's also the reason why a quantum interface is usually considered (by me too) as the best interface between mind and matter: because quantum phenomena are, at one level, probabilistic, random, and that's exactly the reason why they can be used to implement free intelligent choices.

To conclude, I will repeat, for the nth time, that a system is a random system when we cannot describe it deterministically, but we can provide some relatively efficient and useful description of it using a probability distribution. There is no such thing as "complete randomness". If we use a probability distribution to describe a system, we are treating that system as a random system. Randomness is not an intrinsic property of events (except maybe at the quantum level). A random system, like the tossing of a coin, is completely deterministic in essence. But we are not able to describe it deterministically. In the same way, random systems that do not follow a uniform distribution are random just the same. A loaded die is as random as a fair die. But, if the loading is so extreme that only one event can take place, that becomes a necessity system, which can very well be described deterministically.

In the same way, there is nothing strange in the fact that some factors, acting as necessity causes, can modify a probability distribution. As a random system is in reality deterministic in essence, of course, if one of the variables acting in it is strong enough to be detected, that variable will modify the probability distribution in a detectable way. There is nothing strange in that. The system is still random (we use a probability distribution to describe it), but we can detect one specific variable that modifies the probability distribution (what has been called here, not so precisely IMO, a bias). That's the case, for example, of radiation increasing the rate and modifying the type of random mutations, as in the great increase of leukemia cases at Hiroshima after the bomb.
That has always been well known, even if some people seem to discover it only now. In all those cases, we are still dealing with random systems: systems where each single event cannot be anticipated, but where a probability distribution can rather efficiently describe the system. Mutations are a random system, except maybe for the rare cases of guided mutations in the course of biological design.

Finally, let me say that, of all the things of which I have been accused, "assuming Methodological Naturalism as a starting assumption" is probably the funniest. Next time, they will probably accuse me of being a convinced compatibilist! :) Life is strange.

gpuccio
July 15, 2019 at 10:42 AM PST
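The fair die / loaded die point in the comment above can be made concrete with a minimal Python sketch, added here as an illustration (the distributions and sample size are arbitrary choices, not from the comment): both dice are "random systems" in exactly the sense described, because each is efficiently described by a probability distribution; only the distribution differs.

```python
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible

# Two dice, each described by a probability distribution over its faces.
fair = {face: 1 / 6 for face in range(1, 7)}
loaded = {1: 0.02, 2: 0.02, 3: 0.02, 4: 0.02, 5: 0.02, 6: 0.90}

def sample_rolls(dist, n):
    """Draw n rolls from a die described by the distribution dist."""
    faces = list(dist)
    weights = [dist[face] for face in faces]
    return rng.choices(faces, weights=weights, k=n)

# No single roll of either die can be predicted, yet each system is
# efficiently described by its distribution: that is the sense of
# "random system" used above. The loaded die is just as random as
# the fair one; only its distribution is different.
for name, dist in (("fair die", fair), ("loaded die", loaded)):
    rolls = sample_rolls(dist, 10_000)
    observed = {face: round(rolls.count(face) / len(rolls), 3)
                for face in sorted(dist)}
    print(name, observed)
```

Running the sketch shows the observed frequencies converging on each die's distribution, which is exactly the "relatively efficient and useful description" the comment speaks of.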
GP
design happens when the functional information is inputted into the material object we observe
I don't quite follow that. We create software that evaluates data and then produces functional information (visualizations). So, the design of that software happened when the visualization occurred? I think we normally say that the design occurred first in the mind of the software designer - it's an idea (Mozart wrote symphonies entirely in his mind before putting them on paper). Then, the designer creates algorithms that produce functional information. But the software is not the designer. It is the output of a designed process.
what facts support the existence of such an independent physical algorithm in physical reality?
Again, with Mozart. The orchestra plays the symphony. Does this mean that the symphony could only be created as an independent physical text in physical reality? The facts say no - he had it in his mind. I believe you are saying that a Designer enters into the world at various specific points of time, and intervenes in the life of organisms and creates mutations or functions at those moments. What facts support the existence of those interventions in time, versus the idea that the organism was designed with the capability and plan for various changes from the beginning of the universe? What evidence do we have of a designer directly intervening into biology?
I only know that [the designer] designs biological things, and must be conscious, intelligent and purposeful.
Well, I think we could try to infer more than that - or not? Is the designer a biological organism? Or did the designer exist before life on earth existed? Is the designer a physical entity? What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth? How complex is the designer? More complex than the algorithms you mentioned? Does the designer move from one cell present in Southern California, USA and then travel to intervene in another cell in Tanzania? Or does the designer do such interventions simultaneously? In either answer, are there some facts that show what the designer does in these cases? If simultaneously, how big is the designer and what mechanisms are used to intervene simultaneously into billions of cells at various points of time? Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned? Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?
Silver Asiatic
July 15, 2019 at 10:30 AM PST
Bill Cole: Great to hear from you! :) And let's not forget lncRNAs (see comments #52, #67 and #68 here).
gpuccio
July 15, 2019 at 09:59 AM PST
John_a_designer:

Thank you for your very thoughtful comment. Yes, in this OP and in others I have dealt mainly with eukaryotes. But of course you are right, prokaryotes are equally fascinating, maybe only a little bit simpler, and, as you say: "If you can't explain the natural selection + random variation evolution in prokaryotes it's game over for Neo-Darwinism. There has to be another explanation". And game over it is, because the functional complexity in prokaryotes is already overwhelming, and can never be explained by RV + NS. It's no accident that the example I use probably most frequently is ATP synthase. And that is a bacterial protein.

You describe very correctly the transcription system in prokaryotes. It's certainly much simpler than in eukaryotes, but still its complexity is mind-boggling. I think the system of TFs is essentially eukaryotic, but of course a strict regulation is present in prokaryotes too. You mention sigma factors and rho, of course, and there is the system of activators and repressors. But there are big differences, starting from the very different organization of the bacterial chromosome (histone-independent supercoiling, and so on).

Sigma factors are in some way the equivalent of generic TFs. According to Wikipedia, a sigma factor "is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB". Maybe. I have blasted sigma 70 from E. coli against human TFIIB and found no detectable homology (E value 1.4). So, there seems to be little conservation here. I have also blasted the same E. coli sigma 70 against all bacteria, excluding proteobacteria (the phylum of E. coli). I would say that there is good conservation in the different types of bacteria: up to 1251 bits in firmicutes, 786 bits in actinobacteria, 533 bits in cyanobacteria, and so on. So, this molecule seems to be rather conserved in bacteria.

I think that eukaryogenesis is one of the most astounding designed jumps in natural history. I do accept that mitochondria and plastids are derived from bacteria, and that some important eukaryotic features are mainly derived from archaea, but even those partial derivations require tons of designed adjustments. And that is only the tip of the iceberg. Most eukaryotic features (the nuclear membrane and nuclear pore, chromatin organization, the system of TFs, the spliceosome, the ubiquitin system, and so on) are essentially eukaryotic, even if of course some vague precursor can be detected, in many cases, in prokaryotes. And each of these systems is a marvel of original design.

gpuccio
July 15, 2019 at 09:56 AM PST
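For readers who want to try the kind of pairwise comparison described in the comment above, here is a minimal sketch using the NCBI BLAST+ command-line tool blastp, driven from Python. The FASTA file names are hypothetical placeholders, and this is only one plausible way to run such a search (the comment's searches were presumably run at NCBI's website); the E-value and bit scores quoted above come from the commenter's own searches, not from this sketch.

```python
import subprocess

# Hypothetical input files, one protein sequence per FASTA file:
QUERY = "ecoli_sigma70.fasta"   # E. coli sigma 70
SUBJECT = "human_tfiib.fasta"   # human TFIIB

# Pairwise protein BLAST: with -subject, blastp compares the query
# against a single sequence instead of a formatted database.
# Requires the NCBI BLAST+ tools to be installed and on PATH.
result = subprocess.run(
    ["blastp",
     "-query", QUERY,
     "-subject", SUBJECT,
     "-outfmt", "6 qseqid sseqid bitscore evalue pident"],
    capture_output=True, text=True, check=True,
)

# Tabular output, one line per local alignment; no lines, or only
# alignments with high E-values (like the ~1.4 reported above),
# means no detectable homology between the two proteins.
print(result.stdout or "no alignments found")
```

The same pattern, with SUBJECT replaced by a `-db` argument naming a formatted database, would cover the "sigma 70 against all bacteria" style of search.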
Hi gpuccio,
Thanks for the interesting post. From my study, cell control comes from the availability of transcription-acting molecules in the nucleus. They can be either proteins or small molecules that are not transcribed but obtained from other sources, like enzyme chains. Testosterone and estrogen are examples of non-transcribed small molecules. How this is all coordinated so that a living organism can reliably operate is fascinating, and I am thrilled to see you start this discussion. Great to have you back :-)
bill cole
July 15, 2019 at 09:07 AM PST
Once again (along with others) thank you for a very interesting and evocative OP. On the other hand, as a mild criticism: I am just an uneducated layman when it comes to biochemistry, so I am continuously trying to get up to speed on the topic. I think I get the gist of what you are saying, but I imagine someone stumbling onto this site for the first time is going to find this topic way over their head. Maybe something of a basic summary which briefly explains transcription, the role of RNA polymerase and the difference between prokaryotic and eukaryotic transcription would be helpful (or a link to such a summary if you've done that somewhere else).

As for myself, I think I get the gist of what you are saying, but I am a little confused by the differences between prokaryotic and eukaryotic transcription. (Most of my study and research has been centered on the prokaryote. If you can't explain the natural selection + random variation evolution in prokaryotes, it's game over for Neo-Darwinism. There has to be another explanation.) For example, one question I have is: are there transcription factors for prokaryotes? According to Google, no:
Eukaryotes have three types of RNA polymerases, I, II, and III, and prokaryotes only have one type. Eukaryotes form an initiation complex with the various transcription factors that dissociate after initiation is completed. There is no such structure seen in prokaryotes.
Is that true? What about the sigma factor, which initiates transcription in prokaryotes, and the rho factor, which terminates it? Isn't that essentially what transcription factors, which come in two forms, activators and repressors, do in eukaryotic transcription? Are sigma factors and rho factors the same in all prokaryotes, or is there a species difference? As far as termination in eukaryotes, one educational video I ran across recently (dated 2013) said that it is still unclear how termination occurs in eukaryotes. Is that true? In prokaryotes there are two ways transcription is terminated: Rho-dependent, where the Rho factor is utilized, and Rho-independent, where it isn't. Do we know any more six years later? Hopefully answering those kinds of questions can help me and others. (Of course, they're going to have to do some homework on their own.)
john_a_designer
July 15, 2019 at 08:34 AM PST