
Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation.


I have recently commented on another thread:

about a paper that (very correctly) describes cells as dynamic, far from equilibrium systems, rather than as “traditional” machines.

That is true. But, of course, the cell implements the same functions as complex machines do, and much more. My simple point is that, to do that, you need much greater functional complexity than you need to realize a conventional machine.

IOWs, dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. I have commented on those ideas in the mentioned thread, at posts #5 and #8, and I have quoted at posts #11 and #12 a couple of interesting and pertinent papers, introducing the important concept of robustness: the ability to achieve reliable functional results in spite of random noise and disturbing variation.

In this OP, I would like to present in some detail a very interesting system that shows very well what we can understand, at present, of this kind of amazing system.

The system I will discuss here is an old friend: it is the NF-kB system of transcription factors (nuclear factor kappa-light-chain-enhancer of activated B cells). We are speaking, therefore, of transcription regulation, a very complex topic that I have already discussed in some depth here:

I will briefly recall here that transcription regulation is the very complex process that allows cells to be completely different while using the same genomic information: IOWs, each type of cell “reads” the genes in the common genome differently, and that allows the different types of cell differentiation and the different cell responses in the same cell type.

Transcription regulation relies on many different levels of control, which are summarized in the above quoted OP, but a key role is certainly played by Transcription Factors (TFs), proteins that bind DNA and act as activators or inhibitors of transcription at specific sites.

TFs are a fascinating class of proteins. There are a lot of them (1600 – 2000 in humans, almost 10% of all proteins), and they are usually medium sized proteins, about 500 AA long, containing at least one highly conserved domain, the DNA binding domain (DBD), and other, often less understood, functional components.

I quote again here a recent review about human TFs:

The Human Transcription Factors

The NF-kB system is a system of TFs. I have discussed it in some detail in the discussion following the Ubiquitin thread, but I will describe it in a more systematic way here.

In general, I will refer a lot to this very recent paper about it:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The NF-kB system relies essentially on 5 different TFs (see Fig. 1 A in the paper):

  1. RelA  (551 AAs)
  2. RelB  (579 AAs)
  3. c-Rel  (619 AAs)
  4. p105/p50 (968 AAs)
  5. p100/p52  (900 AAs)

Those 5 TFs work by forming dimers, homodimers or heterodimers, for a total of 15 possible combinations, all of which have been found to work in the cell, even if some of them are much more common.

Then there are at least 4 inhibitor proteins, collectively called IkBs.

The mechanism is apparently simple enough. The dimers are inhibited by IkBs and therefore they remain in the cytoplasm in inactive form.

When an appropriate signal arrives at the cell and is received by a membrane receptor, the inhibitor (the IkB molecule) is phosphorylated and then ubiquitinated and detached from the complex. This is done by a protein complex called IKK. The free dimer can then migrate to the nucleus and localize there, where it can act as a TF, binding DNA.

This is the canonical activation pathway, summarized in Fig. 1. There is also a non canonical activation pathway, that we will not discuss for the moment.


Mechanism of NF-κB action. In this figure, the NF-κB heterodimer consisting of Rel and p50 proteins is used as an example. While in an inactivated state, NF-κB is located in the cytosol complexed with the inhibitory protein IκBα. Through the intermediacy of integral membrane receptors, a variety of extracellular signals can activate the enzyme IκB kinase (IKK). IKK, in turn, phosphorylates the IκBα protein, which results in ubiquitination, dissociation of IκBα from NF-κB, and eventual degradation of IκBα by the proteasome. The activated NF-κB is then translocated into the nucleus where it binds to specific sequences of DNA called response elements (RE). The DNA/NF-κB complex then recruits other proteins such as coactivators and RNA polymerase, which transcribe downstream DNA into mRNA. In turn, mRNA is translated into protein, resulting in a change of cell function.

Attribution: Boghog2 at English Wikipedia [Public domain]

Now, the purpose of this OP is to show, in greater detail, how this mechanism, apparently moderately simple, is indeed extremely complex and dynamic. Let’s see.

The stimuli.

First of all, we must understand what the stimuli are that, arriving at the cell membrane, are capable of activating the NF-kB system. IOWs, what the signals are that work as inputs.

The main concept is: the NF-kB system is a central pathway activated by many stimuli:

  1. Inflammation
  2. Stress
  3. Free radicals
  4. Infections
  5. Radiation
  6. Immune stimulation

IOWs, a wide variety of aggressive stimuli can activate the system.

The extracellular signal usually arrives at the cell through specific cytokines, for example TNF or IL1, or through pathogen-associated molecules, like bacterial lipopolysaccharides (LPS). Of course there are different and specific membrane receptors, in particular IL-1R (for IL1), TNF-R (for TNF), and many TLRs (Toll-like receptors, for pathogen-associated structures). A special kind of activation is implemented, in B and T lymphocytes, by the immune activation of the specific receptors for antigen epitopes (B cell receptor, BCR, and T cell receptor, TCR).

The process through which the activated receptor can activate the NF-kB dimer is rather complex: it involves, in the canonical pathway, a macromolecular complex called IKK (IkB kinase) complex, comprising two catalytic kinase subunits (IKKa and IKKb) and a regulatory protein (IKKg/NEMO), and involving in multiple and complex ways the ubiquitin system. The non canonical pathway is a variation of that. Finally, a specific protein complex (CBM complex or CBM signalosome) mediates the transmission from the immune BCR or TCR to the canonical pathway. See Fig. 2:

From: NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Increased-activity-of-the-CARMA1-BCL10-MALT1-signalosome-drives-constitutive-NF-kB_fig2_324089636 [accessed 10 Jul, 2019]
Figure 3 – NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy
available via license: Creative Commons Attribution 4.0 International

I will not go into further details about this part, but those interested can have a look at this very good paper:

TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

In particular, Figs. 1, 2, and 3.

In the end, as a result of the activation process, the IkB inhibitor is degraded by the ubiquitin system, and the NF-kB dimer is free to migrate to the nucleus.

An important concept is that this is a “rapid-acting” response system, because the dimers are already present, in inactive form, in the cytoplasm, and must not be synthesized de novo: so the system is ready to respond to the activating signal.
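To make the flow of the canonical pathway easier to follow, here is a minimal sketch in Python (my own schematic illustration, not a published model): the protein names are those described above, but the states and steps are deliberately simplified.

```python
# A deliberately simplified schematic of the canonical activation sequence
# described above; an illustration only, not a quantitative model.
from dataclasses import dataclass

@dataclass
class NfkbComplex:
    dimer: str = "RelA:p50"          # one of the 15 possible dimers
    ikb_bound: bool = True           # IkB attached: complex held inactive in the cytoplasm
    ikb_phosphorylated: bool = False
    location: str = "cytoplasm"

def canonical_activation(c: NfkbComplex, stimulus: str) -> NfkbComplex:
    # 1. A stimulus (e.g. TNF, IL1, LPS) engages its membrane receptor and,
    #    through the adaptor cascade, activates the IKK complex.
    # 2. IKK phosphorylates IkB...
    c.ikb_phosphorylated = True
    # 3. ...which is then ubiquitinated and degraded by the proteasome,
    #    releasing the dimer.
    c.ikb_bound = False
    # 4. The free dimer relocates to the nucleus, ready to bind kB sites.
    c.location = "nucleus"
    return c

print(canonical_activation(NfkbComplex(), stimulus="TNF"))
```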

The response.

But what is the cellular response?

Again, there are multiple and complex possible responses.

Essentially, this system is a major regulator of innate and adaptive immune responses. As such, it has a central role in the regulation of inflammation, in immunity, in autoimmune processes, and in cancer.

Moreover, the NF-kB system is rather ubiquitous, and is present and active in many different cell types. And, as we have seen, it can be activated by different stimuli, in different ways.

So, the important point is that the response to activation must be (at least):

  1. Lineage-specific
  2. Stimulus-specific

IOWs, different cells must be able to respond differently, and each cell type must respond differently to different stimuli. That gives a wide range of possible gene expression patterns at the transcription level.

The following paper is a good review of the topic:

Selectivity of the NF-κB Response

For example, IL2 is induced by NF-kB activation in T cells, but not in B cells (lineage-specific response). Moreover, specific cell types can undergo specific, and often different, cell destinies after NF-kB activation: for example, NF-kB is strongly involved in the control and regulation of T and B cell development.

From:

30 years of NF-κB: a blossoming of relevance to human pathobiology

“B and T lymphocytes induce NF-κB in adaptive immune responses through the CARD11:Bcl10:MALT1 (CBM) complex (Hayden and Ghosh, 2008). Newly expressed genes promote lymphocyte proliferation and specific immune functions including antibody production by B cells and the generation of cytokines and other anti-pathogen responses by T cells.”

And, in the same cell type, certain promoters regulated by NF-kB require additional signaling (for example, in human dendritic cells the promoters for Il6, Il12b, and MCP-1 require additional p38-mediated histone phosphorylation to be activated), while others can be activated directly (stimulus-specific response).

So, to sum up:

  1. A variety of stimuli can activate the system in different ways
  2. The system itself has its complexity (different dimers)
  3. The response can be widely different, according to the cell type where it happens, and to the type of stimuli that have activated the system, and probably according to other complex variables.
  4. The possible responses include a wide range of regulations of inflammation, of the immune system, of cell specifications or modifications, and so on.

How does it work?

So, what do we know about the working of such a system?

I will ignore, for the moment, the many complexities of the activation pathways, both canonical and non canonical, the role of cytokines and receptors and IKK complexes, the many facets of NEMO and of the involvement of the ubiquitin system.

For simplicity, we will start with the activated system: the IkB inhibitor has been released from the inactive complex in the cytoplasm, and some form of NF-kB dimer is ready to migrate to the nucleus.

Let’s remember that the purpose of this OP is to show that the system works as a dynamic, far from equilibrium system, rather than as a “traditional” machine. And that such a way to work is an even more amazing example of design and functional complexity.

To do that, I will rely mainly on the recent paper quoted at the beginning:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The paper is essentially about the NF-kB Target Selection Puzzle. IOWs, it tries to analyze what we know about the specificity of the response. How are specific patterns of transcription achieved after the activation of the system? What mechanisms allow the selection of the right genes to be transcribed (the targets) to implement the specific patterns according to cell type, context, and type of stimuli?

A “traditional” view of the system as a machine would try to establish rather fixed connections. For example, some type of dimer is connected to specific stimuli and evokes specific gene patterns. Or some other components modulate the effect of NF-kB, generating diversification and specificity of the response.

Well, those ideas are not completely wrong. In a sense, the system does work also that way. Dimer specificity has a role. Other components have a role. In a sense, but only in a sense, the system works as though it were a traditional machine, and uses some of the mechanisms that we find in the concept of a traditional biological machine.

But that is only a tiny part of the real thing.

The real thing is that the system really works as a dynamic, far from equilibrium system, harnessing huge random/stochastic components to achieve robustness, complexity, and flexibility of behavior in spite of all those non-finalistic parts.

Let’s see how that happens, at least for the limited understanding we have of it. It is important to consider that this is a system that has been studied a lot, for decades, because of its central role in so many physiological and pathological contexts, and so we know many things. But still, our understanding is very limited, as you will see.

So, let’s go back to the paper. I will try to summarize as simply as possible the main concepts. Anyone who is really interested can refer to the paper itself.

Essentially, the paper analyzes three important and different aspects that contribute to the selection of targets at the genomic level by our TFs (IOWs, our NF-kB dimers, ready to migrate to the nucleus). As the title itself summarizes, they are:

  1. Abundance
  2. Affinity
  3. Binding site availability

1. Abundance

Abundance here refers to two different variables: abundance of NF-kB Binding Sites in the genome and abundance of Nucleus-Localized NF-kB Dimers. Let’s consider them separately.

1a) Abundance of NF-kB Binding Sites in the genome:

It is well known that TFs bind specific sites in the genome. For NF-kB TFs, the following consensus kB site pattern has been found:

 5′-GGGRNWYYCC-3′

where R, W, Y, and N, respectively denote purine, adenine or thymine, pyrimidine, and any nucleotide.

That simply means that any sequence corresponding to that pattern in the genome can, in principle, bind NF-kB dimers.

So the problem is: how many such sequences exist in the human genome?

Well, a study based on RelA has estimated about 10^4 consensus sequences in the whole genome, but as NF-kB dimers seem to bind even incomplete consensus sites, the total number of potential binding sites could be nearer to 10^6.
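Just to make the idea of “counting consensus sites” concrete, here is a minimal sketch (my own illustration, using a toy sequence) that scans a DNA string for the degenerate pattern given above; a real estimate would of course run over the whole genome and both strands.

```python
import re

# IUPAC degenerate codes used in the kB consensus 5'-GGGRNWYYCC-3'
IUPAC = {"R": "[AG]", "W": "[AT]", "Y": "[CT]", "N": "[ACGT]"}

def motif_to_regex(motif: str) -> str:
    """Translate a degenerate motif into a plain regular expression."""
    return "".join(IUPAC.get(base, base) for base in motif)

KB_PATTERN = motif_to_regex("GGGRNWYYCC")

def count_kb_sites(sequence: str) -> int:
    """Count (possibly overlapping) matches of the consensus on one strand."""
    return sum(1 for _ in re.finditer(f"(?={KB_PATTERN})", sequence.upper()))

# Toy usage: two classical kB sites embedded in filler sequence.
print(count_kb_sites("ttGGGAAATTCCcgatGGGACTTTCCa"))  # -> 2
```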

1b) Abundance of Nucleus-Localized NF-kB Dimers:

An estimate of the abundance of dimers in the nucleus after activation of the system is that about 1.5 × 10^5 molecules can be found, but again that is derived from studies about RelA only. Moreover, the number of molecules and the type of dimer can probably vary greatly according to cell type.

So the crucial variable, the ratio between binding sites and available dimers, which could help understand the degree of site saturation in the nucleus, remains rather undetermined, and it seems very likely that it can vary a lot in different circumstances.
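A quick back-of-the-envelope calculation, using only the rough figures quoted above, shows why the saturation question stays open:

```python
# ~1.5e5 nucleus-localized dimers (RelA-based estimate) versus anywhere from
# ~1e4 strict consensus sites to ~1e6 sites once weak/incomplete sites count.
dimers = 1.5e5
for n_sites in (1e4, 1e6):
    print(f"{n_sites:.0e} sites -> {dimers / n_sites:.2f} dimers per site")
# About 15 dimers per strict consensus site, but only ~0.15 per site in the
# larger pool: the dimer/site ratio spans two orders of magnitude.
```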

But there is another very interesting aspect about the concentration of dimers in the nucleus. According to some studies, NF-kB seems to generate oscillations of its nuclear content in some cell types, and those oscillations can be a way to generate specific transcription patterns:

NF-kB oscillations translate into functionally related patterns of gene expression

For example, this very recent paper :

NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

shows in Fig. 3 the occupancy curve of binding sites at the nuclear level after NF-kB activation in two different cell types.

In fibroblasts, the curve is a periodic oscillation, with a frequency that varies according to various factors, and translates into different transcription scenarios accordingly:

Gene expression dynamics scale with the period (g1) and amplitude (g2) of these oscillations, which are influenced by variables such as signal strength, duration, and receptor identity.


In macrophages, instead, the curve is rather:

a single, strong nuclear translocation event which persists for as long as the stimulus remains and tends to remain above baseline for an extended period of time.

In this case, the type of transcription will probably be regulated by the area under the curve, rather than by the period and amplitude of the oscillations, as happens in fibroblasts.
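A minimal toy simulation (my own sketch, with made-up parameters, not taken from the papers) can illustrate the two readouts contrasted above: the period and amplitude of an oscillating nuclear signal versus the area under the curve of a single sustained translocation.

```python
import math

def fibroblast_like(t, period=100.0, decay=400.0):
    """Damped oscillation of nuclear NF-kB (arbitrary units, t in minutes)."""
    return math.exp(-t / decay) * 0.5 * (1.0 - math.cos(2.0 * math.pi * t / period))

def macrophage_like(t, plateau=1.0, rise=10.0):
    """Single sustained translocation persisting while the stimulus lasts."""
    return plateau * (1.0 - math.exp(-t / rise))

times = range(0, 600)  # ten hours, one-minute steps
# Readout 1: period/amplitude of the oscillating signal (set by construction here).
print("fibroblast-like: period = 100 min, peak amplitude =",
      round(max(fibroblast_like(t) for t in times), 2))
# Readout 2: area under the curve of the sustained signal.
print("macrophage-like: AUC =", round(sum(macrophage_like(t) for t in times), 1))
```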

Interestingly, while in previous studies it seemed that the concentration of nuclear dimers could be sufficient to saturate most or all binding sites, more recent studies have found that not to be the case. Again from the paper about abundance:

in fact, this lack of saturation of the system is necessary to generate stimulus- and cell-type specific gene expression profiles

Moreover, the binding itself seems to be rather short-lived:

Interestingly, it is now thought that most functional NF-kB interactions with chromatin—interactions that lead to a change in transcription—are fleeting… a subsequent study using FRAP in live cells expressing RelA-GFP showed that most RelA-DNA interactions are actually quite dynamic, with half-lives of a few seconds… Indeed, a recent study used single-molecule tracking of individual Halo-tagged RelA molecules in live cells to show that the majority (∼96%) of RelA undergoes short-lived interactions lasting on average ∼0.5 s, while just ∼4% of RelA molecules form more stable complexes with a lifetime of ∼4 s.
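As a small worked example of what those single-molecule numbers imply (using only the figures quoted above), the average residence time comes out well under a second:

```python
# 96% of RelA molecules dwell on DNA for ~0.5 s, 4% for ~4 s.
short_frac, short_dwell = 0.96, 0.5   # seconds
long_frac, long_dwell = 0.04, 4.0
mean_dwell = short_frac * short_dwell + long_frac * long_dwell
print(f"mean RelA-DNA residence time ~ {mean_dwell:.2f} s")  # ~0.64 s
```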

2. Affinity

Affinity of dimers for DNA sequences is not a clear cut matter. From the paper:

Biochemical DNA binding studies of a wide variety of 9–12 base-pair sequences have revealed that different NF-kB dimers bind far more sequences than previously thought, with different dimer species exhibiting specific but overlapping affinities for consensus and non-consensus kB site sequences.

IOWs, we have different dimers (15 different types) binding with varying affinity different DNA sequences (starting from the classical consensus sequence, but including also incomplete sequences). Remember that those sequences are rather short (the consensus sequence is 10 nucleotides long), and that there are thousands of such sequences in the genome.

Moreover, different bindings can affect transcription differently. Again, from the paper:

How might different consensus kB sites modulate the activity of the NF-kB dimers? Structure-function studies have shown that binding to different consensus kB sites can alter the conformation of the bound NF-kB dimers, thus dictating dimer function. When an NF-kB dimer interacts with a DNA sequence, side chains of the amino acids located in the DNA-binding domains of dimers contact the bases exposed in the groove of the DNA. For different consensus kB site sequences different bases are exposed in this groove, and NF-kB seems to alter its conformation to maximize interactions with the DNA and maintain high binding affinity. Changes in conformation may in turn impact NF-kB binding to co-regulators of transcription, whether these are activating or inhibitory, to specify the strength and dynamics of the transcriptional response. These findings again highlight how the huge array of kB binding site sequences must play a key role in modulating the transcription of target genes.

Quite a complex scenario, I would say!

But there is more:

Finally, as an additional layer of dimer and sequence-specific regulation, each of the subunits can be phosphorylated at multiple sites with, depending on the site, effects on nearly every step of NF-kB activation.

IOWs, the 15 dimers we have mentioned can be phosphorylated in many different ways, and that changes their binding affinities and their effects on transcription.

This section of the paper ends with a very interesting statement:

Overall, when considering the various ways in which NF-kB dimer abundances and their affinity for DNA can be modulated, it becomes clear that with these multiple cascading effects, small differences in consensus kB site sequences and small a priori differences in interaction affinities can ultimately have a large impact on the transcriptional response to NF-kB pathway activation.

Emphasis mine.

This is interesting, because in some way it seems to suggest that the whole system acts like a chaotic system, at least at some basic level. IOWs, small initial differences, maybe even random noise, can potentially have a deep effect on the general working of the whole system.

Unless, of course, there is some higher, powerful level of control.

3. Availability of high affinity kB binding sequences

We have seen that there is a great abundance and variety of binding sequences for NF-kB dimers in the human genome. But, of course, those sequences are not necessarily available. Different cell types will have a different scenario of binding site availability.

Why?

Because, as we know, the genome and chromatin are a very dynamic system, which can exist in many different states, continuously changing in different cell types and, in the same cell type, in different conditions.

We know rather well the many levels of control that affect DNA and chromatin state. In brief, they are essentially:

  1. DNA methylation
  2. Histone modifications (methylation, acetylation, etc)
  3. Chromatin modifications
  4. Higher levels of organization, including nuclear localization and TADs (Topologically Associating Domains)

For example, from the paper:

The promoter regions of early response genes have abundant histone acetylation or trimethylation prior to stimulation [e.g., H3K27ac, (67) and H4K20me3, (66)], a chromatin state “poised” for immediate activation…  In contrast, promoters of late genes often have hypo-acetylated histones, requiring conformational changes to the chromatin to become accessible. They are therefore unable to recruit NF-kB for up to several hours after stimulation (68), due to the slow process of chromatin remodeling.

We must remember that each wave of NF-kB activation translates into the modified transcription of a lot of different genes at the genome level. It is therefore extremely important to consider which genes are available (IOWs, their promoters can be reached by the NF-kB signal) in each cell type and cell state.

The paper concludes:

Taken together, chromatin state and chromatin organization strongly influence the selection of DNA binding sites by NF-kB dimers and, most likely, the selection of the target genes that are regulated by these protein-DNA interaction events. Analyses that consider binding events in the context of three-dimensional nuclear organization and chromatin composition will be required to generate a more accurate view of the ways in which NF-kB-DNA binding affects gene transcription.

This is the main scenario. But there are other components, that I have not considered in detail for the sake of brevity, for example competition between NF-kB dimers and the complex role and intervention of other co-regulators of transcription.
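To tie the three factors together, here is a deliberately naive sketch (my own illustration, with invented numbers and generic names, not the paper’s model) in which the relative occupancy of a site is treated as the product of dimer abundance, dimer-site affinity, and chromatin accessibility in a given cell type:

```python
# Relative occupancy ~ abundance x affinity x availability (toy numbers only).
dimer_abundance = {"RelA:p50": 1.0, "RelB:p52": 0.2}          # arbitrary units
site_affinity = {                                              # per site, per dimer
    "site_A": {"RelA:p50": 0.9, "RelB:p52": 0.3},
    "site_B": {"RelA:p50": 0.2, "RelB:p52": 0.8},
}
accessibility = {                                              # cell-type dependent
    "cell_type_1": {"site_A": 1.0, "site_B": 0.1},
    "cell_type_2": {"site_A": 0.3, "site_B": 1.0},
}

def occupancy(cell: str, site: str, dimer: str) -> float:
    return dimer_abundance[dimer] * site_affinity[site][dimer] * accessibility[cell][site]

for cell in accessibility:
    for site in site_affinity:
        for dimer in dimer_abundance:
            print(f"{cell}  {site}  {dimer}: {occupancy(cell, site, dimer):.2f}")
# The same dimers and the same sites give quite different occupancy patterns
# in the two cell types, simply because availability differs.
```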

Does the system work?

But does the system work?

Of course it does. It is a central regulator, as we have said, of many extremely important biological processes, above all immunity. This is the system that decides how immune cells, T and B lymphocytes, have to behave, in terms of cell destiny and cell state. It is of huge relevance in all inflammatory responses, and in our defense against infections. It works, it works very well.

And what happens if it does not work properly?

Of course, as in all very complex systems, errors can happen. Those interested can have a look at this recent paper:

30 years of NF-κB: a blossoming of relevance to human pathobiology

First of all, many serious genetic diseases have been linked to mutations in genes involved in the system. You can find a list in Table 1 of the above paper. Among them, for example, some forms of SCID, Severe combined immunodeficiency, one of the most severe genetic diseases of the immune system.

But, of course, a dysfunction of the NF-kB system has a very important role also in autoimmune diseases and in cancer.

Conclusions.

So, let’s try to sum up what we have seen here in the light of the original statement about biological systems that “are not machines”.

The NF-kB system is a perfect example. Even if we still understand very little of how it works, it is rather obvious that it is not a traditional machine.

A traditional machine would work differently. The signal would be transmitted from the membrane to the nucleus in the simplest possible way, without ambiguities and diversions. The Transcription Factor, once activated, would bind, at the level of the genome, very specific sites, each of them corresponding to a definite cascade of specific genes. The result would be clear cut, almost mechanical. Like a watch.

But that’s not the way things happen. There are myriads of variations, of ambiguities, of stochastic components.

The signal arrives at the membrane in multiple ways, very different from one another: IL1, IL17, TNF, bacterial LPS, and immune activation of the B cell receptor (BCR) or the T cell receptor (TCR) are all possible signals.

The signal is transmitted to the NF-kB proteins in very different ways: canonical or non canonical activation, involving complex protein structures such as:

The CBM signalosome, intermediate between immune activation of the BCR or TCR and canonical activation of NF-kB. This complex is made of at least three proteins: CARD11, Bcl10, and MALT1.

The IKK complex in canonical activation: this is made of three proteins, IKK alpha, IKK beta, and NEMO. Its purpose is to phosphorylate IkB, the inhibitor of the dimers, so that it can be ubiquitinated and released from the dimer. Then the dimer can relocate to the nucleus.

Non canonical pathway: it involves the following phosphorylation cascade: NIK -> IKK alpha dimer -> RelB–p100 dimer -> RelB–p52 dimer (the final TF). It operates during the development of lymphoid organs and is responsible for the generation of B and T lymphocytes.

Different kinds of activated dimers relocate to the nucleus.

Different dimers, in varying abundance, interact with many different binding sites: complete or incomplete consensus sites, and probably others. The interaction is usually brief, and it can generate an oscillating pattern, or a more stable pattern.

Completely different sets of genes are transcribed in different cell types and in different contexts, because of the interaction of NF-kB TFs with their promoters.

Many other factors and systems contribute to the final result.

The chromatin state of the cell at the moment of the NF-kB activation is essential to determine the accessibility of different binding sites, and therefore the final transcription pattern.

All these events and interactions are quick, unstable, far from equilibrium. A lot of possible random noise is involved.

In spite of that amazing complexity and the potentially stochastic nature of the system, reliable transcription regulation and results are obtained in most cases. Those results are essential to immune cell differentiation, immune response, both innate and adaptive, inflammation, apoptosis, and many other crucial cellular processes.

So, let’s go back to our initial question.

Is this the working of a machine?

Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.

But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.

It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.

It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

It is, from all points of view, amazing.

Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.

And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer must it be for a system like NF-kB? Or, for that matter, for any complex cellular system?

Do you still have any doubts?

Added graphic: The evolutionary history, in terms of human conserved information, of the three proteins in the CBM signalosome.
On the y axis, homologies with the human protein as bits per aminoacid (bpa). On the x axis, approximate time of appearance in millions of years.
The graphic shows the big information jump in vertebrates for all three proteins, especially CARD11.
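For readers unfamiliar with the y-axis metric, here is a minimal sketch of how “bits per aminoacid” is obtained, assuming, as the caption suggests, that it is an alignment bit score divided by the length of the human protein; the numbers below are placeholders, not the values behind the graphic.

```python
# bpa = alignment bit score with the human protein / length of the human protein.
def bits_per_aa(alignment_bit_score: float, human_protein_length: int) -> float:
    return alignment_bit_score / human_protein_length

# Placeholder numbers only: a 500-bit hit against a 1000-aa human protein.
print(bits_per_aa(500.0, 1000))  # -> 0.5 bpa
```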

333 Replies to “Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation.”

  1. 1
    Eugene says:

    My biggest concern is not even about the evolution vs. ID. It is about the technology used for the machinery of life being orders of magnitude more complex than what our brains seem to be capable of understanding or analyzing. In other words, we’re already way more complex than any machinery we can realistically hope to create. And we already exist (or being simulated, doesn’t matter). What purpose do we serve then to whoever is in possession of the technology we’re made with?

  2. 2
    gpuccio says:

    Eugene:

    Thank you for the comment.

    the technology used for the machinery of life being orders of magnitude more complex than what our brains seem to be capable of understanding or analyzing.

    Yes, that’s exactly the point I was trying to make.

    What purpose do we serve then to whoever is in possession of the technology we’re made with?

    Well, that’s certainly a much bigger question. And, in many respects, a philosophical one.

    However, we can certainly try to get some clues from the design as we see it. For example, I have said very often that the main driving purpose of biological design, far from being mere survival and fitness, as neo-darwinists believe, seems to be the desire to express ever more complex life and, through life, ever more complex functions.

    It should be rather obvious that, if the true purpose of biological beings were to achieve the highest survival and fitness, as neo-darwinists believe, life should have easily stopped at prokaryotes.

  3. 3
    gpuccio says:

    To all:

    Two of the papers I quote in the OP:

    Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle
    https://www.frontiersin.org/articles/10.3389/fimmu.2019.00705/full

    and:

    NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration
    https://www.frontiersin.org/articles/10.3389/fimmu.2019.00705/full

    are really part of a research topic:

    Understanding Immunobiology Through The Specificity of NF-kB
    https://www.frontiersin.org/research-topics/7955/understanding-immunobiology-through-the-specificity-of-nf-b#articles

    including 8 very interesting and very recent papers about NF-kB, at Frontiers in Immunology.

    Here are the titles:

    Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB

    An NF-kB Activity Calculator to Delineate Signaling Crosstalk: Type I and II Interferons Enhance NF-kB via Distinct Mechanisms

    Cellular Specificity of NF-kB Function in the Nervous System

    Immune Differentiation Regulator p100 Tunes NF-kB Responses to TNF

    Techniques for Studying Decoding of Single Cell Dynamics

    NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration (quoted in the OP)

    Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle (quoted in the OP)

    Signal Distortion: How Intracellular Pathogens Alter Host Cell Fate by Modulating NF-kB Dynamics

    You can access all of them from the linked page.

    Those papers, as a whole, certainly add a lot to the ideas I have expressed in the OP.

    I will have a look at all of them, and discuss here the most interesting things.

  4. 4
    PeterA says:

    Right from the start, GP graciously warns us (curious readers) to fasten our seat belts and get ready for a thrilling ride that should be filled with very insightful but provocative explanations (perhaps a little too technical for some folks):

    the cell implements the same functions as complex machines do, and much more.

    to do that, you need much greater functional complexity than you need to realize a conventional machine.

    dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. 

    Please, note that almost a year ago GP wrote this excellent article:
    Transcription Regulation: A Miracle Of Engineering
    (visited 3,545 times and commented 334 times)

    following another very interesting discussion started by PaV a month earlier:
    Chromatin Topology: The New (And Latest) Functional Complexity
    (visited 3,338 times and commented 241 times)

    Before this discussion goes further, I want to share my delight in seeing this excellent article here today and express my deep gratitude to GP for taking time to write it and for leading the discussion that I expect this fascinating (often mind boggling) topic should provoke.

  5. 5
    kairosfocus says:

    Another GP thought-treat! Yay!!!! KF

  6. 6
    jawa says:

    I second KF @5.
    It’s a pleasure to see a new OP by GP.
    However, as usual, it’s so dense that it requires some chewing before it can be digested, at least partially. 🙂
    Perhaps this time some loud anti-ID folks like the professors from Toronto and Kentucky will dare to present some valid arguments? However, I won’t hold my breath. 🙂

  7. 7
    OLV says:

    I agree with PeterA @ 4 and join Jawa @6 to second KF@5.

    However, before embarking in a careful reading of what GP has written, let me publicly confess here that I still don’t understand certain basic things associated with transcription:
    1. are there many DNA segments that can get transcribed by the RNA polymerase to a pre-mRNA that later can be spliced to form the mRNA that goes to translation?
    2. what mechanisms determine which of those multiple potential segments is transcribed at a given moment? Don’t they all have starting and ending points? Then why will the RNA-polymerase transcribe one segment and not another? Are the starting marks different for every DNA segment?
    3. is this an epigenetic issue or something else?
    Perhaps these (most probably dumb) questions have been answered many times in the literature I have read, but I still don’t quite get it. I would fail to answer those questions if I had to pass a test on this subject right now.
    Any help with this?
    Thanks.
    PS. the papers GP has linked in this OP are very interesting.

  8. 8
    gpuccio says:

    PeterA:

    Thank you. 🙂

    Indeed, the topic is fascinating. We really need to go beyond our conventional ideas about biology, armed with the powerful weapons of design inference and functional complexity.

  9. 9
    gpuccio says:

    KF:

    Thank you! 🙂

    Appreciate your enthusiasm! 🙂

  10. 10
    gpuccio says:

    Jawa:

    Thank you! 🙂

    I really hope there will be some interesting discussion.

  11. 11
    gpuccio says:

    OLV:

    Thank you! 🙂

    As you ask questions, here are my answers:

    1. Essentially, all protein coding genes, about 20000 in the human genome.

    2. It requires the binding of general TFs at the promoter and the formation of the pre-initiation complex (which is the same for all genes), plus the binding of specific TFs at one or more enhancer sites, with specific modifications of the chromatin structures. At least.

    3. Yes. It is an epigenetic process.

  12. 12
    bornagain77 says:

    Did you see this recent paper, GP? Particularly this: “Even between closely related species there’s a non-negligible portion of TFs that are likely to bind new sequences”?

    Dozens Of Genes Once Thought Widespread Are Unique To Humans – May 27, 2019
    Excerpt: Researchers at the Donnelly Centre in Toronto have found that dozens of genes, previously thought to have similar roles across different organisms, are in fact unique to humans and could help explain how our species came to exist. These genes code for a class of proteins known as transcription factors, or TFs, which control gene activity. TFs recognize specific snippets of the DNA code called motifs, and use them as landing sites to bind the DNA and turn genes on or off.,,,
    The findings reveal that some sub-classes of TFs are much more functionally diverse than previously thought.
    “Even between closely related species there’s a non-negligible portion of TFs that are likely to bind new sequences,” says Sam Lambert, former graduate student in Hughes’ lab who did most of the work on the paper and has since moved to the University of Cambridge for a postdoctoral stint.
    “This means they are likely to have novel functions by regulating different genes, which may be important for species differences,” he says.
    https://uncommondescent.com/human-evolution/dozens-of-genes-once-thought-widespread-are-unique-to-humans/

    paper

    Excerpt: Similarity regression inherently quantifies TF motif evolution, and shows that previous claims of near-complete conservation of motifs between human and Drosophila are inflated, with nearly half of the motifs in each species absent from the other, largely due to extensive divergence in C2H2 zinc finger proteins.
    https://www.nature.com/articles/s41588-019-0411-1

  13. 13
    gpuccio says:

    To all:

    Well, the first paper in the “reasearch topic” I mentioned at #3 is:
    Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF?B

    It immediately brings us back to an old and recurring concept:

    crosstalk

    Now, if there is one concept that screams design, that is certainly “crosstalk”.

    Because, to have crosstalk, you need at least two intelligent systems, each of them with its own “language”, interacting in intelligent ways. Or, of course, at least two intelligent people! 🙂

    This paper is about one specific aspect of the NF-kB system: transcription regulation in response to non specific stimuli from infecting agents, the so called innate immune response.

    You may remember from the OP that the specific receptors for bacterial or viral components (for example bacterial lipopolysaccharide, LPS) are called Toll-like receptors (TLRs), and that their activation converges, through its own complex pathways, into the canonical pathway of activation of the NF-kB system.

    This is a generic way to respond to infections, and is called the “innate immune response”, to distinguish it from the adaptive immune response, where T and B lymphocytes recognize specific patterns (epitopes) in specific antigens and react to them by a complex memory and amplification process. As we know, the NF-kB system has a very central role in adaptive immunity too, but that is completely different.

    But let’s go back to innate immunity. The response, in this case, is an inflammatory response. This response, of course, is more generic than the refined adaptive immune response, involving antibodies, killer cells and so on. However, even if simpler, the quality and quantity of the inflammatory response must be strictly fine-tuned, because otherwise it becomes really dangerous for the tissues.

    This paragraph sums up the main concepts in the paper:

    To ensure effective host defense against pathogens and to maintain tissue integrity, immune cells must integrate multiple signals to produce appropriate responses (14). Cells of the innate immune system are equipped with pattern recognition-receptors (PRRs) that detect pathogen-derived molecules, such as lipopolysaccharides and dsRNA (3). Once activated, PRRs initiate series of intracellular biochemical events that converge on transcription factors that regulate powerful inflammatory gene expression programs (15). To tune inflammatory responses, pathways that do not trigger inflammatory responses themselves may modulate signal transduction from PRRs to transcription factors through crosstalk mechanisms (Figure 1). Crosstalk allows cells to shape the inflammatory response to the context of their microenvironment and history (16). Crosstalk between two signaling pathways may emerge due to shared signaling components, direct interactions between pathway-specific components, and regulation of the expression level of a pathway-specific component by the other pathway (1, 17). Since toll-like receptors (TLRs) are the best characterized PRRs, they provide the most salient examples of crosstalk at the receptor module. Key determinants of tissue microenvironments are type I and II interferons (IFNs), which do not activate NF-κB, but regulate NF-κB-dependent gene expression (18–21). As such, this review focuses on the cross-regulation of the TLR-NF-κB signaling axis by type I and II IFNs.

    So, a few interesting points:

    a) TLRs, already a rather complex class of receptors, are part of a wider class of receptors, the pattern recognition-receptors (PRRs). Complexity never stops!

    b) The interferon system is another, different system involved in innate immunity, especially in viral infections. We all know its importance. Interferons are a complex set of cytokines with their own complex set of receptors and responses.

    c) However, the interferon system does not directly activate the NF-kB system. In a sense, they are two “parallel” signaling systems, both involved in innate immune responses.

    d) But, as the paper well outlines, there is a lot of “crosstalk” between the two systems. One interferes with the other at multiple levels. And that crosstalk is very important for a strict fine-tuning of the innate immune response and of inflammatory processes.

    Interesting, isn’t it?

    I quote here the conclusions:

    Concluding Remarks
    Maintaining a delicate balance between effective host defense and deleterious inflammatory responses requires precise control of NF-κB signaling (111). Multiple regulatory circuits have evolved to fine-tune NF-κB-mediated inflammation through context-specific crosstalk (112). In this work, we have highlighted specific components of the NF-κB signaling pathway for which crosstalk regulation is well-established. Despite decades of research, our current understanding of NF-κB signaling remains insufficient to yield effective pharmacological targets (111, 113). Effective and specific pharmacological modulation of NF-κB activity requires detailed, quantitative understanding of NF-κB signaling dynamics (57). Furthermore, achieving cell-type and context-specific modulation of NF-κB would be a panacea for many autoimmune and infectious diseases, as well as malignancies (112–114).

    To dissect the dynamic regulation of NF-κB signaling, quantitative approaches with single-cell resolution are required (115). By measuring the full distribution of signaling dynamics and gene expression in single cells, rather than simple averages, one can decipher cell-intrinsic properties from tissue-intrinsic properties (116–118). Such single-cell analyses may reveal strategies for targeting pathological cell populations with high specificity, which can mitigate adverse effects of pharmacological therapy (57, 113). Furthermore, with the aid of mathematical and computational modeling, one can conduct experiments in silico that may be prohibitive in vitro or ex vivo (57, 119, 120).

    Finally, cross-regulatory pathways may fine-tune NF-κB activity in a gene-specific manner. Many studies have identified the molecular components of gene-regulatory networks (GRNs) that control NF-κB-dependent gene expression (15, 121). The regulatory mechanisms that define the topology of these GRNs include chromatin remodeling, transcription initiation and elongation, and post-transcriptional processing (15). They allow for combinatorial control by multiple factors and pathways, as well as cross-regulation (15). Further work will be required to delineate them in various physiological contexts.

    As usual, emphasis is mine.

    Please note the “have evolved” at the beginning, practically used by default instead of a simple “do exist” or “can be observed”. 🙂

  14. 14
    gpuccio says:

    Bornagain77:

    Yes, I have looked at that paper. Interesting.

    Indeed, divergence in TF sequences and motifs is certainly one of the main tools of specific transcription control in different organisms.

    A lot of superficial ideas about TFs are probably due to the rather strong conservation of known DNA binding domains (DBDs). However, DBDs are only part of the story. The most interesting part of TF sequences is certainly to be found in the less conserved sequences and domains, even in intrinsically disordered sequences.

  15. 15
    gpuccio says:

    To all:

    This is a more general paper about oscillations in TF nuclear occupancy as a way to regulate transcription:

    Transcription factor oscillations in neural stem cells: Implications for accurate control of gene expression

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5345753/

    The abstract:

    Naturally occurring oscillations in glucocorticoids induce a cyclic activation of the glucocorticoid receptor (GR), a well-characterized ligand-activated transcription factor. These cycles of GR activation/deactivation result in rapid GR exchange at genomic response elements and GR recycling through the chaperone machinery, ultimately generating pulses of GR-mediated transcriptional activity of target genes. In a recent article we have discussed the implications of circadian and high-frequency (ultradian) glucocorticoid oscillations for the dynamic control of gene expression in hippocampal neural stem/progenitor cells (NSPCs) (Fitzsimons et al., Front. Neuroendocrinol., 2016). Interestingly, this oscillatory transcriptional activity is common to other transcription factors, many of which regulate key biological functions in NSPCs, such as NF-kB, p53, Wnt and Notch. Here, we discuss the oscillatory behavior of these transcription factors, their role in a biologically accurate target regulation and the potential importance for a dynamic control of transcription activity and gene expression in NSPCs.

    And here is the part about NF-kB:

    The NF-kB pathway is composed of a group of transcription factors that bind to form homo- or hetero-dimers. Once formed, these protein complexes control several cellular functions such as the response to stress and the regulation of growth, cell cycle, survival, apoptosis and differentiation in NSPCs.14-16 Oscillations in NF-kB were first observed in embryonic fibroblasts, this observation suggested that temporal control of NF-kB activation is coordinated by the sequential degradation and synthesis of inhibitor kappa B (IkB) proteins.3

    More recently, oscillations in the relative nuclear/cytosolic concentration of NF-kB transcription factors have been observed in single cells in vivo, indicating this may be an additional regulatory mechanism to control NF-kB-dependent transcriptional activity. Importantly, the frequency and amplitude of these oscillations changed in a cell-type dependent fashion and differentially affected the dynamics of gene expression,5 indicating that NF-kB transcription factors may use changes in the frequency and amplitude of their oscillatory dynamics to regulate the transcription of target genes.1,17 Thus, the NF-kB pathway provides a well-characterized example of how oscillatory transcription factor activity may encode additional, biologically relevant, information for an accurate control of gene expression.

    So, these “waves” of nuclear occupancy by TFs, regulating transcription according to their frequency/period and amplitude, seem to be a pattern that is not isolated at all. Maybe more important and common than we can at present imagine.

  16. 16
    Eugen says:

    We have classical, celestial and quantum mechanics but this article describes the process of what we should call chemical mechanics. Why not? 🙂

  17. 17
    OLV says:

    GP @11:

    (Regarding my questions @7)

    “It requires the binding of general TFs at the promoter and the formation of the pre-initiation complex (which is the same for all genes), plus the binding of specific TFs at one or more enhancer sites, with specific modifications of the chromatin structures. At least.”

    thanks for the explanation.

    Why “at least”? Could there be more?

    With the information you provided, I found this:

    Introduction to the Thematic Minireview Series: Chromatin and transcription

  18. 18
    gpuccio says:

    Eugen at #16:

    We have classical, celestial and quantum mechanics but this article describes the process of what we should call chemical mechanics. Why not?

    Yes, why not?

    Chemical mechanics? That is a brilliant way to put it! 🙂

  19. 19
    gpuccio says:

    OLV at #17:

    “Why “at least”? Could there be more?”

    Yes. There can always be more, in biology. Indeed, strangely, there always is more. 🙂

    By the way, nice mini-review about chromatin and transcription you found! I will certainly read it with great attention.

  20. 20
    gpuccio says:

    To all:

    We have said that NF-kB is a ubiquitously expressed transcription factor. It really is!

    So, while its better understood functions are mainly related to the immune system and inflammation, it also implements completely different functions in other types of cells.

    This very interesting paper, which is part of the research topic quoted at #3, is about the increasing evidence of the important role of the NF-kB system in the Central Nervous System:

    Cellular Specificity of NF-kB Function in the Nervous System

    https://www.frontiersin.org/articles/10.3389/fimmu.2019.01043/full

    And, again, it focuses on the cellular specificity of the NF-kB response.

    Here is the introduction:

    Nuclear Factor Kappa B (NF-kB) is a ubiquitously expressed transcription factor with key functions in a wide array of biological systems. While the role of NF-kB in processes, such as host immunity and oncogenesis has been more clearly defined, an understanding of the basic functions of NF-kB in the nervous system has lagged behind. The vast cell-type heterogeneity within the central nervous system (CNS) and the interplay between cell-type specific roles of NF-kB contributes to the complexity of understanding NF-kB functions in the brain. In this review, we will focus on the emerging understanding of cell-autonomous regulation of NF-kB signaling as well as the non-cell-autonomous functional impacts of NF-kB activation in the mammalian nervous system. We will focus on recent work which is unlocking the pleiotropic roles of NF-kB in neurons and glial cells (including astrocytes and microglia). Normal physiology as well as disorders of the CNS in which NF-kB signaling has been implicated will be discussed with reference to the lens of cell-type specific responses.

    Table 1 in the paper lists the following functions for NF-kB in neurons:

    -Synaptic plasticity
    -Learning and memory
    -Synapse to nuclear communication
    -Developmental growth and survival in response to trophic cues

    And, for glia:

    -Immune response
    -Injury response
    -Glutamate clearance
    -Central control of metabolism

    As can be seen, while the roles in glial cells are more similar to what we would expect from the more common roles in the immune system, the roles in neurons are much more specific and refined.

    The Table also mentions the following:

    “The pleiotropic functions of the NF-kB signaling pathway coupled with the cellular diversity of the nervous system mean that this table reflects generalizations, while more specific details are in the text of this review.”

    So, while I certainly invite all interested to look at the “more specific details”, I am really left with the strange feeling that, for the same reasons mentioned there (pleiotropic functions, cellular diversity, and probably many other things), everything we know about the NF-kB system, and probably all similar biological systems, really “reflects generalizations”.

    And that should really give us a deep sense of awe.

  21. 21
    gpuccio says:

    To all:

    This paper deals in more detail with the role of NF-kB system in synaptic plasticity, memory and learning:

    Transcriptional Control of Synaptic Plasticity by Transcription Factor NF-kB

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4736603/

    Abstract
    Activation of nuclear factor kappa B (NF-kB) transcription factors is required for the induction of synaptic plasticity and memory formation. All components of this signaling pathway are localized at synapses, and transcriptionally active NF-kB dimers move to the nucleus to translate synaptic signals into altered gene expression. Neuron-specific inhibition results in altered connectivity of excitatory and inhibitory synapses and functionally in selective learning deficits. Recent research on transgenic mice with impaired or hyperactivated NF-kB gave important insights into plasticity-related target gene expression that is regulated by NF-kB. In this minireview, we update the available data on the role of this transcription factor for learning and memory formation and comment on cross-sectional activation of NF-kB in the aged and diseased brain that may directly or indirectly affect kB-dependent transcription of synaptic genes.

    1. Introduction
    Acquisition and consolidation of new information by neuronal networks often referred to as learning and memory formation depend on the instant alterations of electrophysiological parameters of synaptic connections (long-term potentiation, long-term depression), on the generation of new neurons (neuroneogenesis), on the outgrowth of axons and dendrites (neuritogenesis), and on the formation/remodulation of dendritic spines (synaptogenesis). The transmission of active synapses becomes potentiated by additional opening of calcium channels and incorporation of preexisting channel proteins, that is, during the induction of long-term potentiation. In contrast, long-term structural reorganization of the neuronal network depends on the induction of specific gene expression programs [1]. The transcription factor NF-kB has been shown to be involved in all of the aforementioned processes of learning-associated neuronal plasticity, that is, long-term potentiation, neuroneogenesis, neuritogenesis, and synaptogenesis (for review, see [2]).

    A few concepts:

    a) All NF-kB Pathway Proteins Are Present at the Synapse.

    b) NF-kB Becomes Activated at Active Synapses

    c) NF-kB Induces Expression of Target Genes for Synaptic Plasticity

    d) Activation of NF-kB Is Required for Learning and Memory Formation

  22. 22
    jawa says:

    Can’t understand why the anti-ID folks allow GP to discredit neo-Darwinism so boldly in his OPs and commentaries. Are there objectors left out there? Have they missed GP’s arguments?
    Where are professors Larry Moran, Art Hunter, and other distinguished academic personalities that openly oppose ID?
    Did they give up? Do they lack solid arguments to debate GP?
    Are they afraid of experiencing public embarrassment?

  23. 23
    jawa says:

    Sorry, someone called my attention to my misspelling of UKY Professor Art Hunt’s name in my previous post. Mea culpa. 🙁

    I was referring to this distinguished professor who has posted interesting comments here before:

    https://pss.ca.uky.edu/person/arthur-hunt
    http://www.uky.edu/~aghunt00/agh.html

    It would be interesting to have him back here debating GP.

  24. 24
    Silver Asiatic says:

    jawa

    Can’t understand why the anti-ID folks allow GP to discredit neo-Darwinism so boldly in his OPs and commentaries.

    Discrediting Neo-Darwinism is one phase that we go through. Probably there is enough dissent within evolutionary science that they will back off from the more extreme proclamations of the greatness of Darwin. Mainstream science mags are openly saying things like “it overturns Darwinian ideas”. They don’t mind the idea of revolution. They’re building a defense for the next phase. It won’t be Neo-Darwinism but a collection of ad hoc observations and speculations. They explain that things happen. Self-organizing chemical determination caused it. They don’t need mutations or selection. Any mindless actions will do. It’s not about Darwin, and it’s not even about evolution. It’s not even about science. It’s all just a program to explain the world according to a pre-existing belief system. Even materialism is expendable when it is shown to be ridiculous. They will sell-out and jettison all previous claims and everything they use and just grab another (that’s how science works, we hear) – it’s all about protecting their inner belief. That’s the one thing that drives all of it. We know what that inner belief is, and ID is an attempt to chip away at it from the edges – indirectly and carefully, using their own terminology and doctrines. We’ve done well.
    But defeating Darwin is only a small part. Behe has been doing it for years and they’ll eventually accept his findings. The evolution story line will just adjust itself.
    Proving that there is actually Intelligent Design is much more difficult and without a knock-down argument, our best efforts remain ignored.

  25. 25
    gpuccio says:

    Jawa at #22:

    Frankly, I don’t think they are interested in my arguments. They are probably too bad!

  26. 26
    gpuccio says:

    Jawa and others:

    Or maybe they don’t believe that there is anything in my arguments that really favours design. Some have made that objection in the past, I believe: good arguments, but what have they to do with design?

    Well. I believe that they have a lot to do with design.

    What do you think? Do my arguments in this OP, about harnessing stochastic change to get strict function, favour the design hypothesis? Or are they perfectly compatible with a neo-darwinian view of reality?

    Just to know…

  27. 27
    gpuccio says:

    Jawa at #23:

    Of course Arthur Hunt would be very welcome here. Indeed, any competent defender of the neo-darwinian paradigm would be very welcome here.

  28. 28
    gpuccio says:

    Silver Asiatic at #24:

    I think that the amazing complexity of network functional configurations in these complex regulation systems is direct evidence of intelligence and purpose. It is, of course, also an obvious falsification of the neo-darwinist paradigm, which cannot even start to try to explain facts of that kind.

    You are right that post-post-neo-darwinists are trying as well as they can to build new and more fashionable religions, such as self-organization, emerging properties, magical stochastic systems, and any other intangible, imaginary principle that is supposed to help.

    But believe me, that will not do. That simply does not work.

    When really pressured, they always go back to the good old fairy tale: RV + NS. In the end, it’s the only lie that retains some superficial credibility. The only game in town.

    Except, of course, design. 🙂

  29. 29
    gpuccio says:

    To all:

    This is interesting:

    Single-molecule dynamics and genome-wide transcriptomics reveal that NF-kB (p65)-DNA binding times can be decoupled from transcriptional activation

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6353211/

    Abstract
    Transcription factors (TFs) regulate gene expression in both prokaryotes and eukaryotes by recognizing and binding to specific DNA promoter sequences. In higher eukaryotes, it remains unclear how the duration of TF binding to DNA relates to downstream transcriptional output. Here, we address this question for the transcriptional activator NF-kB (p65), by live-cell single molecule imaging of TF-DNA binding kinetics and genome-wide quantification of p65-mediated transcription. We used mutants of p65, perturbing either the DNA binding domain (DBD) or the protein-protein transactivation domain (TAD). We found that p65-DNA binding time was predominantly determined by its DBD and directly correlated with its transcriptional output as long as the TAD is intact. Surprisingly, mutation or deletion of the TAD did not modify p65-DNA binding stability, suggesting that the p65 TAD generally contributes neither to the assembly of an “enhanceosome,” nor to the active removal of p65 from putative specific binding sites. However, TAD removal did reduce p65-mediated transcriptional activation, indicating that protein-protein interactions act to translate the long-lived p65-DNA binding into productive transcription.

    Now, let’s try to understand what this means.

    First of all, just to avoid confusion, p65 is just another name for RelA, the most common among the 5 proteins that contribute to NF-kB dimers. The paper here studied the behaviour of the p65(RelA)-p50 dimer, with special focus on the RelA interaction with DNA.

    Now, we know that RelA, like all TFs, has a DNA binding domain (DBD) which binds specific DNA sites. We also know that the DBD is usually strongly conserved, and is supposed to be the most functional part in the TF.

    The paper here shows, in brief, that the DBD is really responsible for the DNA binding and for its stability (the duration of the binding), and the duration is connected to transcription. However, it is not the DBD itself that works on transcription, but rather the two protein-protein transactivation domains (TADs). While DNA binding is necessary to activate transcription, mere DNA binding does not work: mutations in the TADs will reduce transcription, even if the DNA binding remains stable. IOWs, it’s the TADs that really affect transcription, even if the DBD is necessary.

    OK, why is that interesting?

    Let’s see. The DBD is located, in the RelA molecule, in the first 300 AAs (the human protein is 551 AAs long). The two TADs are located, instead, in the last part of the molecule, more or less the last 100 – 200 AAs.

    So, I have blasted the human protein against our old friends, cartilaginous fishes.

    Is the protein conserved across our usual 400+ million years?

    The answer is the same as for most TFs: moderately so. In Rhincodon typus, we have about 404 bits of homology, less than 1 bit per aminoacid (bpa). Enough, but not too much.

    But is it true that the DBD is highly conserved?

    It certainly is. The 404 bits of homology, indeed, are completely contained in the first 300 AAs or so. IOWs, the homology is practically completely due to the DBD.

    So yes, the DBD is highly conserved.

    The rest of the sequence, not at all.

    In particular, the last 100 – 200 AAs at the C terminal, where the TAD domains are localized, show almost no homology between humans and cartilaginous fishes.

    But… we know that those TAD domains are essential for the function. It’s them that really activate the transcription cascade. We can have no doubt about that!

    And so?

    So, this is a clear example of a concept that I have tried to defend many times here.

    There is function which remains the same through natural history. Therefore, the corresponding sequences are highly conserved.

    And there is function which changes. Which must change from species to species. Which is more specific to the individual species.

    That second type of function is not highly conserved at sequence level. Not because it is less essential, but because it is different in different species, and therefore has to change to remain functional.

    So, in RelA we can distinguish (at least) two different functions:

    a) The DNA binding: this function is implemented by the DBD (first 300 AAs). It happens very much in the same way in humans and cartilaginous fishes, and therefore the corresponding sequences remain highly homologous after 400+ million years of evolutionary separation.

    b) The protein-protein interaction which really activates the specific transcription: this function is implemented by the TADs (last 200 AAs). It is completely different in cartilaginous fishes and humans, because probably different genes are activated by the same signal, and therefore the corresponding sequence is not conserved.

    But it is highly functional just the same. In different ways, in the two different species.

    IOWs, my measure of functional information based on conserved homologies through long evolutionary times does measure functional information, but usually underestimates it. For example, in this case the value of 404 bits would measure only the conserved function in the DBD, but it would miss completely the undeniable functional information in the TAD domains, because that information, while certainly present, is not conserved among species.
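
    Just to make the arithmetic explicit, here is a minimal sketch of the bits-per-aminoacid calculation, using only the figures quoted above (404 bits, 551 AAs, a DBD of roughly the first 300 AAs; the domain boundaries are approximate):

    ```python
    # Toy illustration of the bits-per-aminoacid (bpa) arithmetic described above.
    # All numbers come from the comment itself (human RelA vs. Rhincodon typus);
    # the attribution of essentially all conserved bits to the DBD is the BLAST
    # observation reported above, not a computed result.

    total_bits = 404        # BLAST homology, human RelA vs. cartilaginous fish
    protein_length = 551    # human RelA length in AAs
    dbd_length = 300        # approximate extent of the DBD (first ~300 AAs)

    print(f"whole protein: {total_bits / protein_length:.2f} bpa")  # ~0.73 bpa
    print(f"DBD region only: {total_bits / dbd_length:.2f} bpa")    # ~1.35 bpa
    # The C-terminal TAD region contributes essentially 0 conserved bits,
    # even though it is functionally essential: conservation-based measures
    # therefore capture only part of the functional information.
    ```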

    This is, IMO, a very important point.

  30. 30
    Silver Asiatic says:

    GP
    Agreed. You’ve done a great job of exposing the reality of those systems. The functional relationships are an indication of purpose and design, yes. I think what happens also is that evolutionists find some safety in the complexity that you reveal. They assume that nobody will actually go that far “down into the weeds”, so they can always claim there’s something going on that is far too sophisticated for the average IDist to understand. So, they hide in the details.
    You’ve called their bluff and shown what is really going on, and it is inexplicable from their mechanisms. They look for an escape but there is none. I agree also that it’s not merely a defeat of RM + NS that is indicated, but evidence of design in the actual operation of complex systems.
    Another tactic we see is that an extremely minor point is attacked and they attempt to show that it could have resulted from a mutation or HGT or drift. If they can make it half-way plausible then their entire claim will stand unrefuted, supposedly.
    It’s a game of hide-and-seek, whack-a-mole. We have to deal with 50 years of story-telling that just continued to build one assumption upon another, without any evidence, and having gained unquestioning support from academia simply on the idea that “evolution is right and every educated and intelligent person believes in it”. But even in papers citing evolution they never (or rarely) give the probabilistic outlooks on how it could have happened.

  31. 31
    Silver Asiatic says:

    GP

    What do you think? Do my arguments in this OP, about harnessing stochastic change to get strict function, favour the design hypothesis? Or are they perfectly compatible with a neo-darwinian view of reality?

    I think you did a great job, but just a thought …

    You responded to the notion that supported our view – the researcher says that the cell is not merely engineering but is more dynamic. So, we support that and you showed that the cell is far more than a machine.

    However, in supporting that researcher’s view, has the discussion changed?

    In this case, the researcher is actually saying that deterministic processes cannot explain these cellular functions. He says it’s all about self-organization, etc.

    Now, what you have done is amplified his statement very wonderfully. However …
    What remains open are a few things:
    1. Why didn’t the researcher, stating what you (and we) would and did – just conclude Design?
    2. The researcher is attacking Darwinism (subtly) while accepting some of it:

    This familiar understanding grounds the conviction that a cell’s organization can be explained reductionistically, as well as the idea that its molecular pathways can be construed as deterministic circuits. The machine conception of the cell owes a great deal of its success to the methods traditionally used in molecular biology. However, the recent introduction of novel experimental techniques capable of tracking individual molecules within cells in real time is leading to the rapid accumulation of data that are inconsistent with an engineering view of the cell.

    … so, hasn’t he already conceded the game to us on that point?

    Could we now show how self-organization is not a strong enough answer for this type of system?

    I believe we could simply use Nicholson’s paper to discredit Darwinism (as he does himself), and our amplification of his work does “favor a design view”. But we don’t have enough data on how he (and others) believe self-organization really works as a substitute for Darwinian mechanisms, and that weakens support for Design in some ways.

  32. 32
    gpuccio says:

    Bornagain77 at #12:

    I believe that my comment at #29 is strictly connected to your observations. It also expands, with a real example, the simple ideas I had already expressed at #14.

    So, you might like to have a look at it! 🙂

  33. 33
    gpuccio says:

    Silver Asiatic at #30:

    I absolutely agree with what you say here! 🙂

  34. 34
    gpuccio says:

    Silver Asiatic at #31:

    Very good points.

    Yes, my argument is exactly that: as the cell is more than a machine, and yet implements the same type of functions as traditional machines do, only with much higher flexibility and complexity, it requires a lot more intelligent design and engineering to be able to work.

    So, it is absolutely true that the researcher in that paper has made a greater point for Intelligent Design.

    But, of course, he (or they) will never admit such a thing! And we know very well why.

    Hence the appeal to “self-organization”, or to “stochastic systems”.

    Of course, that’s simply mystification. And not even a good one.

    I will comment on the famous concept of “self-organization” in my next post.

  35. 35
    bornagain77 says:

    Per Gp 32, it is not enough, per falsification, to find examples that support your theory. In other words, I can find plenty of counterexamples.

  36. 36
    gpuccio says:

    Bornagain77 at #35:

    I am not sure that I understand what you mean.

    My theory? Falsification? Counterexamples?

    At #12 you quote a paper that says:

    “Similarity regression inherently quantifies TF motif evolution, and shows that previous claims of near-complete conservation of motifs between human and Drosophila are inflated, with nearly half of the motifs in each species absent from the other, largely due to extensive divergence in C2H2 zinc finger proteins.”

    OK?

    At #14 I agree with the paper, and add a comment:

    “Indeed, divergence in TF sequences and motifs is certainly one of the main tools of specific transcription control in different organisms.

    A lot of superficial ideas about TFs are probably due to the rather strong conservation of known DNA binding domains (DBDs). However, DBDs are only part of the story. The most interesting part of TF sequences is certainly to be found in the less conserved sequences and domains, even in intrinsically disordered sequences.”

    OK?

    At #29 I reference a paper about RelA, one of the TFs discussed in this OP, that shows a clear example of what I said at #14: homology of the DBD and divergence of the functional TADs between humans and cartilaginous fishes. Which is exactly what was stated in the paper you quoted.

    What is the problem? What am I missing?

  37. 37
    bornagain77 says:

    “What is the problem? What am I missing?”

    Could be me missing something. I thought you might, with your emphasis on conservation, be pushing for CD again.

  38. 38
    gpuccio says:

    Silver Asiatic and all:

    OK, a few words about the myth of “self organization”.

    You say:

    “But we don’t have enough data on how he (and others) believe self-organization really works as a substitute for Darwinian mechanisms, and that weakens support for Design in some ways.”

    It is perfectly true that we “don’t have enough data” about that. We don’t have them because there are none: “self-organization” simply does not work as a substitute for Darwinian mechanisms. IOWs, it explains absolutely nothing about functional complexity (not that Darwinian mechanisms do, but at least they try).

    Let’s see. I would say that there is a correct concept of self-organization, and a completely mythological expansion of it to realities that have nothing to do with it.

    The correct concept of self-organization comes from physics and chemistry, essentially. It is the science behind systems that present some unexpected “order” deriving from the interaction of random components and physical laws.

    Examples:

    a) Physics: Heat applied evenly to the bottom of a tray filled with a thin sheet of viscous oil transforms the smooth surface of the oil into an array of hexagonal cells of moving fluid, called Bénard convection cells.

    b) Chemistry: A Belousov–Zhabotinsky reaction, or BZ reaction, is a nonlinear chemical oscillator, including bromine and an acid. These reactions are far from equilibrium and remain so for a significant length of time and evolve chaotically, being characterized by a noise-induced order.

    And so on.

    Now, the concept of self-organization has been artificially expanded to almost everything, including biology. But the phenomenon is essentially derived from this type of physical model.

    In general, in these examples, some stochastic system tends to achieve some more or less ordered stabilization towards what is called an attractor.

    Now, to make things simple, I will just mention a few important points that show how the application of those principles to biology is completely wrong.

    1) In all those well known physical systems, the system obeys the laws of physics, and the pattern that “emerges” can very well be explained as an interaction between those laws and some random component. Snowflakes are another example.

    2) The property we observe in these systems is some form of order. That is very important. It is the most important reason why self-organization has nothing to do with functional complexity.

    3) Functional complexity is the number of specific bits that are necessary to implement a function. It has nothing to do with a generic “order”. Take a protein that has an enzymatic activity, for example, and compare it to a snowflake. The snowflake has order, but no complex function. Its order can be explained by simple laws, and the differences between snowflakes can be explained by random differences in the conditions of the system. Instead, the function of a protein strictly depends on the sequence of AAs. It has nothing to do with random components, and it follows a very specific “recipe” coming from outside the system: the specific sequence in the protein, which in turn depends on the specific sequence of nucleotides in the protein coding gene. There is no way that such a specific sequence can be the result of “self-organization”. To believe that it is the result of Natural Selection is foolish, but at least it has some superficial rationale. But to believe that it can be the result of self-organization, of physical and chemical laws acting on random components, is total folly. (A small worked sketch of what “specific bits” means here is given after point 6 below.)

    4) The simple truth is that the sequence of AAs generates function according to chemical rules, but to find which sequence among all possible sequences will have the function requires deep understanding of the rules of chemistry, and extreme computational power. We are still not able to build functional proteins by a top down process. Bottom up processes are more efficient, but still require a lot of knowledge, computational power, and usually strictly guided artificial selection. Even so, we are completely unable to engineer anything like ATP synthase, as I have discussed in detail many times. Nor could RV + NS ever do that.

    But, certainly, no amount of “self-organization” in the whole reality could even begin to do such a thing.

    5) Complex networks like the one I have discussed here certainly elude our understanding in many ways. But one thing is certain: they do require tons of functional information at the level of the sequences in proteins and other parts of the genome to work correctly. As we have seen in the OP, mutations in different parts of the system are connected to extremely serious diseases. Of course, no self-organization of any kind can ever correct those small errors in digital functional information.

    6) The function of a protein is not an “emerging” quality of the protein any more than the function of a watch is an emerging quality of the gears. The function of a protein depends on a very precise correspondence between the digital sequence of AAs and the laws of biochemistry, which determines the folding and the final structure and status (or statuses) of the protein. This is information. The same information that makes the code for Excel a functional reality. Do we see codes for software emerging from self-organization? Maybe we should inform video game programmers of that; it would spare them a lot of work and time.
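
    Here is the small worked sketch promised at point 3, just to show what “specific bits” means; the protein length and the number of constrained positions are invented for the example, and only the formula reflects the concept:

    ```python
    import math

    # Hypothetical illustration of functional complexity as "specific bits":
    # the bits needed to specify the functional target among all possible
    # sequences. The numbers below are made up for the example.

    aa_alphabet = 20              # 20 amino acids
    protein_length = 120          # length of the imaginary protein
    constrained_positions = 35    # positions that must carry one specific AA

    search_space_bits = protein_length * math.log2(aa_alphabet)            # ~519 bits
    functional_info_bits = constrained_positions * math.log2(aa_alphabet)  # ~151 bits

    print(f"search space: {search_space_bits:.0f} bits")
    print(f"functional information: {functional_info_bits:.0f} bits")
    # A snowflake, by contrast, requires no such sequence specification:
    # its order follows from physical law plus random boundary conditions.
    ```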

    In the end, all these debates about self-organization, emerging properties and snowflakes have nothing to do with functional information. The only objects that exhibit functional information beyond 500 bits are, still, human artifacts and biological objects. Nothing else. Not snowflakes, not viscous oil, not the game of life. Only human artifacts and biological objects.

    Those are the only objects in the whole known universe that exhibit thousands, millions, maybe billions of bits strictly aimed at implementing complex and obvious functions. The only existing instances of complex functional information.

  39. 39
    gpuccio says:

    Bornagain77:

    No. As you know, I absolutely believe in CD, but that is not the issue here. Homology is homology, and divergence is divergence, whatever the model we use to explain them.

    I just wanted to show an example of a protein (RelA), indeed a TF, where both homology (in the DBD) and divergence (in the TADs) are certainly linked to function.

    When I want to “push” for CD, I know how to do that.

  40. 40
    bornagain77 says:

    “I absolutely believe in CD”

    Interesting claim of absolute certainty from you given the discontinuous nature of the fossil record, the discontinuous nature of the genetic evidence, and the fact that no one has ever changed the basic body plan of an organism into another body plan.

    Perhaps, given your poverty of empirical warrant, a bit more modest measure of certainty would be wise on your part?

    For instance, it seems you are holding somewhat to a reductive materialistic framework in your ‘absolute’ certainty about CD, and yet, the failure of reductive materialism to be able to explain the basic form and/or body plan of any particular organism occurs at a very low level. Much lower than DNA itself.
    In the following article entitled ‘Quantum physics problem proved unsolvable: Gödel and Turing enter quantum physics’, which studied the derivation of macroscopic properties from a complete microscopic description, the researchers remark that even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,, The researchers further commented that their findings “challenge the reductionists’ point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description.”

    Quantum physics problem proved unsolvable: Gödel and Turing enter quantum physics – December 9, 2015
    Excerpt: A mathematical problem underlying fundamental questions in particle and quantum physics is provably unsolvable,,,
    It is the first major problem in physics for which such a fundamental limitation could be proven. The findings are important because they show that even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,,
    “We knew about the possibility of problems that are undecidable in principle since the works of Turing and Gödel in the 1930s,” added Co-author Professor Michael Wolf from Technical University of Munich. “So far, however, this only concerned the very abstract corners of theoretical computer science and mathematical logic. No one had seriously contemplated this as a possibility right in the heart of theoretical physics before. But our results change this picture. From a more philosophical perspective, they also challenge the reductionists’ point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description.”
    http://phys.org/news/2015-12-q.....godel.html

    In other words, even with a complete microscopic description of an organism, it is impossible for you to have ‘absolute’ certainty about the macroscopic behavior of that organism much less to have ‘absolute’ certainty about CD.

  41. 41
    gpuccio says:

    Bornagain77:

    It’s amazing how much you misunderstand me, even if I have repeatedly tried to explain my views to you.

    1) “Interesting claim of absolute certainty from you given the discontinuous nature of the fossil record, the discontinuous nature of the genetic evidence, and the fact that no one has ever changed the basic body plan of an organism into another body plan.”

    Interesting claims, that have nothing to do with my belief in CD, and about which I can absolutely agree with you. I absolutely believe that the fossil record is discontinuous, that genetic evidence is discontinuous, and that no one has ever changed the basic body plan of an organism into another body plan. And so?

    2) “Perhaps, given your poverty of empirical warrant, a bit more modest measure of certainty would be wise on your part?”

    I don’t believe that scientific certainty is ever absolute. I use “absolutely” to express my strength of certainty that there is empirical warrant for CD. And I have explained why, many times, even to you. As I have explained many times to you what I mean by CD. But I am not sure that you really listen to me. That’s OK, I believe in free will, as you probably know.

    3) “For instance, it seems you are holding somewhat to a reductive materialistic framework in your ‘absolute’ certainty about CD, and yet, the failure of reductive materialism to be able to explain the basic form and/or body plan of any particular organism occurs at a very low level. Much lower than DNA itself.”

    I am in no way a reductionist, least of all a materialist. My certainty about CD only derives from scientific facts, and from what I believe to be the most reasonable way to interpret them. As I have tried to explain many times.

    Essentially, the reasons why I believe in CD (again, the type of CD that I believe in, and that I have tried to explain to you many times) are essentially of the same type for which I believe in Intelligent Design. There is nothing reductionist or materialist in them. Only my respect for facts.

    For example, I do believe that we do not understand at all how body plans are implemented. You seem to know more. I am happy for you.

    4) “In other words, even with a complete microscopic description of an organism, it is impossible for you to have ‘absolute’ certainty about the macroscopic behavior of that organism much less to have ‘absolute’ certainty about CD.”

    I have just stated that IMO we don’t understand at all how body plans are implemented. Moreover, I don’t believe at all that we have any complete microscopic description of any living organism. We are absolutely (if you allow the word) far from that. OK. But I still don’t understand what that has to do with CD.

    For the last time: CD, for me, just means that there is very strong evidence that the molecular information in DNA and proteins of already existing species is physically passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on. As proven by the differences in neutral sites, between species.

    I hope this is the last time I have to tell you that.

  42. 42
    bornagain77 says:

    “For the last time: CD, for me, just means that there is very strong evidence that the molecular information in DNA and proteins of already existing species is physically passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on. As proven by the differences in neutral sites, between species.
    I hope this is the last time I have to tell you that.”

    To this in particular,,, “passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on.”

    All new information is “designed in the process”???? Please elaborate on exactly what process you are talking about.

    As to examples that falsify the common descent model:

    Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.

    New Paper by Winston Ewert Demonstrates Superiority of Design Model – Cornelius Hunter – July 20, 2018
    Excerpt: Ewert’s three types of data are: (i) sample computer software, (ii) simulated species data generated from evolutionary/common descent computer algorithms, and (iii) actual, real species data.
    Ewert’s three models are: (i) a null model which entails no relationships between any species, (ii) an evolutionary/common descent model, and (iii) a dependency graph model.
    Ewert’s results are a Copernican Revolution moment. First, for the sample computer software data, not surprisingly the null model performed poorly. Computer software is highly organized, and there are relationships between different computer programs, and how they draw from foundational software libraries. But comparing the common descent and dependency graph models, the latter performs far better at modeling the software “species.” In other words, the design and development of computer software is far better described and modeled by a dependency graph than by a common descent tree.
    Second, for the simulated species data generated with a common descent algorithm, it is not surprising that the common descent model was far superior to the dependency graph. That would be true by definition, and serves to validate Ewert’s approach. Common descent is the best model for the data generated by a common descent process.
    Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.
    Where It Counts
    Let me repeat that in case the point did not sink in. Where it counted, common descent failed compared to the dependency graph model. The other data types served as useful checks, but for the data that mattered — the actual, real, biological species data — the results were unambiguous.
    Ewert amassed a total of nine massive genetic databases. In every single one, without exception, the dependency graph model surpassed common descent.
    Darwin could never have even dreamt of a test on such a massive scale. Darwin also could never have dreamt of the sheer magnitude of the failure of his theory. Because you see, Ewert’s results do not reveal two competitive models with one model edging out the other.
    We are not talking about a few decimal points difference. For one of the data sets (HomoloGene), the dependency graph model was superior to common descent by a factor of 10,064. The comparison of the two models yielded a preference for the dependency graph model of greater than ten thousand.
    Ten thousand is a big number. But it gets worse, much worse.
    Ewert used Bayesian model selection which compares the probability of the data set given the hypothetical models. In other words, given the model (dependency graph or common descent), what is the probability of this particular data set? Bayesian model selection compares the two models by dividing these two conditional probabilities. The so-called Bayes factor is the quotient yielded by this division.
    The problem is that the common descent model is so incredibly inferior to the dependency graph model that the Bayes factor cannot be typed out. In other words, the probability of the data set, given the dependency graph model, is so much greater than the probability of the data set given the common descent model, that we cannot type the quotient of their division.
    Instead, Ewert reports the logarithm of the number. Remember logarithms? Remember how 2 really means 100, 3 means 1,000, and so forth?
    Unbelievably, the 10,064 value is the logarithm (base value of 2) of the quotient! In other words, the probability of the data on the dependency graph model is so much greater than that given the common descent model, we need logarithms even to type it out. If you tried to type out the plain number, you would have to type a 1 followed by more than 3,000 zeros. That’s the ratio of how probable the data are on these two models!
    By using a base value of 2 in the logarithm we express the Bayes factor in bits. So the conditional probability for the dependency graph model has a 10,064 advantage over that of common descent.
    10,064 bits is far, far from the range in which one might actually consider the lesser model. See, for example, the Bayes factor Wikipedia page, which explains that a Bayes factor of 3.3 bits provides “substantial” evidence for a model, 5.0 bits provides “strong” evidence, and 6.6 bits provides “decisive” evidence.
    This is ridiculous. 6.6 bits is considered to provide “decisive” evidence, and when the dependency graph model case is compared to the common descent case, we get 10,064 bits.
    But It Gets Worse
    The problem with all of this is that the Bayes factor of 10,064 bits for the HomoloGene data set is the very best case for common descent. For the other eight data sets, the Bayes factors range from 40,967 to 515,450.
    In other words, while 6.6 bits would be considered to provide “decisive” evidence for the dependency graph model, the actual, real, biological data provide Bayes factors of 10,064 on up to 515,450.
    We have known for a long time that common descent has failed hard. In Ewert’s new paper, we now have detailed, quantitative results demonstrating this. And Ewert provides a new model, with a far superior fit to the data.
    https://evolutionnews.org/2018/07/new-paper-by-winston-ewert-demonstrates-superiority-of-design-model/
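
    As a quick side check of the arithmetic in the excerpt, a minimal sketch (it only converts the quoted 10,064-bit figure into decimal digits):

    ```python
    import math

    # A Bayes factor of 10,064 bits (log base 2) expressed as a decimal number:
    bits = 10064
    decimal_digits = bits * math.log10(2)    # ~3029.6
    print(f"about 10^{decimal_digits:.0f}")  # i.e. a 1 followed by ~3,030 zeros
    ```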

    Response to a Critic: But What About Undirected Graphs? – Andrew Jones – July 24, 2018
    Excerpt: The thing is, Ewert specifically chose Metazoan species because “horizontal gene transfer is held to be rare amongst this clade.” Likewise, in Metazoa, hybridization is generally restricted to the lower taxonomic groupings such as species and genera — the twigs and leaves of the tree of life. In a realistic evolutionary model for Metazoa, we can expect to get lots of “reticulation” at lower twigs and branches, but the main trunk and branches ought to have a pretty clear tree-like form. In other words, a realistic undirected graph of Metazoa should look mostly like a regular tree.
    https://evolutionnews.org/2018/07/response-to-a-critic-but-what-about-undirected-graphs/

    This Could Be One of the Most Important Scientific Papers of the Decade – July 23, 2018
    Excerpt: Now we come to Dr. Ewert’s main test. He looked at nine different databases that group genes into families and then indicate which animals in the database have which gene families. For example, one of the nine databases (Uni-Ref-50) contains more than 1.8 million gene families and 242 animal species that each possess some of those gene families. In each case, a dependency graph fit the data better than an evolutionary tree.
    This is a very significant result. Using simulated genetic datasets, a comparison between dependency graphs and evolutionary trees was able to distinguish between multiple evolutionary scenarios and a design scenario. When that comparison was done with nine different real genetic datasets, the result in each case indicated design, not evolution. Please understand that the decision as to which model fit each scenario wasn’t based on any kind of subjective judgement call. Dr. Ewert used Bayesian model selection, which is an unbiased, mathematical interpretation of the quality of a model’s fit to the data. In all cases Dr. Ewert analyzed, Bayesian model selection indicated that the fit was decisive. An evolutionary tree decisively fit the simulated evolutionary scenarios, and a dependency graph decisively fit the computer programs as well as the nine real biological datasets.
    http://blog.drwile.com/this-co.....he-decade/

    Why should mitochondria define species? – 2018
    Excerpt: The particular mitochondrial sequence that has become the most widely used, the 648 base pair (bp) segment of the gene encoding mitochondrial cytochrome c oxidase subunit I (COI),,,,
    The pattern of life seen in barcodes is a commensurable whole made from thousands of individual studies that together yield a generalization. The clustering of barcodes has two equally important features: 1) the variance within clusters is low, and 2) the sequence gap among clusters is empty, i.e., intermediates are not found.,,,
    Excerpt conclusion: , ,The simple hypothesis is that the same explanation offered for the sequence variation found among modern humans applies equally to the modern populations of essentially all other animal species. Namely that the extant population, no matter what its current size or similarity to fossils of any age, has expanded from mitochondrial uniformity within the past 200,000 years.,,,
    https://phe.rockefeller.edu/news/wp-content/uploads/2018/05/Stoeckle-Thaler-Final-reduced.pdf

    Sweeping gene survey reveals new facets of evolution – May 28, 2018
    Excerpt: Darwin perplexed,,,
    And yet—another unexpected finding from the study—species have very clear genetic boundaries, and there’s nothing much in between.
    “If individuals are stars, then species are galaxies,” said Thaler. “They are compact clusters in the vastness of empty sequence space.”
    The absence of “in-between” species is something that also perplexed Darwin, he said.
    https://phys.org/news/2018-05-gene-survey-reveals-facets-evolution.html

  43. 43
    gpuccio says:

    Bornagain77 at #42:

    “All new information is ‘designed in the process”???? Please elaborate on exactly what process you are talking about.”

    It should be clear. However, let’s try again.

    Let’s say that there are 3 main models for how functional information comes into existence in biological beings.

    a) Descent with modifications generated by RV + NS: this is the neo-darwinian model. I absolutely (if you allow the word) reject it. So do you, I suppose.

    b) Descent with designed modifications: this is my model. This is the process I refer to: a process of design, of engineering, which derives new species from what already exists.

    The important point, that justifies the term “descent”, is that, as I have said, the old information that is appropriate is physically passed on from the ancestor to the new species. All the rest, the new functional information, is engineered in the design process.

    So, to be more clear, let’s say that species B appears in natural history at time T. Before it, there exists another species, A, which has some strong similarities to species B.

    Let’s say that, according to my model, species B derives physically from the already existing species A. How does it happen?

    Let’s say that, just as an imaginary example, A and B share about 50% of protein coding genes. The proteins coded by these genes are very similar in the two species, almost identical, at least at the beginning. The reason for that is that the functions implemented by those proteins in the two species are extremely similar.

    But that is only part of the game. Of course, B has a lot of new proteins, or parts of proteins, or simply regulatory parts of the genome, that are not the same as in A at all. Those sequences are absolutely functional, but they do things that are specific to B, and do not exist in A. In the same way, many specific functions of A are not needed in B, and so they are not implemented there.

    Now, losing some proteins or some functions is not so difficult. We know that losing information is a very easy task, and requires no special ability.

    But how does all that new functional information arise in B? It did not exist in A, or in any other living organism that existed before time T. It arises in B for the first time, and approximately at time T.

    The obvious answer, in my model, is: it is newly designed functional information. If I did not believe that, I would be in the other field, and not here in ID.

    But the old information, the sequence information that retains its function from A to B? Well, in my model, very simply, it is physically passed on from A to B. That is the meaning of descent in my model. That’s what makes A an ancestor of B, even if a completely new process of design and engineering is necessary to derive B from A.

    Now, you may ask: how does that happen? Of course, we don’t know the details, but we know three important facts:

    1) There are signatures of neutral variation in the conserved sequences, roughly proportional to the evolutionary time split, that demonstrate that they are physically passed on. This is the single most important argument in favour of descent, and I am aware of no possible explanation of this fact outside of physical descent of those sequences.

    2) The new functional information arises often in big jumps, and is almost always very complex. For the origin of vertebrates, I have computed about 1.7 million bits of new functional information, arising in at most 20 million years. RV + NS could never do that, because it totally lacks the necessary probabilistic resources.

    3) The fossil record and the existing genomes and proteomes show no trace of the many functional intermediates that would be necessary for RV + NS to even try something. Therefore, RV + NS did not do it, because there is no trace of what should absolutely be there.

    So, how did design do it, with physical descent?

    Let’s say that we can imagine us doing it. If we were able. What would we do?

    It’s very simple: we would take a few specimens of A, bring them to some lab of ours, and work on them to engineer the new species with our powerful means of genetic engineering. Adding the new functional information to what already exists, and can still be functional in the new project.

    Where? And in what time?

    These are good questions. They are good questions in any case, even if you stick to what I think is your model, model c, soon to be described.

    Because species B does appear at time T. And that must happen somewhere. And that must happen in some time window.

    But the details are still to be understood. We know too little.

    But one thing is certain: both space and time are somehow restricted.

    Space is restricted, because of course the new species must appear somewhere. It does not appear at once all over the globe.

    But there is more. Model a, the neo-darwinian model, needs a process that takes place almost everywhere. Why? Because it badly needs as many probabilistic resources as possible. IOWs, it badly needs big numbers.

    Of course, we know very well that no reasonable big number will do. The probabilistic resources simply are not there. Even for bacteria crowding the whole planet for 5 billion years.

    But with small populations, any thought of RV and NS is blatantly doomed from the beginning.
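
    Just to give an idea of the orders of magnitude involved, a rough back-of-the-envelope sketch (the population and generation figures are assumptions, chosen to be deliberately generous):

    ```python
    import math

    # Upper bound on "probabilistic resources": total reproduction events
    # available to the biosphere, expressed in bits. All inputs are assumed,
    # generous guesses, not measurements.
    population = 1e30              # assumed planet-wide prokaryotic population
    years = 5e9                    # the 5 billion years mentioned above
    generations_per_year = 1000    # assumed, very generous

    total_events = population * years * generations_per_year
    resources_in_bits = math.log2(total_events)
    print(f"~2^{resources_in_bits:.0f} trials")   # roughly 2^142, i.e. ~142 bits
    # Even this absurdly generous bound is on the order of 140 bits: nowhere
    # near jumps of the size discussed above (e.g. ~1.7 million bits).
    ```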

    But design does not work that way. Design does not need big numbers, big populations. Especially if it is mainly top down engineering.

    So, we could very well engineer B working on a relatively small sample of A. In our lab.

    In what time? I really don’t know, but certainly not too much. As you well know, those information jumps are rather sudden in natural history. This is a fact.

    So? 1 minute? 1 year? 1 million years? Interesting questions, but in the end it is not much time anyway.

    Not instantaneously, I would say. Not in model b, anyway. If it is an engineering process, it needs time, anyway.

    So, what is important about this model?

    Simply that it is the best model that explains facts.

    1) The signatures of neutral variation in conserved sequences are perfectly explained. As those sequences have been passed on as they are from A to B, they keep those signatures. IOWs, if A has existed for 100 million years from some previous split, in those 100 million years neutral variation happens in the sequence, and differentiates that sequence in A from some homologous sequence in A1 (the organism derived from that old split). So, B inherits those changes from A, and if we compare B and A1, we find those differences, as we find them if we compare A and A1. The differences in B are inherited from A as it was 100 million years after the split from A1.

    2) The big jumps in functional information are, of course, explained by the design process, the only type of process that can do those things.

    3) There is no need for functional intermediates in the fossil record or in the genomes. What happens in the lab does not leave traces. We do not need big intermediate populations to be expanded by positive NS, to gain new huge probabilistic resources (as in model a). We just need a few samples, a few intermediates, in a limited time and space. There is no reason to expect any relevant trace from that process.

    Of course, the new engineered species, when it is ready and working, is released into the general environment. IOWs, it is “published”. That’s what we observe in the fossil record, and in the genomes: the release of the new engineered species. Nothing else.

    So, model b, my model, explains all three types of observed facts.

    c) No descent at all. This is, I believe, your model.

    What does that mean?

    Well, it can mean sudden “creation” (if the new species appears out of thin air, from nothing), or, more reasonably, engineering from scratch.

    I will not discuss the “creation” aspect. I would not know what to say, from a scientific point of view.

    But I will discuss the “engineering from scratch” model.

    However it is conceived (quick or slow, sudden or gradual), it implies one simple thing: each time, everything is re-engineered from scratch. Even what had already been engineered in previously existing species.

    From what? It’s simple. If it is not creation ex nihilo, “scratch” here can mean only one thing: from inanimate matter.

    IOWs, it means re-doing OOL each time a new species originates.

    OK, I believe there are many arguments against that model, but I will just state here the simplest: it does not explain fact 1)

    Moreover, I would definitely say that all your arguments against descent, however good (IMO, some are good, some are not), are always arguments against model a). They have no relevance at all against model b), my model.

    Once and for all, I absolutely (if you allow the word) reject model a).

    That said, I am rather sure that you will stick to your model, model c). That’s fine for me. But I wanted to clarify as much as possible.

  44. 44
    bornagain77 says:

    What is the falsification criterion of your model? It seems you are lacking a rigid criterion. Not to mention lacking experimental warrant that what you propose is even possible.

    “No descent at all. This is, I believe, your model.”

    I do not believe in UCD, but I do believe in diversification from an initially created “kind” by devolutionary processes, i.e. Behe’s “Darwin Devolves” and Sanford’s “Genetic Entropy”.

    I note, especially in the Cambrian, we are talking about gargantuan jumps in the fossil record. Your model is not a parsimonious explanation of such gargantuan jumps.

    Moreover, your genetic evidence is not nearly as strong as you seem to think it is. And even if it were, it is not nearly enough to explain ‘biological form’. For that you need to incorporate recent findings from quantum biology:

    How Quantum Mechanics and Consciousness Correlate – video (how quantum information theory and molecular biology correlate – 23 minute mark)
    https://www.youtube.com/watch?v=4f0hL3Nrdas

    Darwinian Materialism vs. Quantum Biology – Part II – video
    https://www.youtube.com/watch?v=oSig2CsjKbg

  45. 45
    bornagain77 says:

    correct time mark is 27 minute mark

    How Quantum Mechanics and Consciousness Correlate – video (how quantum information theory and molecular biology correlate – 27 minute mark)
    https://youtu.be/4f0hL3Nrdas?t=1634

  46. 46
    gpuccio says:

    Bornagain77:

    I quote myself:

    “That said, I am rather sure that you will stick to your model, model c). That’s fine for me. But I wanted to clarify as much as possible.”

    The only thing in my model that explains biological form is design. Maybe it is not enough, but it is certainly necessary.

    I want to be clear: I agree with you about the importance of consciousness and of quantum mechanics. But what has that to do with my argument?

    Do you believe that functional information is designed? I do. Design comes from consciousness. Consciousness interacts with matter through some quantum interface. That’s exactly what I believe.

    My model is not parsimonious and requires gargantuan jumps? Is it worse than the initial creation of kinds?

    However, for me we can leave it at that. As explained, I was not even implying CD in my initial discussion here.

  47. 47
    bornagain77 says:

    as to:

    1) There are signatures of neutral variation in the conserved sequences, roughly proportional to the evolutionary time split, that demonstrate that they are physically passed on. This is the single most important argument in favour of descent, and I am aware of no possible explanation of this fact outside of physical descent of those sequences.,,,
    OK, I believe there are many arguments against that model, but I will just state here the simplest: it does not explain fact 1)

    Again, the argument is not nearly as strong as you seem to think it is. In particular, you could say that the heart of this “shared error” argument is the idea that “lightning doesn’t strike twice.” The identical, harmful mutations, in different species, could not have arisen independently. Instead they must have arisen only once, and then were inherited from a common ancestor.

    The problem, of course, is that there is no reason to make this assumption. The logic made sense for written documents, but the species are not ancient manuscripts or homework assignments. They are species, and species are different.

    In fact repeated designs found in otherwise distant species are ubiquitous in biology. Listening to evolutionists one would think the species fall into an evolutionary pattern with a few minor exceptions here and there. But that is overwhelmingly false. From the morphological to the molecular level, repeated designs are everywhere, and they take on many different forms.

    Shared Errors: An Open Letter to BioLogos on the Genetic Evidence, Cont.
    Cornelius Hunter – June 1, 2016
    In recent articles (here, here and here) I have reviewed BioLogos Fellow Dennis Venema’s articles (here, here and here) which claimed that (1) the genomes of different species are what we would expect if they evolved, and (2) in particular the human genome is compelling evidence for evolution.

    Venema makes several confident claims that the scientific evidence strongly supports evolution. But as I pointed out Venema did not reckon with an enormous body of contradictory evidence. It was difficult to see how Venema could make those claims. Fortunately, however, we were able to appeal to the science. Now, as we move on to Venema’s next article, that will all change.

    In this article, Venema introduces a new kind of genetic evidence for evolution. Again, Venema’s focus is on, but not limited to, human evolution. Venema’s argument is that harmful mutations shared amongst different species, such as the human and chimpanzee, are powerful and compelling evidence for evolution. These harmful mutations disable a useful gene and, importantly, the mutations are identical.

    Are not such harmful, shared mutations analogous to identical typos in the term papers handed in by different students, or in historical manuscripts? Such typos are telltale indicators of a common source, for it is unlikely that the same typo would have occurred independently, by chance, in the same place, in different documents. Instead, the documents share a common source.

    Now imagine not one, but several such typos, all identical, in the two manuscripts. Surely the evidence is now overwhelming that the documents are related and share a common source.

    And just as a shared, identical, typos are a telltale indicator of a common source, so too must shared harmful mutations be proofs of a common ancestor. It is powerful and compelling evidence for common descent. It is, explains Venema, “one of the strongest pieces of evidence in favor of common ancestry between humans and chimpanzees (and other organisms).”

    There is only one problem. As we have explained so many times, the argument is powerful because the argument is religious. This isn’t about science.

    The Evidence Does Not Support the Theory

    The first hint of a problem should be obvious: harmful mutations are what evolution is supposed to kill off. The whole idea behind evolution is that improved designs make their way into the population via natural selection, and by the same logic natural selection (or purifying selection in this case) filters out the harmful changes. Therefore finding genetic sequence data that must be interpreted as harmful mutations weighs against evolutionary theory.

    Also, there is the problem that any talk of how a gene proves evolutionary theory is avoiding the problem that evolution fails to explain how genes arose in the first place. Evolution claiming proof in the details of gene sequences seems to be putting the cart before the horse.

    No Independent Changes

    You could say that the heart of this “shared error” argument is the idea that “lightning doesn’t strike twice.” The identical, harmful mutations, in different species, could not have arisen independently. Instead they must have arisen only once, and then were inherited from a common ancestor.

    The problem, of course, is that there is no reason to make this assumption. The logic made sense for written documents, but the species are not ancient manuscripts or homework assignments. They are species, and species are different.

    In fact repeated designs found in otherwise distant species are ubiquitous in biology. Listening to evolutionists one would think the species fall into an evolutionary pattern with a few minor exceptions here and there. But that is overwhelmingly false. From the morphological to the molecular level, repeated designs are everywhere, and they take on many different forms.

    The problem is that these repeated designs appear in species so distant that, according to evolutionary theory, their common ancestor could not have had that design. The human and squid have similar vision systems, but their purported common ancestor, a much simpler and more ancient organism, would have had no such vision system. Evolutionists are forced to say that incredibly complex designs must have arisen, yes, repeatedly and independently.

    And this must have occurred over and over in biology. It would be a challenge simply to document all of the instances in which evolutionists agreed to an independent origins. For evolutionists then to insist that similar designs in allied species can only be explained by common descent amounts to having it both ways.

    Bad Designs

    This “shared error” argument also relies on the premise that the structures in question are bad designs. In this case, the mutations are “harmful,” and so the genes are “broken.” And while that may well be true, it is a premise with a very bad track record. The history of evolutionary thought is full of claims of bad, inefficient, useless designs which, upon further research were found to be, in fact, quite useful. Simply from a history of science perspective, this is a dangerous argument to be making.

    Epicureanism

    The “shared error” argument is bad science and bad history, but it remains a very strong argument. This is because its strength does not come from science or history, but rather from religion. As I have explained many times, evolution is a religious theory, and the “shared error” argument is no different. This is why the scientific and historical problems don’t matter. Venema explains:

    The fact that different mammalian species, including humans, have many pseudogenes with multiple identical abnormalities (mutations) shared between them is a problem for any sort of non-evolutionary, special independent creation model.

    This is a religious argument, evolution as a referendum on a “special independent creation model.” It is not that the species look like they arose by random chance, it is that they do not look like they were created. Venema and the evolutionists are certain that God wouldn’t have directly created this world. There must be something between the Creator and creation — a Plastik Nature if you will. And if Venema and the evolutionists are correct in their belief then, yes, evolution must be true. Somehow, some way, the species must have arisen naturalistically.

    This argument is very old. In antiquity it drove the Epicureans to conclude the world must have arisen on its own by random motion. Today evolutionists say the same thing, using random mutations as their mechanism.

    Needed: An Audit

    Darwin’s book was loaded with religious arguments. They were the strength of his otherwise weak thesis, and they have always been the strength behind evolutionary thought. No longer can we appeal to the science, for it is religion that is doing the heavy lifting.

    Yet evolutionists claim the high ground of objective, empirical reasoning. Venema admits that some other geneticists do not agree with this “shared error” argument but, he warns, they do so “for religious reasons.”

    We have also seen this many times. Evolutionists make religious claims and literally in the next moment lay the blame on the other guy. This is the world according to the Warfare Thesis. We need an audit of our thinking.
    https://evolutionnews.org/2016/06/shared_errors_a/

    and

    In Arguments for Common Ancestry, Scientific Errors Compound Theoretical Problems
    Evolution News | @DiscoveryCSC
    May 16, 2016
    (6) Swamidass points to pseudogenes as evidence for common ancestry, even though many pseudogenes show evidence of function, including the vitellogenin pseudogene that Swamidass cites.

    Swamidass repeatedly cites Dennis Venema’s arguments for common ancestry based upon pseudogenes. However, as we’ve discussed here in the past, quite a few pseudogenes have turned out to be functional, and we’re discovering more all the time. It’s only recently that we’ve had the technology to study the functions of pseudogenes, so we are just at the beginning of doing so. While it’s true that there’s a lot about pseudogenes we still don’t know, an RNA Biology paper observes, “The study of functional pseudogenes is just at the beginning.” And it predicts that “more and more functional pseudogenes will be discovered as novel biological technologies are developed in the future.” The paper concludes that functional pseudogenes are “widespread.” Indeed, when we carefully study pseudogenes, we often do find function. One paper in Annual Review of Genetics tellingly observed: “Pseudogenes that have been suitably investigated often exhibit functional roles.”

    One of Swamidass’s central examples mirrors Dennis Venema’s argument that the vitellogenin pseudogene in humans demonstrates we’re related to egg-laying vertebrates like fish or reptiles. But a Darwin-doubting scientist was willing to dig deeper. Good genetic evidence now indicates that what Dennis Venema calls the “human vitellogenin pseudogene” is really part of a functional gene, as one technical paper by an ID-friendly creationist biologist has shown.
    https://evolutionnews.org/2016/05/in_arguments_fo/

  48. 48
    gpuccio says:

    Bornagain77:

    My argument is not about shared errors. It is about neutral mutations at neutral sites, grossly proportional to evolutionary split times. It is about the ka/ks ratio and the saturation of neutral sites after a few hundred million years. I have made the argument in great detail in the past, with examples, but I have no intention to repeat all the work now.
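
    To make the quantity at issue concrete, here is a minimal, illustrative sketch of a Ka/Ks (dN/dS)-style calculation in Python, in the spirit of the Nei–Gojobori counting approach with a Jukes–Cantor correction. The counts are hypothetical and this is not gpuccio's actual analysis pipeline; it only shows the kind of ratio the argument refers to.

    import math

    def jukes_cantor(p):
        # Correct an observed proportion of differing sites for multiple hits.
        if p >= 0.75:
            return float("inf")  # saturation: the neutral signal is effectively lost
        return -0.75 * math.log(1.0 - (4.0 / 3.0) * p)

    def ka_ks(nonsyn_diffs, syn_diffs, nonsyn_sites, syn_sites):
        ka = jukes_cantor(nonsyn_diffs / nonsyn_sites)  # substitutions per nonsynonymous site
        ks = jukes_cantor(syn_diffs / syn_sites)        # substitutions per synonymous site
        return ka, ks, (ka / ks if ks > 0 else float("inf"))

    # Hypothetical counts for a pair of orthologous coding sequences:
    ka, ks, ratio = ka_ks(nonsyn_diffs=12, syn_diffs=95, nonsyn_sites=700, syn_sites=300)
    print(f"Ka = {ka:.4f}, Ks = {ks:.4f}, Ka/Ks = {ratio:.3f}")  # Ka/Ks << 1 suggests purifying selection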

    By the way, I would be cautious in accepting everything that Cornelius Hunter says, as you seem to do. I agree with him many times. But many other times I fully disagree.

  49. 49
    bornagain77 says:

    “I would be cautious in accepting everything that Cornelius Hunter says, as you seem to do. I agree with him many times. But many other times I fully disagree.”

    Like when he contradicts you? 🙂

    Though you tried to downplay it, your argument from supposedly ‘neutral variations’ is VERY similar to the shared error argument. As such, for reasons listed above, it is not nearly as strong as you seem to presuppose.

    It is apparent that you believe the variations were randomly generated and therefore you are basically claiming that “lightning doesn’t strike twice”, which is exactly the argument that Dr. Hunter critiqued.

    Moreover, if anything, we now have far more evidence of mutations being ‘directed’ than we do of them being truly random.

    You said you could think of no other possible explanation. I hold that directed mutations are an ‘other possible explanation’ that is far more parsimonious with the overall body of evidence than your explanation of a Designer, i.e. God, creating a brand new species without bothering to correct supposed neutral variations and/or supposed shared errors.

  50. 50
    gpuccio says:

    Bornagain77:

    I disagree with Cornelius Hunter when I think he is wrong. In that sense, I treat him like anyone else. You seem to believe that he is always right. I don’t. Many times I have found that he is wrong in what he says.

    And no, my argument about neutral variation has nothing to do with the argument from shared errors, or with the idea that “lightning doesn’t strike twice”. My argument is about differences, not similarities. I think you don’t understand it. But that’s not a problem.

  51. 51
    bornagain77 says:

    No, I do not think Dr. Cornelius Hunter is ALWAYS right. But I certainly think he is right in his critique of Swamidass. Whereas I don’t think you are always wrong. I just think you are, in this instance, severely mistaken in one or more of your assumptions behind your belief in common descent.

    Your model is, from what I can tell, severely convoluted. If you presuppose randomness in your model at any point prior to the design input from God to create a new family of species, that is one false assumption that would undermine your claim. I can provide references if need be.

  52. 52
    gpuccio says:

    To all:

    As usual, the levels of regulation and crosstalk of this NF-kB system grow each time I make a Pubmed search.

    We are all interested, of course, in long non-coding RNAs. Well, this paper is about their role in NF-kB signaling:

    Lnc-ing inflammation to disease

    https://www.ncbi.nlm.nih.gov/pubmed/28687714

    Abstract
    Termed ‘master gene regulators’ long ncRNAs (lncRNAs) have emerged as the true vanguard of the ‘noncoding revolution’. Functioning at a molecular level, in most if not all cellular processes, lncRNAs exert their effects systemically. Thus, it is not surprising that lncRNAs have emerged as important players in human pathophysiology. As our body’s first line of defense upon infection or injury, inflammation has been implicated in the etiology of several human diseases. At the center of the acute inflammatory response, as well as several pathologies, is the pleiotropic transcription factor NF-κB. In this review, we attempt to capture a summary of lncRNAs directly involved in regulating innate immunity at various arms of the NF-κB pathway that have also been validated in human disease. We also highlight the fundamental concepts required as lncRNAs enter a new era of diagnostic and therapeutic significance.

    The paper, unfortunately, is not open access. It is interesting, however, that lncRNAs are now considered “master gene regulators”.

  53. 53
    gpuccio says:

    Bornagain77:

    OK, it’s too easy to be right in criticizing Swamidass! 🙂 (Just joking, just joking… but not too much)

    Just to answer your observations about randomness: I think that most mutations are random, unless they are guided by design. I am not sure that I understand what your point is. Do you believe they are guided? I also believe that some mutations are guided, but that is a form of design.

    If they are not guided, how can you describe the system? If you cannot describe it in terms of necessity (and I don’t think you can), some probability distribution is the only remaining option. Again, I don’t understand what you really mean.

    But of course the mutations (if they are mutations) that generate new functional information are not random at all. They must be guided, or intelligently selected.

    As you know, I cannot debate God in this context. I can only do what ID theory allows us to do: recognize events where a design inference is absolutely (if you allow the word) warranted.

  54. 54
    gpuccio says:

    Bornagain77:

    Moreover, the mechanisms described by Behe in Darwin Devolves are the known mechanisms of NS. They can certainly create some diversification, but essentially they give limited advantages in very special contexts, and they are essentially very simple forms of variation. They certainly cannot explain the emergence of new species, least of all the emergence of new complex functional information, like new functional proteins.

    So, do you believe that all relevant functional information is generated when “kinds” are created? And when would that happen?

    Just to understand.

  55. 55
    bornagain77 says:

    Gp states

    I think that most mutations are random,

    And yet the vast majority of mutations are now known to be ‘directed’

    How life changes itself: the Read-Write (RW) genome – 2013
    Excerpt: Research dating back to the 1930s has shown that genetic change is the result of cell-mediated processes, not simply accidents or damage to the DNA. This cell-active view of genome change applies to all scales of DNA sequence variation, from point mutations to large-scale genome rearrangements and whole genome duplications (WGDs). This conceptual change to active cell inscriptions controlling RW genome functions has profound implications for all areas of the life sciences.
    http://www.ncbi.nlm.nih.gov/pubmed/23876611

    WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? Fully Random Mutations – Kevin Kelly – 2014
    Excerpt: What is commonly called “random mutation” does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it.
    On the contrary, there’s much evidence that genetic mutation vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, and as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.
    http://edge.org/response-detail/25264

    Duality in the human genome – November 28, 2014
    Excerpt: According to the researchers, mutations of genes are not randomly distributed between the parental chromosomes. They found that 60 percent of mutations affect the same chromosome set and 40 percent both sets. Scientists refer to these as cis and trans mutations, respectively. Evidently, an organism must have more cis mutations, where the second gene form remains intact. “It’s amazing how precisely the 60:40 ratio is maintained. It occurs in the genome of every individual – almost like a magic formula,” says Hoehe.
    http://medicalxpress.com/news/.....enome.html

    i.e. Directed mutations are ‘another possible explanation’.

    As to, “do you believe that all relevant functional information is generated when “kinds” are created? And when would that happen?”

    I believe in ‘top down’ creation of ‘kinds’, with genetic entropy (as outlined by Sanford and Behe) following afterwards. As to exactly where that line should be, Behe has recently revised his estimate:

    “I now believe it (the edge of evolution) is much deeper than the level of class. I think it actually goes down to the level of family”
    Michael Behe: Darwin Devolves – video – 2019
    https://www.youtube.com/watch?v=zTtLEJABbTw
    In this bonus footage from Science Uprising, biochemist Michael Behe discusses his views on the limits of Darwinian explanations and the evidence for intelligent design in biology.

    I note that my model is Theologically modest in that I hold to traditional concepts of the omniscience of God and God creating ‘kinds’ that reproduce after themselves, whereas, humorously, your model is all over the place Theologically speaking.

    Your model, Theologically speaking, humorously reminds me of this old Johnny Cash song:

    JOHNNY CASH – ONE PIECE AT A TIME – CADILLAC VIDEO
    https://www.youtube.com/watch?v=Hb9F2DT8iEQ

  56. 56
    gpuccio says:

    Bornagain77:

    Most mutations are random. There can be no doubt about that. Of course, that does not exclude that some are directed. A directed mutation is an act of design.

    I perfectly agree with Behe that the level of necessary design intervention is at least at the family level.

    The three quotes you give have nothing to do with directed mutations and design. In particular, the author of the second one is frankly confused. He writes:

    Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it.

    On the contrary, there’s much evidence that genetic mutation vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, and as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.

    This is simple ignorance. The existence of patterns does not mean that a system is not probabilistic. It just means that there are also necessity effects.

    He makes his error clear when he says:

    “Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.”

    Now, “a higher chance” is of course a probabilistic statement. A random distribution is not a distribution where all events have the same probability of happening. That is called a uniform probability distribution. If some events (like mutations near a place where mutations have already occurred) have a higher probability of occurring, that is still a random distribution, one where the probability of the events is not uniform.

    Things become even worse. He writes:

    “While we can’t say mutations are random, we can say there is a large chaotic component, just as there is in the throw of a loaded dice. But loaded dice should not be confused with randomness because over the long run—which is the time frame of evolution—the weighted bias will have noticeable consequences.”

    But of course a loaded die is a random system. Let’s say that the die is loaded so that 1 has a higher probability of occurring. So the probabilities of the six possible events, instead of being all 1/6 (uniform distribution), are, for example, 0.2 for 1 and 0.16 for each of the other outcomes.

    So, the die is loaded. And so? Isn’t that a random system?

    Of course it is. Each event is completely probabilistic: we cannot anticipate it with a necessity rule. But the outcome 1 is more probable than the others.
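
    As a minimal sketch of this point (using the illustrative probabilities from the example above, 0.2 for 1 and 0.16 for each of the other faces), one can simulate the loaded die and see that the outcomes remain random draws from a non-uniform distribution:

    import random
    from collections import Counter

    # Loaded die: the outcome probabilities are not uniform (0.2 for "1", 0.16 for
    # the rest), yet every throw is still a random draw from a probability
    # distribution -- no rule predicts the next throw.
    faces = [1, 2, 3, 4, 5, 6]
    weights = [0.20, 0.16, 0.16, 0.16, 0.16, 0.16]  # non-uniform, but sums to 1

    throws = random.choices(faces, weights=weights, k=100_000)
    freqs = Counter(throws)
    for face in faces:
        print(face, round(freqs[face] / len(throws), 3))  # "1" comes up near 0.20, the others near 0.16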

    That article is simply a pile of errors and confusion. Whoever understands something about probability can easily see that.

    Unfortunately you tend to quote a lot of things, but it seems that you do not always evaluate them critically.

    Again, I propose: let’s leave it at that. This discussion does not seem to lead anywhere.

  57. 57
    bornagain77 says:

    I, of course, disagree with you.

    The third article,,, “According to the researchers, mutations of genes are not randomly distributed between the parental chromosomes. They found that 60 percent of mutations affect the same chromosome set and 40 percent both sets.,,, “It’s amazing how precisely the 60:40 ratio is maintained. It occurs in the genome of every individual – almost like a magic formula,” says Hoehe.”

    That is fairly straightforward. And again, Directed mutations are ‘another possible explanation’. Your ‘convoluted’ model is not nearly as robust as you have presupposed.

  58. 58
    hazel says:

    Good post at 56, gp.

    Also, it is my understanding that when someone says “mutations are random” they mean there is no causal connection between the mutation and whatever eventual effects and possible benefits it might have for the organism. “Mutations are random” doesn’t refer to the causes of the mutations, I don’t think.

  59. 59
    ET says:

    gpuccio:

    Most mutations are random. There can be no doubt about that.

    I doubt it. I would say most are directed and only some are happenstance occurrences. See Spetner, “Not By Chance”, 1997. Also Shapiro, “Evolution: A View From the 21st Century”. And:

    He [the Designer] indeed seems to have “carefully crafted” information in His species giving them the ability to respond to environmental stimuli to alter their own genome to adapt to new environments. He then evidently let them wander where they will with the ability to adapt.- Dr. Lee Spetner “the Evolution Revolution” p 108

    Just think about it- a Designer went through all of the trouble to produce various living organisms and place them on a changing planet in a changing universe. But the Designer is then going to leave it mostly to chance how those organisms cope with the changes?

    It just makes more sense that organisms were intelligently designed with the ability to adapt and evolve, albeit with genetic entropy creeping in.

  60. 60
    ET says:

    “Mutations are random” means they are accidents, errors and mistakes. They were not planned and just happened to happen due to the nature of the process. Yes, x-rays may have caused the damage that produced the errors but the changes were spontaneous and unpredictable as to which DNA sequences, if any, would have been affected.

  61. 61
    bornagain77 says:

    Excellent point at 59 ET. Isn’t Spetner’s model called the “Non-Random Evolutionary Hypothesis”?

    Spetner goes through many examples of non-random evolutionary changes that cannot be explained in a Darwinian framework.
    https://evolutionnews.org/2014/10/the_evolution_r/

    Gloves Off — Responding to David Levin on the Nonrandom Evolutionary Hypothesis
    Lee M. Spetner
    September 26, 2016
    In the book, I present my nonrandom evolutionary hypothesis (NREH) that accounts for all the evolution that has been actually observed and which is not accounted for by modern evolutionary theory (the Modern Synthesis, or MS). Levin ridicules the NREH but does not refute it. There is too much evidence for it. A lot of evidence is cited in the book, and there is considerably more that I could add. He ridicules what he cannot refute.
    Levin calls the NREH Lamarckian. But it differs significantly from Lamarkism. Lamarck taught that an animal acquired a new capability — either an organ or a modification thereof — if it had a need for it. He offered, however, no mechanism for that capability. Because Lamarck’s theory lacked a mechanism, the scientific community did not accept it. The NREH, on the other hand, teaches that the organism has an endogenous mechanism that responds to environmental stress with the activation of a transposable genetic element and often leads to an adaptive response. How this mechanism arose is obscure at present, but its operation has been verified in many species.,,,
    https://evolutionnews.org/2016/09/gloves_off_-_r/

  62. 62
    ET says:

    Thank you, bornagain77. And yes- the non-random evolutionary hypothesis featuring built-in responses to environmental cues.

  63. 63
    gpuccio says:

    Hazel:

    In a strict sense, a random system is one where the events cannot be anticipated by a definite law, but can be reasonably described by a probability distribution.

    Of course, it is absolutely true that in that case “there is no causal connection between the mutation and whatever eventual effects and possible benefits it might have for the organism”. I would describe that aspect saying that the system, as a whole, is blind to those results.

    Randomness is a concept linked to our way of describing the system. Random systems, like the tossing of a coin, are in essence deterministic, but we have no way to describe them in a deterministic way.

    The only exception could be the intrinsic randomness of the wave function collapse in quantum mechanics, in the interpretations where it is really considered intrinsic.

  64. 64
    gpuccio says:

    ET:

    “I doubt it. I would say most are directed and only some are happenstance occurrences”.

    I beg to differ. Most mutations that we observe, maybe all, are random.

    Of course, if the functional information we observe in organisms was generated by mutations, those mutations were probably guided. But we cannot observe that process directly, or at least I am not aware that it has been observed.

    Instead, we observe a lot of more or less spontaneous mutations that are really random. Many of them generate diseases, often in real time.

    Radiation and toxic substances dramatically increase the rate of random mutations, and the frequency of certain diseases or malformations. We know that very well. And yet, no law can anticipate when and how those mutations will happen. We just know that they are more common. The system is still probabilistic, even if we can detect the effect of specific causes.

    I don’t know Spetner in detail, but it seems that he believes that most functional information derives from some intelligent adaptation of existing organisms.

    Again, I beg to differ. It is certainly true that “all the evolution that has been actually observed and which is not accounted for by modern evolutionary theory” needs some explanation, but the explanation is active design, not adaptation.

    I am not saying that adaptation does not exist, or does not have some important role. We can see good examples, for example in bacteria (the plasmid system, just to mention one instance).

    Of course a complex algorithm can generate some new information by computing new data that come from the environment, but the ability to adapt depends on the specific functional information that is already in the system, and therefore has very strict limitations.

    Adaptation can never generate a lot of new original functional information.

    Let’s take a simple example. ATP synthase, again.

    There is no adaptation system in bacteria that could have found the specific sequences of the many complex components of the system. It is completely out of the question.

    And yet, ATP synthase has existed in bacteria for billions of years, and is still largely similar in humans.

    This is of course the result of design, not adaptation. The same can be said for body plans, all complex protein networks, and I agree with Behe that families of organisms are already levels of complexity that scream design. Adaptation, even for an already complex organism, cannot in any way explain those things.

    It is true that the mutations we observe are practically always random. It is true that they are often deleterious, or neutral. More often neutral or quasi neutral. We know that. We see those mutations happen all the time.

    Achondroplasia, for example, which is the most common cause of dwarfism, is a genetic disease that (I quote from Wikipedia for simplicity):

    “is due to a mutation in the fibroblast growth factor receptor 3 (FGFR3) gene.[3] In about 80% of cases this occurs as a new mutation during early development.[3] In the other cases it is inherited from one’s parents in an autosomal dominant manner.”

    IOWs, in 80% of cases the disease is due to a new mutation, one that was not present in the parents.

    If you look at the Exac site:

    http://exac.broadinstitute.org/

    you will find the biggest database of variations in the human genome.

    Random mutations that generate neutral variation are facts. They can be observed, their rate can be measured with some precision. There is absolutely no scientific reason to deny that.
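
    As a back-of-the-envelope illustration (assumed round numbers, for illustration only, not gpuccio’s figures): with a commonly cited human germline mutation rate of roughly 1.2e-8 per base pair per generation and a diploid genome of about 6.4 billion base pairs, one expects each child to carry a few dozen brand-new mutations:

    # Assumed round numbers, for illustration only (estimates vary by study):
    rate_per_bp_per_generation = 1.2e-8   # human germline mutation rate, order of magnitude
    diploid_genome_bp = 6.4e9             # ~2 x 3.2e9 bp

    expected_de_novo = rate_per_bp_per_generation * diploid_genome_bp
    print(f"Expected de novo mutations per child: ~{expected_de_novo:.0f}")  # roughly 70-80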

    So, to sum up:

    a) The mutations we observe every day are random, often neutral, sometimes deleterious.

    b) The few cases where those mutations generate some advantage, as well argued by Behe, are cases where a loss of information in complex structures happens, by chance, to confer some advantage in specific environments (see antibiotic resistance). All those variations are simple. None of them generates any complex functional information.

    c) The few cases of adaptation by some active mechanism that are in some way documented are very simple too. Nylonase, for example, could be one of them. The ability of viruses to change at very high rates could be another one.

    d) None of those reasonings can help explain the appearance, throughout natural history, of new complex functional information, in the form of new functional proteins and protein networks, new body plans, new functions, new regulations. None of those reasonings can explain OOL, or eukaryogenesis, or the transition to vertebrates. None of them can even start to explain ATP synthase, or the immune system, or the nervous system in mammals. And so on, and so on.

    e) All these things can only be explained by active design.

    This is my position. This is what I firmly believe.

    That said, if you want, we can leave it at that.

  65. 65
    OLV says:

    GP @52:

    ” the levels of regulation and crosstalk of this NF-kB system grow each time I make a Pubmed search”

    Are you surprised? 🙂

    This crosstalk concept is very interesting indeed.

  66. 66
    gpuccio says:

    OLV:

    “Are you surprised?”

    No. 🙂

    But, of course, self-organization can easily explain all that! 🙂

  67. 67
    gpuccio says:

    OLV and all:

    This is another paper about lncRNAs and NF-kB:

    Long non-coding RNA: a versatile regulator of the nuclear factor-kB signalling circuit

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5343356/

    This is open access.

    SUMMARY

    The nuclear factor-kB (NF-kB) family of transcription factors play an essential role for the regulation of inflammatory responses, immune function and malignant transformation. Aberrant activity of this signalling pathway may lead to inflammation, autoimmune diseases and oncogenesis. Over the last two decades great progress has been made in the understanding of NF-kB activation and how the response is counteracted for maintaining tissue homeostasis. Therapeutic targeting of this pathway has largely remained ineffective due to the widespread role of this vital pathway and the lack of specificity of the therapies currently available. Besides regulatory proteins and microRNAs, long non-coding RNA (lncRNA) is emerging as another critical layer of the intricate modulatory architecture for the control of the NF-kB signalling circuit. In this paper we focus on recent progress concerning lncRNA-mediated modulation of the NF-kB pathway, and evaluate the potential therapeutic uses and challenges of using lncRNAs that regulate NF-kB activity.

  68. 68
    gpuccio says:

    OLV and all:

    Here is a database of known human lncRNAs:

    https://lncipedia.org/

    It includes, at present, data for 127,802 transcripts and 56,946 genes. A joy for the fans of junk DNA! 🙂

    Let’s look at one of these strange objects.

    MALAT-1 is one of the lncRNAs described in the paper at the previous post. Here is what the paper says:

    MALAT1
    Metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) is a highly conserved lncRNA whose abnormal expression is considered to correlate with the development, progression and metastasis of multiple cancer types. Recently we reported the role of MALAT1 in regulating the production of cytokines in macrophages. Using PMA-differentiated macrophages derived from the human THP1 monocyte cell line, we showed that following stimulation with LPS, a ligand for the innate pattern recognition receptor TLR4, MALAT1 expression is increased in an NF-kB-dependent manner. In the nucleus, MALAT1 interacts with both p65 and p50 to suppress their DNA binding activity and consequently attenuates the expression of two NF-kB-responsive genes, TNF-α and IL-6. This finding is in agreement with a report based on in silico analysis predicting that MALAT1 could influence NF-kB/RelA activity in the context of epithelial–mesenchymal transition. Therefore, in LPS-activated macrophages MALAT1 is engaged in the tight control of the inflammatory response through interacting with NF-kB, demonstrating for the first time its role in regulating innate immunity-mediated inflammation. As MALAT1 is capable of binding hundreds of active chromatin sites throughout the human genome, the function and mechanism of action so far uncovered for this evolutionarily conserved lncRNA may be just the tip of an iceberg.

    Emphasis mine, as usual.

    Now, if we look for MALAT-1 in the database linked above, we find 52 transcripts. The first one, MALAT1:1, has a size of 12819 nucleotides. Not bad! 🙂

    342 papers quoted about this one transcript.
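
    For readers who want to reproduce this kind of lookup locally, here is a hypothetical sketch (the file name is invented for illustration, assuming a local FASTA download of the transcript sequences) that tallies transcript lengths and picks out the MALAT1 entries:

    from collections import defaultdict

    lengths = defaultdict(int)
    current = None
    with open("lncipedia_transcripts.fasta") as fh:    # hypothetical local download
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                current = line[1:].split()[0]          # e.g. "MALAT1:1"
                lengths[current] = 0
            elif current is not None:
                lengths[current] += len(line)

    malat1 = {name: n for name, n in lengths.items() if name.startswith("MALAT1:")}
    print(len(malat1), "MALAT1 transcripts; MALAT1:1 length:", malat1.get("MALAT1:1"))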

  69. 69
    bornagain77 says:

    Gp adamantly states,

    I beg to differ. Most mutations that we observe, maybe all, are random.

    And yet Shapiro adamantly begs to differ,,,

    “It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant non-random patterns”
    James Shapiro – Evolution: A View From The 21st Century – (Page 82)

    Noble also begs to differ

    Physiology is rocking the foundations of evolutionary biology – Denis Noble – 17 MAY 2013
    Excerpt: The ‘Modern Synthesis’ (Neo-Darwinism) is a mid-20th century gene-centric view of evolution, based on random mutations accumulating to produce gradual change through natural selection.,,, We now know that genetic change is far from random and often not gradual.,,,
    http://onlinelibrary.wiley.com.....4/abstract
    – Denis Noble – President of the International Union of Physiological Sciences

    Richard Sternberg also begs to differ

    Discovering Signs in the Genome by Thinking Outside the BioLogos Box – Richard Sternberg – March 17, 2010
    Excerpt: The scale on the x-axis is the same as that of the previous graph–it is the same 110,000,000 genetic letters of rat chromosome 10. The scale on the y-axis is different, with the red line in this figure corresponding to the distribution of rat-specific SINEs in the rat genome (i.e., ID sequences). The green line in this figure, however, corresponds to the pattern of B1s, B2s, and B4s in the mouse genome….
    *The strongest correlation between mouse and rat genomes is SINE linear patterning.
    *Though these SINE families have no sequence similarities, their placements are conserved.
    *And they are concentrated in protein-coding genes.,,,
    ,,, instead of finding nothing but disorder along our chromosomes, we are finding instead a high degree of order.
    Is this an anomaly? No. As I’ll discuss later, we see a similar pattern when we compare the linear positioning of human Alus with mouse SINEs. Is there an explanation? Yes. But to discover it, you have to think outside the BioLogos box.
    http://www.evolutionnews.org/2.....32961.html

    Beginning to Decipher the SINE Signal – Richard Sternberg – March 18, 2010
    Excerpt: So for a pure neutralist model to account for the graphs we have seen, ~300,000 random mutation events in the mouse have to match, somehow, the ~300,000 random mutation events in the rat.
    What are the odds of that?
    http://www.evolutionnews.org/2.....32981.html

    Another paper along that line,

    Recent comprehensive sequence analysis of the maize genome now permits detailed discovery and description of all transposable elements (TEs) in this complex nuclear environment. . . .
    The majority, perhaps all, of the investigated retroelement families exhibited non-random dispersal across the maize genome, with LINEs, SINEs, and many low-copy-number LTR retrotransposons exhibiting a bias for accumulation in gene-rich regions.
    http://journals.plos.org/plosg.....en.1000732

    and another paper

    PLOS Paper Admits To Nonrandom Mutation In Evolution – May 31, 2019
    Abstract: “Mutations drive evolution and were assumed to occur by chance: constantly, gradually, roughly uniformly in genomes, and without regard to environmental inputs, but this view is being revised by discoveries of molecular mechanisms of mutation in bacteria, now translated across the tree of life. These mechanisms reveal a picture of highly regulated mutagenesis, up-regulated temporally by stress responses and activated when cells/organisms are maladapted to their environments—when stressed—potentially accelerating adaptation. Mutation is also nonrandom in genomic space, with multiple simultaneous mutations falling in local clusters, which may allow concerted evolution—the multiple changes needed to adapt protein functions and protein machines encoded by linked genes. Molecular mechanisms of stress-inducible mutation change ideas about evolution and suggest different ways to model and address cancer development, infectious disease, and evolution generally.” (open access) – Fitzgerald DM, Rosenberg SM (2019) What is mutation? A chapter in the series: How microbes “jeopardize”the modern synthesis. PloS Genet 15(4): e1007995.
    https://uncommondescent.com/evolution/plos-paper-admits-to-nonrandom-mutation-in-evolution/

    And as Jonathan Wells noted, “I now know as an embryologist,,,Tissues and cells, as they differentiate, modify their DNA to suit their needs. It’s the organism controlling the DNA, not the DNA controlling the organism.”

    Ask an Embryologist: Genomic Mosaicism – Jonathan Wells – February 23, 2015
    Excerpt: humans have a “few thousand” different cell types. Here is my simple question: Does the DNA sequence in one cell type differ from the sequence in another cell type in the same person?,,,
    The simple answer is: We now know that there is considerable variation in DNA sequences among tissues, and even among cells in the same tissue. It’s called genomic mosaicism.
    In the early days of developmental genetics, some people thought that parts of the embryo became different from each other because they acquired different pieces of the DNA from the fertilized egg. That theory was abandoned,,,
    ,,,(then) “genomic equivalence” — the idea that all the cells of an organism (with a few exceptions, such as cells of the immune system) contain the same DNA — became the accepted view.
    I taught genomic equivalence for many years. A few years ago, however, everything changed. With the development of more sophisticated techniques and the sampling of more tissues and cells, it became clear that genetic mosaicism is common.
    I now know as an embryologist,,,Tissues and cells, as they differentiate, modify their DNA to suit their needs. It’s the organism controlling the DNA, not the DNA controlling the organism.
    http://www.evolutionnews.org/2.....93851.html

    And as ET pointed out, Gp’s presupposition also makes no sense theologically speaking:

    Just think about it- a Designer went through all of the trouble to produce various living organisms and place them on a changing planet in a changing universe. But the Designer is then going to leave it mostly to chance how those organisms cope with the changes?
    It just makes more sense that organisms were intelligently designed with the ability to adapt and evolve, albeit with genetic entropy creeping in.

  70. 70
    ET says:

    Evolution by means of intelligent design is active design. Genetic changes don’t have to produce some perceived advantage in order to be directed. And if genetic entropy has interfered with the directed mutation function then that could also explain what you observe.

    And yes, ATP synthase was definitely intelligently designed. Why can’t it be that it was intelligently designed via some sort of real genetic algorithm?

  71. 71
    ET says:

    And those polar bears. The change in the structure of the fur didn’t happen by chance. So either the original population(s) of bears already had that variation or the information required to produce it. With that information being teased out due to the environmental changes and built-in responses to environmental cues.

  72. 72

    Upright BiPed says:
    Another excellent post GP, thank you for writing it. Reading thru it now.

    Once again, where are your anti-ID critics?

  73. 73
    gpuccio says:

    Upright BiPed:

    Hi UB, nice to hear from you! 🙂

    “Once again, where are your anti-ID critics?”

    As usual, they seem to have other interests. 🙂

    Luckily, some friends are ready to be fiercely antagonistic! 🙂 Which is good, I suppose…

  74. 74
    gpuccio says:

    ET at #70:

    Evolution by means of intelligent design is active design.

    Yes, it is.

    Genetic changes don’t have to produce some perceived advantage in order to be directed.

    Of course. That’s exactly my point. See my post #43, this statement about my model (model b):

    “There is no need for functional intermediates in the fossil record or in the genomes. What happens in the lab does not leave traces. We do not need big intermediate populations to be expanded by positive NS, to gain new huge probabilistic resources (as in model a). We just need a few samples, a few intermediates, in a limited time and space. There is no reason to expect any relevant trace from that process.”

    Emphasis added.

    And if genetic entropy has interfered with the directed mutation function then that could also explain what you observe.

    In my model, it does. You see, for anything to explain the differences created in time by neutral variation (my point 1 at post #43, what I call “signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split”), you definitely need physical continuity between different organisms. Otherwise, nothing can be explained. IOWs, neutral signatures accumulate as differences as time goes on, where there is physical continuity. Creation or design from scratch for each organism cannot explain that. This is the argument that BA seems not to understand.
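
    A toy numerical sketch of this point (the per-site rate below is an arbitrary illustrative value, not gpuccio’s calibration): under neutrality the expected substitutions per neutral site grow roughly linearly with the time since the split (d ≈ 2µt for two diverging lineages), while the observable fraction of differing sites saturates toward 75% under a Jukes–Cantor model, which is why very old splits stop carrying a usable neutral signature:

    import math

    mu = 1e-9  # assumed neutral substitution rate per site per year (illustrative only)

    for t_million_years in (10, 50, 100, 200, 400, 800, 1600):
        t = t_million_years * 1e6
        d = 2 * mu * t                                  # expected substitutions per neutral site
        observed = 0.75 * (1 - math.exp(-4 * d / 3))    # expected observable difference (Jukes-Cantor)
        print(f"{t_million_years:>5} My   d = {d:.2f}   observed difference = {observed:.2f}")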

    And yes, ATP synthase was definitely intelligently designed.

    Definitely.

    Why can’t it be that it was intelligently designed via some sort of real genetic algorithm?

    Because, of course, the algorithm would be far more complex than the result. And where is that algorithm? There is absolutely no trace of it.

    It is no good to explain things with mere imagination. We need facts.

    Look, we are dealing with functional information here, not with some kind of pseudo-order that can be generated by some simple necessity laws coupled to random components. IOWs, this is not something that self-organization can even start to do.

    Of course, an algorithm could do it. If I had a super-computer already programmed with all possible knowledge about biochemistry, and the computing ability to anticipate, top down, how protein sequences will fold and what biochemical activity they will have, and with a definite plan to look for some outcome that can transform a proton gradient into ATP, possibly with at least a strong starting plan that it should be something like a water mill, then yes, maybe that super-computer could, in time, elaborate some relatively efficient project on that basis. Of course, that whole apparatus would be much more complex than what we want to obtain. After all, ATP synthase has only a few thousand bits of functional information. Here we are discussing probably many gigabytes for the algorithm.
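
    A rough, hedged illustration of this size comparison (not necessarily the metric gpuccio himself uses): if a protein function required on the order of 500 amino acid positions to be tightly constrained, a naive estimate of its functional information would be 500 × log2(20) ≈ 2,160 bits, against billions of bits for even a one-gigabyte design algorithm:

    import math

    constrained_positions = 500                 # assumed for illustration
    bits_per_position = math.log2(20)           # ~4.32 bits per fully constrained residue
    protein_bits = constrained_positions * bits_per_position

    algorithm_bits = 8e9                        # one gigabyte expressed in bits
    print(f"Protein target: ~{protein_bits:.0f} bits")          # a few thousand bits
    print(f"Hypothetical design algorithm: {algorithm_bits:.0e} bits or more")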

    That’s the problem, in the end. Functional information can be generated only in two ways:

    a) Direct design by a conscious, intelligent, purposeful agent. Of course that agent may have to use previous data or knowledge, but the point is that its cognitive abilities and its ability to have purposes will create those shortcuts that no non-design system can generate.

    b) Indirect design through some designed system complex enough to include a good programming of how to obtain some results. As said, that can work, but it has severe limitations. The designed system is already very complex, and the further functional information that can be obtained is usually very limited and simple. Why? Because the system, not being open to a further intervention of consciousness and intelligence, can only do what it has been programmed to do. Nothing else. The purposes are only those purposes that have already been embedded at the beginning. Nothing else.

    The computations, all the apparently “intelligent” activities, are merely passive executions of intelligent programs already designed. They can do what they have been programmed to do, but nothing else.

    So, let’s say that I want to program a system that can find a good solution for ATP synthase. OK, I can do that (not me, of course, let’s say some very intelligent designer). But I must already be conscious that I will need ATP synthase, or something like that. I must put that purpose in my system. And of course all the knowledge and power needed to do what I want it to do.

    Or, of course, I can just design ATP synthase and introduce that design in the system (that I have already designed myself some time ago) if and when it is needed.

    Which is more probably true?

    Again, facts and only facts must guide us.

    ATP synthase, in a form very similar to what we observe today, was already present billions of years ago, when reasonably only prokaryotes were living on our planet.

    Was a complex algorithm capable of that kind of knowledge and computation present on our planet before the appearance of ATP synthase? In what form? What facts do we have that support such an idea?

    The truth is very simple. For all that we can know and reasonably infer, at some time very early after our planet became compatible with any form of life, ATP synthase appeared, very much similar to what it is today, in some bacteria-like form of life. There is nothing to suggest, support, or even make it credible or reasonable that any complex algorithm capable of computing the necessary information for it was present at that time. No such algorithm, or any trace of it, exists today. If we wanted to compute ATP synthase today, we would not have the faintest idea of how to do it.

    These are the simple facts. Then, anyone is free to believe as he likes. As for me, I stick to my model, and am very happy with it.

  75. 75
    gpuccio says:

    ET at #71:

    As far as I can understand, the divergence of polar bears is probably simple enough to be explained as adaptation under environmental constraints. This is not ATP synthase. Not at all.

    I don’t know the topic well, so mine is just an opinion. However, bears are part of the family Ursidae, so brown bears and polar bears are part of the same family. So, if we stick to Behe’s very reasonable idea that family is probably the level which still requires design, this is a within-family divergence.

  76. 76
    bornagain77 says:

    Gp claims:

    neutral signatures accumulate as differences as time goes on, where there is physical continuity. Creation or design from scratch for each organism cannot explain that. This is the argument that BA seems not to understand.

    To be clear, Gp is arguing for a very peculiar, even bizarre, form of UCD where God reuses stuff and does not create families de novo (which is where Behe now puts the edge of evolution). Hence my reference to Johnny Cash’s song “One Piece at a Time”.

    Earlier, Gp also claimed that he could think of no other possible explanation for the data. I pointed out that ‘directed’ mutations are another possible explanation. Gp then falsely claimed that there is no such thing as directed mutations. Specifically he claimed, “Most mutations that we observe, maybe all, are random.”

    Gp, whether he accepts it or not, is wrong in his claim that “maybe all mutations are random”. Thus, Gp’s “Johnny Cash” model is far weaker than he imagines it to be.

    JOHNNY CASH – ONE PIECE AT A TIME – CADILLAC VIDEO
    https://www.youtube.com/watch?v=Hb9F2DT8iEQ

  77. 77
    gpuccio says:

    Bornagain77:

    “I note that my model is Theologically modest in that I hold to traditional concepts of the omniscience of God and God creating ‘kinds’ that reproduce after themselves, whereas, humorously, your model is all over the place Theologically speaking.”

    “And as ET pointed out, Gp’s presupposition also makes no sense theologically speaking”

    I have ignored this kind of objection, but as you (and ET) insist, I will say just a few words.

    I believe that you are theologically committed in your discussions about science. This is not a big statement, I suppose, because it is rather obvious in all that you say. And it is not a criticism, believe me. It is your strong choice, and I appreciate people who make strong choices.

    But, of course, I don’t feel obliged to share those choices. You see, I too make my strong choices, and I like to remain loyal to them.

    One of my strong choices is that my philosophy of science (and my theology, too) tell me that my scientific reasonings must not (as far as it is humanly possible) be influenced by my theology. In any way. So, I really strive to achieve that (and it’s not easy).

    This is, for me, an important question of principle. So, I will not answer any argument that makes any reference to theology, or even simply to God, in a scientific discussion. Never.

    So, excuse me if I will go on ignoring that kind of remarks from you or others. It’s not out of discourtesy. It’s to remain loyal to my principles.

  78. 78
    gpuccio says:

    Bornagain77 at #76:

    For “God reusing stuff”, see my previous post.

    For the rest, mutations and similar, see my next post (I need a little time to write it).

  79. 79
    EugeneS says:

    Upright Biped,

    An off-topic. You have mail as of a long time ago 🙂 I apologise for my long silence. I have changed jobs twice and have been quite under stress. Because of this I was not checking my non-business emails regularly. Hoping to get back to normal.

  80. 80
    gpuccio says:

    EugeneS:

    Hi, Eugene,

    Welcome anyway to the discussion, even for an off-topic! 🙂

  81. 81
    bornagain77 says:

    Basically I believe one of the main flaws in Gp’s model is that he believes that the genome is basically static and that almost all the changes to the genome that do occur are the result of randomness (save for when God intervenes at the family level to introduce ‘some’ new information whilst saving parts of the genome that have accumulated changes due to randomness).

    Yet the genome is now known to be dynamic and not to be basically static.

    Neurons constantly rewrite their DNA – Apr. 27, 2015
    Excerpt: They (neurons) use minor “DNA surgeries” to toggle their activity levels all day, every day.,,,
    “We used to think that once a cell reaches full maturation, its DNA is totally stable, including the molecular tags attached to it to control its genes and maintain the cell’s identity,” says Hongjun Song, Ph.D.,, “This research shows that some cells actually alter their DNA all the time, just to perform everyday functions.”,,,
    ,,, recent studies had turned up evidence that mammals’ brains exhibit highly dynamic DNA modification activity—more than in any other area of the body,,,
    http://medicalxpress.com/news/.....e-dna.html

    A Key Evidence for Evolution Involving Mobile Genetic Elements Continues to Crumble – Cornelius Hunter – July 13, 2014
    Excerpt: The biological roles of these place-jumping, repetitive elements are mysterious.
    They are largely viewed (by Darwinists) as “genomic parasites,” but in this study, researchers found the mobile DNA can provide genetic novelties recruited as certain population-unique, functional enrichments that are nonrandom and purposeful.
    “The first shocker was the sheer volume of genetic variation due to the dynamics of mobile elements, including coding and regulatory genomic regions, and the second was amount of population-specific insertions of transposable DNA elements,” Michalak said. “Roughly 50 percent of the insertions were population unique.”
    http://darwins-god.blogspot.co.....lving.html

    Contrary to expectations, genes are constantly rearranged by cells – July 7, 2017
    Excerpt: Contrary to expectations, this latest study reveals that each gene doesn’t have an ideal location in the cell nucleus. Instead, genes are always on the move. Published in the journal Nature, researchers examined the organisation of genes in stem cells from mice. They revealed that these cells continually remix their genes, changing their positions as they progress though different stages.
    https://uncommondescent.com/intelligent-design/researchers-contrary-to-expectations-genes-are-constantly-rearranged-by-cells/

    And again, DNA is now, contrary to what is termed to be ‘the central dogma’, far more passive than it was originally thought to be. As Denis Noble stated, “The genome is an ‘organ of the cell’, not its dictator”

    “The genome is an ‘organ of the cell’, not its dictator”
    – Denis Noble – President of the International Union of Physiological Sciences

    Another main flaw in Gp’s ‘Johnny Cash model’, as has been pointed out already, is that he assumes ‘randomness’ to be a defining notion for changes to the genome. This is the same assumption that Darwinists make. In fact, Darwinists, on top of that, also falsely assume ‘random thermodynamic jostling’ to be a defining attribute of the actions within a cell.

    Yet, advances in quantum biology have now overturned that foundational assumption of Darwinists. The first part of the following video recalls an incident where ‘Harvard BioVisions’ tried to invoke ‘random thermodynamic jostling’ within the cell to undermine the design inference. (i.e. the actions of the cell, due to advances in quantum biology, are now known to be far more resistant to ‘random background noise’ than Darwinists had originally presupposed.)

    Darwinian Materialism vs. Quantum Biology – Part II – video
    https://www.youtube.com/watch?v=oSig2CsjKbg

    Of supplemental note:

    How Quantum Mechanics and Consciousness Correlate – video (how quantum information theory and molecular biology correlate – 27 minute mark)
    https://youtu.be/4f0hL3Nrdas?t=1634

  82. 82
    bornagain77 says:

    Gp in 77 tried to imply he was completely theologically neutral. That is impossible. Besides science itself being impossible without basic Theological presuppositions (about the rational intelligibility of the universe and of our minds to comprehend it), any discussion of origins necessarily entails Theological overtones. It simply can’t be avoided. Gp is trying to play politics instead of being honest. Perhaps next GP will try to claim that he is completely neutral in regards to breathing air. 🙂

  83. 83
    ET says:

    gpuccio:

    Because, of course, the algorithm would be by far more complex than the result. And where is that algorithm? there is absolutely no trace of it.

    Yes, the algorithm would be more complex than the structure. So what? Where is the algorithm? With the Intelligent Designer. A trace of it is in the structure itself.

    The algorithm attempts to answer the question of how ATP synthase was intelligently designed. Of course an omnipotent intelligent designer wouldn’t require that and could just design one from its mind.

  84. 84
    gpuccio says:

    Bornagain77 at #69 and #76 (and to all):

    OK, so some people apparently disagree with me. I will try to survive.

    But I would insist on the “apparently”, because again, IMO, you introduce some confusion in your quotes and their interpretation.

    Let’s see. At #69, you make six quotes (excluding the internal reference to ET):

    1. Shapiro.

    I don’t think I can comment on this one. The quote is too short, and I do not have the book to check the context. However, the reference to “genome change operator” is not very clear. Moreover, the reference to “statistically significant non-random patterns” could simply point to some necessity effect that modifies the probability distribution, as in the case of the loaded die. As explained, that does not make the system “non-random”. And that has nothing to do with guidance, design or creation.

    2. Noble.
    That “genetic change is far from random and often not gradual” is obvious. It is not random because it is designed, and it is well known that it is not gradual. I perfectly agree. That has nothing to do with random mutations, because design is of course not implemented by random mutations. This is simply a criticism of model a.

    Another point is that some epigenetic modification can be inherited. Again, I have nothing against that. But of course I don’t believe that such a mechanism can create complex functional information and body plans. Neither do you, I believe. You say you believe in the “creation of kinds”.

    3. and 4. Sternberg and the PLOS paper.

    These are about transposons. I will address this topic specifically at the end of this post.

    5. The other PLOS paper.

    Here is the abstract:

    Abstract
    Mutations drive evolution and were assumed to occur by chance: constantly, gradually, roughly uniformly in genomes, and without regard to environmental inputs, but this view is being revised by discoveries of molecular mechanisms of mutation in bacteria, now translated across the tree of life. These mechanisms reveal a picture of highly regulated mutagenesis, up-regulated temporally by stress responses and activated when cells/organisms are maladapted to their environments—when stressed—potentially accelerating adaptation. Mutation is also nonrandom in genomic space, with multiple simultaneous mutations falling in local clusters, which may allow concerted evolution—the multiple changes needed to adapt protein functions and protein machines encoded by linked genes. Molecular mechanisms of stress-inducible mutation change ideas about evolution and suggest different ways to model and address cancer development, infectious disease, and evolution generally.

    This is simple. The paper, again, uses the terms “random” and “not random” incorrectly. It is obvious in the first phrase. The authors complain that mutations do not occur “roughly uniformly” in the genome, and that would make them not random. But, as explained, the uniform distribution is only one of the many probability distributions that describe natural phenomena well. For example, many natural systems are well described, as is well known, by a normal distribution, which has nothing to do with a uniform distribution. That does not mean that they are not random systems.

    The criticism of gradualism I have already discussed: I obviously agree, but the only reason for non-gradual variation is design. Indeed, neutral mutations are instead gradual, because they are not designed.

    And what’s the problem with “environmental inputs”? We know very well that environmental inputs change the rate, and often the type, of mutation. Radiation, for example, does that. We have known that for decades. That is no reason to say that mutations are not random. They are random, and environmental inputs do modify the probability distribution. A lot. Are these authors really discovering, in 2019, that a lot of leukemias were caused by the bomb in Hiroshima?

    6. Wells.

    He is discussing the interesting concept of somatic genomic variation.

    Here is the abstract of the paper to which he refers:

    Genetic variation between individuals has been extensively investigated, but differences between tissues within individuals are far less understood. It is commonly assumed that all healthy cells that arise from the same zygote possess the same genomic content, with a few known exceptions in the immune system and germ line. However, a growing body of evidence shows that genomic variation exists between differentiated tissues. We investigated the scope of somatic genomic variation between tissues within humans. Analysis of copy number variation by high-resolution array-comparative genomic hybridization in diverse tissues from six unrelated subjects reveals a significant number of intra-individual genomic changes between tissues. Many (79%) of these events affect genes. Our results have important consequences for understanding normal genetic and phenotypic variation within individuals, and they have significant implications for both the etiology of genetic diseases such as cancer and for immortalized cell lines that might be used in research and therapeutics.

    As you can see (if you can read that abstract impartially), the paper does not mention in any way anything that supports Wells’ final (and rather gratuitous) statement:

    “From what I now know as an embryologist I would say that the truth is the opposite: Tissues and cells, as they differentiate, modify their DNA to suit their needs. It’s the organism controlling the DNA, not the DNA controlling the organism.”

    Indeed, the paper says the opposite: that somatic genomic variations are important to better understand “the etiology of genetic diseases such as cancer”. Why? The reason is simple: because they are random mutations, often deleterious.

    Ah, and by the way: of course somatic mutations cannot be inherited, and therefore have no role in building the functional information in organisms.

    So, as you can see (but will not see), you are creating a lot of confusion with your quotations.

    The only interesting topic is transposons. But it’s late, so I will discuss that topic later, in the next post.

  85. 85
    gpuccio says:

    Bornagain77 at #82:

    Gp in 77 tried to imply he was completely theologically neutral. That is impossible.

    Emphasis mine.

    That’s unfair and not true.

    I quote myself at #77:

    “One of my strong choices is that my philosophy of science (and my theology, too) tell me that my scientific reasonings must not (as far as it is humanly possible) be influenced by my theology. In any way. So, I really strive to achieve that (and it’s not easy).”

    No comments.

    You see, the difference between your position and my position is that you are very happy to derive your scientific ideas from your theology. I try as much as possible not to do that.

    As said, both are strong choices. And I respect choices. But that’s probably one of the reasons why we cannot really communicate constructively about scientific things.

  86. 86
    gpuccio says:

    ET at #83:

    “Yes, the algorithm would be more complex than the structure. ”

    OK.

    “So what? Where is the algorithm? With the Intelligent Designer. ”

    ??? What do you mean? I really don’t understand.

    “A trace of it is in the structure itself.”

    The structure allows us to infer design. I don’t see what in the structure points to some specific algorithm. Can you help?

    “The algorithm attempts to answer the question of how ATP synthase was intelligently designed. ”

    OK, I am not saying that the designer did not use any algorithm. Maybe the designer is there in his lab, and has a lot of computers working for him in the process. But:

    a) He probably designed the computers too

    b) His conscious cognition is absolutely necessary to reach the results. Computers do the computations, but it’s consciousness that defines purposes, and finds strategies.

    However, design happens when the functional information is inputted into the material object we observe. So, if the designer inputs information after having computed it in his lab, that is not really relevant.

    I thought that your mention of an algorithm meant something different. I thought you meant that the designer designs an algorithm and puts it in some existing organism (or place), and that such an algorithm then computes ATP synthase or whatever else. So, if that is your idea, again I ask: what facts support the existence of such an independent physical algorithm in physical reality?

    The answer is simple enough: none at all.

    ” Of course an omnipotent intelligent designer wouldn’t require that and could just design one from its mind.”

    I have no idea if the biological designer is omnipotent, or if he designs things from his mind alone, or if he uses computers or watches or anything else in the process. I only know that he designs biological things, and must be conscious, intelligent and purposeful.

  87. 87
    bornagain77 says:

    Gp, at 77 and 85, disingenuously claims that he is the one being ‘scientific’ while trying, as best he can, to keep God out of his science. Hogwash! His model specifically makes claims as to what he believes the designer, i.e. God, is and is not doing, i.e. Johnny Cash’s ‘One Piece at a Time’.

    Perhaps Gp falsely believes that if he compromises his theology enough he is somehow being more scientific than I am? Again Hogwash. As I have pointed out many times, assuming Methodological Naturalism as a starting assumption, (as Gp seems bent on doing in his model as far as he can do it without invoking God), results in the catastrophic epistemological failure of science itself. (See bottom of post for refutation of methodological naturalism)

    Bottom line, Gp, instead of being more scientific than I, as he is falsely trying to imply (much like Darwinists constantly try to falsely imply), has instead produced a compromised, bizarre, and convoluted model. A model that IMHO does not stand up to even minimal scrutiny. And a model that no self-respecting Theist or even Darwinist would ever accept as being true. A model that, as far as I can tell, apparently only Gp himself accepts as being undeniably true.

    As I have pointed out several times now, assuming Naturalism instead of Theism as the worldview on which all of science is based leads to the catastrophic epistemological failure of science itself.

    Basically, because of reductive materialism (and/or methodological naturalism), the atheistic materialist is forced to claim that he is merely a ‘neuronal illusion’ (Coyne, Dennett, etc..), who has the illusion of free will (Harris), who has unreliable beliefs about reality (Plantinga), who has illusory perceptions of reality (Hoffman), who, since he has no real time empirical evidence substantiating his grandiose claims, must make up illusory “just so stories” with the illusory, and impotent, ‘designer substitute’ of natural selection (Behe, Gould, Sternberg), so as to ‘explain away’ the appearance (i.e. illusion) of design (Crick, Dawkins), and who must make up illusory meanings and purposes for his life since the reality of the nihilism inherent in his atheistic worldview is too much for him to bear (Weikart), and who must also hold morality to be subjective and illusory since he has rejected God (Craig, Kreeft).
    Bottom line, nothing is real in the atheist’s worldview, least of all, morality, meaning and purposes for life.,,,
    – Darwin’s Theory vs Falsification – video – 39:45 minute mark
    https://youtu.be/8rzw0JkuKuQ?t=2387

    Thus, although the Darwinist may firmly believe he is on the terra firma of science (in his appeal, even demand, for methodological naturalism), the fact of the matter is that, when examining the details of his materialistic/naturalistic worldview, it is found that Darwinists/Atheists are adrift in an ocean of fantasy and imagination with no discernible anchor for reality to grab on to.

    It would be hard to fathom a worldview more antagonistic to modern science than Atheistic materialism and/or methodological naturalism have turned out to be.

    2 Corinthians 10:5
    Casting down imaginations, and every high thing that exalteth itself against the knowledge of God, and bringing into captivity every thought to the obedience of Christ;

  88. 88
    bornagain77 says:

    Gp has, in a couple of instances now, tried to imply that I (and others) do not understand randomness. In regard to Shapiro, Gp states,

    Moreover, the reference to “statistically significant non-random patterns” could simply point to some necessity effect that modifies the probability distribution, like in the case of the loaded dice. As explained, that does not make the system “non-random”. And that has nothing to do with guidance, design or creation.

    Might I suggest that it is Gp himself who does not understand randomness. As far as I can tell, Gp presupposes complete randomness within his model, (completely free from ‘loaded dice’), and that is one of the main reasons he states that he can think of no “other possible explanation” to explain the sequence data. Yet, if ‘loaded dice’ are producing “statistically significant non-random patterns” within genomes then that, of course, falsifies Gp’s assumption of complete randomness in his model. Like I stated before, ‘directed’ mutations, (and/or ‘loaded dice’ to use Gp’s term), are ‘another possible explanation’ that I can think of.

  89. 89
    gpuccio says:

    Bornagain77:

    OK, I think I will leave it at that with you. Even if you don’t.

  90. 90
    gpuccio says:

    To all:

    Of course, I will make the clarifications about transposons as soon as possible.

  91. 91
    john_a_designer says:

    Once again (along with others) thank you for a very interesting and evocative OP. On the other hand, as a mild criticism, I am just an uneducated layman when it comes to bio-chemistry, so I am continuously trying to get up to speed on the topic. I think I get the gist of what you are saying, but I imagine someone stumbling onto this site for the first time is going to find this topic way over their head. Maybe something of a basic summary which briefly explains transcription, the role of RNA polymerase and the difference between prokaryotic and eukaryotic transcription would be helpful (or a link to such a summary if you have done that somewhere else).

    As for myself, I think I get the gist of what you are saying, but I am a little confused by the differences between prokaryotic and eukaryotic transcription. (Most of my study and research has been centered on prokaryotes. If you can’t explain natural selection + random variation evolution in prokaryotes, it’s game over for Neo-Darwinism. There has to be another explanation.) For example, one question I have is: are there transcription factors for prokaryotes? According to Google, no.

    Eukaryotes have three types of RNA polymerases, I, II, and III, while prokaryotes only have one type. Eukaryotes form an initiation complex with the various transcription factors that dissociates after initiation is completed. There is no such structure seen in prokaryotes.

    Is that true? What about the Sigma factor, which initiates transcription in prokaryotes, and the Rho factor, which terminates it? Isn’t that essentially what transcription factors, which come in two forms, activators and repressors, do in eukaryotic transcription? Are Sigma factors and Rho factors the same in all prokaryotes, or is there a species difference?

    As far as termination in eukaryotes goes, one educational video I ran across recently (it’s dated to 2013) said that it is still unclear how termination occurs in eukaryotes. Is that true? In prokaryotes there are two ways transcription is terminated: there is Rho-dependent termination, where the Rho factor is utilized, and Rho-independent termination, where it isn’t. Do we know any more six years later?

    Hopefully answering those kinds of questions can help me and others. (Of course, they’re going to have to do some homework on their own.)

  92. 92
    bill cole says:

    Hi gpuccio
    Thanks for the interesting post. From my study, cell control comes from the availability of transcription-acting molecules in the nucleus. They can be either proteins or small molecules that are not transcribed but obtained from other sources, like enzyme chains. Testosterone and estrogen are examples of non-transcribed small molecules. How this is all coordinated so that a living organism can reliably operate is fascinating, and I am thrilled to see you start this discussion. Great to have you back 🙂

  93. 93
    gpuccio says:

    John_a_designer:

    Thank you for your very thoughtful comment.

    Yes, in this OP and in others I have dealt mainly with eukaryotes. But of course you are right, prokaryotes are equally fascinating, maybe only a little bit simpler, and, as you say:

    “If you can’t explain the natural selection + random variation evolution in prokaryotes it’s game over for Neo-Darwinism. There has to be another explanation”.

    And game over it is, because the functional complexity in prokaryotes is already overwhelming, and can never be explained by RV + NS.

    It is no accident that the example I use probably most frequently is ATP synthase. And that is a bacterial protein.

    You describe the transcription system in prokaryotes very correctly. It’s certainly much simpler than in eukaryotes, but still its complexity is mind-boggling.

    I think the system of TFs is essentially eukaryotic, but of course a strict regulation is present in prokaryotes too. You mention sigma factors and rho, of course, and there is the system of activators and repressors. But there are big differences, starting from the very different organization of the bacterial chromosome (histone-independent supercoiling, and so on).

    Sigma factors are in some way the equivalent of generic TFs. According to Wikipedia, sigma factor “is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB”.

    Maybe. I have blasted sigma 70 from E. coli with human TFIIB and found no detectable homology (E value 1.4). So, there seems to be little conservation here.

    I have blasted the same E. coli sigma 70 against all bacteria, excluding proteobacteria (the phylum of E. coli). I would say that there is good conservation in different types of bacteria, for example up to 1251 bits in firmicutes, 786 bits in actinobacteria, 533 bits in cyanobacteria, and so on. So, this molecule seems to be rather conserved in bacteria.
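    For anyone who wants to repeat this kind of comparison, here is a minimal sketch of how it could be done with Biopython’s web BLAST interface (the accession and the organism filter below are just illustrative placeholders, not necessarily the exact records and settings used for the figures above):

```python
# A minimal, illustrative protein BLAST via NCBI's web service using Biopython.
# "P00579" (E. coli sigma 70, rpoD) is only a placeholder accession; substitute
# the query and the organism restriction you actually want to compare.
from Bio.Blast import NCBIWWW, NCBIXML

result_handle = NCBIWWW.qblast(
    program="blastp",
    database="nr",
    sequence="P00579",                      # query as an identifier, or paste a FASTA string
    entrez_query="Homo sapiens[Organism]",  # restrict hits, e.g. to human proteins
)

record = NCBIXML.read(result_handle)
for alignment in record.alignments[:5]:
    for hsp in alignment.hsps:
        # bit score and E-value are the figures quoted in the text above
        print(alignment.title[:60], "bits:", hsp.bits, "E:", hsp.expect)
```

    Runs like this against different taxa (for example firmicutes, actinobacteria, cyanobacteria) give the kind of bit scores reported above.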

    I think that eukaryogenesis is one of the most astounding designed jumps in natural history. I do accept that mitochondria and plastids are derived from bacteria, and that some important eukaryotic features are mainly derived from archaea, but even those partial derivations require tons of designed adjustments. And that is only the tip of the iceberg. Most eukaryotic features (the nuclear membrane and nuclear pore, chromatin organization, the system of TFs, the spliceosome, the ubiquitin system, and so on) are essentially eukaryotic, even if of course some vague precursor can be detected, in many cases, in prokaryotes. And each of these systems is a marvel of original design.

  94. 94
    gpuccio says:

    Bill Cole:

    Great to hear from you! 🙂

    And let’s not forget lncRNAs (see comments #52, #67 and #68 here).

  95. 95
    Silver Asiatic says:

    GP

    design happens when the functional information is inputted into the material object we observe

    I don’t quite follow that. We create software that evaluates data and then produces functional information (visualizations). So, the design of that software happened when the visualization occurred? I think we normally say that the design occurred first in the mind of the software designer – it’s an idea (Mozart wrote symphonies entirely in his mind before putting on paper). Then, the designer creates algorithms that produce functional information. But the software is not the designer. It is the output of a designed process.

    what facts support the existence of such an independent physical algorithm in physical reality?

    Again, with Mozart. The orchestra plays the symphony. Does this mean that the symphony could only be created as an independent physical text in physical reality? The facts say no – he had it in his mind.

    I believe you are saying that a Designer enters into the world at various specific points of time, and intervenes in the life of organisms and creates mutations or functions at those moments. What facts support the existence of those interventions in time, versus the idea that the organism was designed with the capability and plan for various changes from the beginning of the universe? What evidence do we have of a designer directly intervening into biology?

    I only know that [the designer] designs biological things, and must be conscious, intelligent and purposeful.

    Well, I think we could try to infer more than that – or not? Is the designer a biological organism? Or did the designer exist before life on earth existed? Is the designer a physical entity? What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth? How complex is the designer? More complex than the algorithms you mentioned? Does the designer move from one cell present in Southern California, USA and then travel to intervene in another cell in Tanzania? Or does the designer do such interventions simultaneously? In either answer, are there some facts that show what the designer does in these cases? If simultaneously, how big is the designer and what mechanisms are used to intervene simultaneously into billions of cells at various points of time? Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned. Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?

  96. 96
    gpuccio says:

    To all:

    OK, now let’s talk briefly of transposons.

    It’s really strange that transposons have been mentioned here as a confutation of my ideas. But life is strange, as we all know.

    The simple fact is: I have been arguing here for years that transposons are probably the most important tool of intelligent design in biology. I remember that an interlocutor, some time ago, even accused me of inventing the “God of transposons”.

    The simple fact is: there are many facts that do suggest that transposon activity is responsible for generating new functional genes, new functional proteins. And I think that the best interpretation is that transposon activity can be intelligently directed, in some cases.

    IOWs, if biological design is, at least in part, implemented by guided mutations, those guided mutations are probably the result of guided transposon activity. We have no certainty of that, but it is a very reasonable scenario, according to known facts.

    OK, but let’s put that into perspective, especially in relation to the confused and confounding statements that have been made or reported here about “random mutations”.

    I will refer to the following interesting article:

    The Impact of Transposable Elements in Genome Evolution and Genetic Instability and Their Implications in Various Diseases

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4196381/

    So, the first question that we need to answer is:

    a) How frequent are transposon-dependent mutations in relation to all other mutations?

    There is an answer to that in the paper:

    Recent studies have revealed the implications of TEs in genomic instability and human genome evolution [44]. Mutations associated with TE insertions are well studied, and approximately 0.3% of all mutations are caused by retrotransposon insertions [27].

    0.3% of all mutations. So, let’s admit for a moment that transposon-derived mutations are not random, as has been suggested in this thread. That would still leave 99.7% of all mutations that could be random. Indeed, they are random.

    But let’s go on. I have already stated that I believe that transposons are an important tool of design. Therefore, at least some transposon activity must be intelligently guided.

    But does that mean that all transposon activity is guided? Of course, absolutely not.

    I do believe that most transposon activity is random, and is not guided. Let’s read again from the paper:

    Such insertions can be deleterious by disrupting the regulatory sequences of a gene. When a TE inserts within an exon, it may change the ORF, such that it codes for an aberrant peptide, or it may even cause missense or nonsense mutations. On the other hand, if it is inserted into an intronic region, it may cause an alternative splicing event by introducing novel splice sites, disrupting the canonical splice site, or introducing a polyadenylation signal [8, 9, 10, 11, 42, 43]. In some instances, TE insertion into intronic regions can cause mRNA destabilization, thereby reducing gene expression [45]. Similarly, some studies have suggested that TE insertion into the 5′ or 3′ region of a gene may alter its expression [46, 47, 48]. Thus, such a change in gene expression may, in turn, change the equilibrium of regulatory networks and result in disease conditions (reviewed in Konkel and Batzer [43]).

    The currently active non-LTR transposons, L1, SVA, and Alu, are reported to be the causative factors of many genetic disorders, such as hemophilia, Apert syndrome, familial hypercholesterolemia, and colon and breast cancer (Table 1) [8, 10, 11, 27]. Among the reported TE-mediated genetic disorders, X-linked diseases are more abundant than autosomal diseases [11, 27, 45], most of which are caused by L1 insertions. However, the phenomenon behind L1 and X-linked genetic disorders has not yet been revealed. The breast cancer 2 (BRCA2) gene, associated with breast and ovarian cancers, has been reported to be disrupted by multiple non-LTR TE insertions [9, 18, 49]. There are some reports that the same location of a gene may undergo multiple insertions (e.g., Alu and L1 insertions in the adenomatous polyposis coli gene) (Table 1).

    And so on.

    Have we any reason to believe that that kind of transposon activity is guided? Not at all. It just behaves like all other random mutations, which are often the cause of genetic diseases.

    Moreover, we know that deleterious mutations are only a fraction of all mutations. Most mutations, indeed, are neutral or quasi-neutral. Therefore, it is absolutely reasonable that most transposon-induced mutations are neutral too.

    And the design?

    The important point, which can be connected to Abel’s important ideas, is that functional design happens when an intelligent agent acts to give a functional (and absolutely unlikely) form to a number of “configurable switches”.

    Now, the key idea here is that the switches must be configurable. IOWs, if they are not set by the designer, their individual configuration is in some measure indifferent, and the global configuration can therefore be described as random.

    The important point here is that functional sequences are more similar to random sequences than to ordered sequences. Ordered sequences cannot convey the functional information for a complex function, because they are constrained by their order. Functional sequences, instead, are pseudo-random (not completely, of course: some order can be detected, as we know well). That relative freedom of variation is a very good foundation for using them in a designed way.

    So, the idea is: transposon activity is probably random in most cases. In some cases, it is guided. Probably through some quantum interface.

    That’s also the reason why a quantum interface is usually considered (by me too) as the best interface between mind and matter: because quantum phenomena are, at one level, probabilistic, random, and that’s exactly the reason why they can be used to implement free intelligent choices.

    To conclude, I will repeat, for the nth time, that a system is a random system when we cannot describe it deterministically, but we can provide a relatively efficient and useful description of it using a probability distribution.

    There is no such thing as “complete randomness”. If we use a probability distribution to describe a system, we are treating that system as a random system.

    Randomness is not an intrinsic property of events (except maybe at the quantum level). A random system, like the tossing of a coin, is completely deterministic in essence. But we are not able to describe it deterministically.

    In the same way, random systems that do not follow a uniform distribution are random just the same. A loaded die is as random as a fair die. But, if the loading is so extreme that only one event can take place, the system becomes a necessity system, which can very well be described deterministically.
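    A minimal Python sketch of that idea (the probabilities are made up, purely for illustration): a fair die, a loaded die, and an extreme loading where one face has probability 1. The first two are random systems with different distributions; only the last becomes a necessity system.

```python
import random

FAIR    = [1/6] * 6                              # uniform distribution
LOADED  = [0.05, 0.05, 0.05, 0.05, 0.05, 0.75]   # biased, but still random
EXTREME = [0, 0, 0, 0, 0, 1]                     # degenerate: a necessity system

def roll(weights, n=12):
    # random.choices draws faces 1-6 according to the given probability distribution
    return random.choices(range(1, 7), weights=weights, k=n)

print("fair:   ", roll(FAIR))     # unpredictable, all faces equally likely
print("loaded: ", roll(LOADED))   # unpredictable, but 6 dominates
print("extreme:", roll(EXTREME))  # fully predictable: always 6
```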

    In the same way, there is nothing strange in the fact that some factors, acting as necessity causes, can modify a probability distribution. As a random system is in reality deterministic in essence, if one of the variables acting in it is strong enough to be detected, that variable will modify the probability distribution in a detectable way. There is nothing strange in that. The system is still random (we use a probability distribution to describe it), but we can detect one specific variable that modifies the probability distribution (what has been called here, not so precisely IMO, a bias). That’s the case, for example, of radiation increasing the rate and modifying the type of random mutations, as in the great increase of leukemia cases at Hiroshima after the bomb. That has always been well known, even if some people seem to discover it only now.

    In all those cases, we are still dealing with random systems: systems where each single event cannot be anticipated, but a probability distribution can rather efficiently describe the system. Mutations are a random system, except maybe for the rare cases of guided mutations in the course of biological design.

    Finally, let me say that, of all the things of which I have been accused, “assuming Methodological Naturalism as a starting assumption” is probably the funniest. Next time, they will probably accuse me of being a convinced compatibilist! 🙂

    Life is strange.

  97. 97
    gpuccio says:

    Silver Asiatic:

    “I don’t quite follow that. We create software that evaluates data and then produces functional information (visualizations). So, the design of that software happened when the visualization occurred? I think we normally say that the design occurred first in the mind of the software designer – it’s an idea (Mozart wrote symphonies entirely in his mind before putting on paper). Then, the designer creates algorithms that produce functional information. But the software is not the designer. It is the output of a designed process.”

    I perfectly agree. The designed object here is the software. The design happens when the designer writes the software, from his mind.

    I see your problem. Let’s be clear. The software never designs anything, because it is not conscious. Design, by definition, is the output of form from consciousness to a material object.

    But you seem to believe that the software creates new functional information. Well, it does in a measure, but it is not new complex functional information. This is a point that is often misunderstood.

    Let’s say that the software produces visualizations exactly as it was programmed to do. In that case, it is easy. All the functional information that we get was designed when the software was designed.

    But maybe the software makes computations whose results were not previously known to the designer. That does not change anything: the computation process has been designed anyway. And computations are algorithmic; they do not increase the Kolmogorov complexity of the system. And that complexity is the functional complexity.
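    One standard way to state that limitation (a textbook property of Kolmogorov complexity, offered here only as a supporting note) is that, for any fixed program $f$ and any input $x$,

$$K\bigl(f(x)\bigr) \;\le\; K(x) + K(f) + O(1),$$

    i.e. the complexity of the output can exceed that of the input only by, roughly, the length of the program itself: a deterministic computation cannot manufacture more information than was already put into the program and its input.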

    Finally, maybe the software uses new information from the environment. In that case, there will be some increase in functional information, but it will be very low, if the environment does not contain complex functional information. IOWs, the environment cannot teach a system how to build ATP synthase, except when the sequence of ATP synthase (or, for that matter, of a Shakespeare sonnet in the case of language) is provided externally to the system.

    Now I must go. More in next post.

  98. 98
    Silver Asiatic says:

    GP
    Good answer, thank you.

    But maybe the software makes computations whose results were not previously known to the designer. That does not change anything: the computation process has been designed anyway. And computations are algorithmic; they do not increase the Kolmogorov complexity of the system. And that complexity is the functional complexity.

    Yes, but I think this answers your question about a Designer who created algorithms. In a software output, it can be programmed to create information that was not known to the designer. That information actually causes other things to happen. I would think that it is the definition of complex, specified, functional information. We observe the software creating that information, and rightly infer that the information network (process) was designed. But do we, or can we know that the designer was unaware of what the software produced?
    I don’t think so. We do not have access to the designer’s mind. We only see the software and what it produces. We know it is the product of design. But we do not know if the functional information was designed for any specific instance, or if it is the output of a previous design farther back, invisible to us.
    This, I think, is the case in biology.
    I believe you are saying that the design occurs at various discrete moments where a designer intervenes, and not that the design occurred at some distant time in the past and is merely being worked out by “software”. What we observe shows functional information, but this information may either be created directly by the designer at the moment, or it may be an output of a designed system.
    I do not see how we could distinguish between the two options.
    With software, we can observe the inputs and calculations, and we can determine that the software created something “new”. It is all the output of design, but we can trace what the software is doing and therefore infer where the “design implementation” took place.
    It’s that term that is the issue here, really.
    It is “design implementation”. Where and when was the design (in the mind of the designer) put into biology?
    I do not believe that is a question that ID proposes an answer for, and I also do not believe it is a scientific question.

  99. 99
    bornagain77 says:

    Gp states,

    “That would still leave 99.7% of all mutations that could be random. Indeed, they are random.”

    LOL, just can’t accept the obvious can he? Bigger men than you have gone to their deaths defending their false theories Gp. 🙂

    “It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant non-random patterns”
    James Shapiro – Evolution: A View From The 21st Century – (Page 82)

    To presuppose that the intricate molecular machinery in the cell is just willy-nilly moving stuff around on the genome is absurd on its face. And yet that is ultimately what Gp is trying to argue for.

    Of note: It is not on me to prove a system is completely deterministic in order to falsify Gp’s model. I only have to prove that it is not completely random in order to falsify his model. And that threshold has been met.

    Perhaps Gp would also now like to still defend the notion that most (+90%) of the genome is junk?

  100. 100
    gpuccio says:

    Silver Asiatic:

    It’s not really a question of knowing what is in the mind of the designer. The problem is: what is in material objects?

    Let’s go back to ATP synthase. Please, read my comment #74.

    So, I think we can agree that any algorithm that can compute the sequences for ATP synthase would be, by far, more complex than ATP synthase itself.

    So, let’s say, just for a moment, that the designer does not design ATP synthase directly. Let’s say that the designer designs the algorithm. After all, he is clever enough.

    So, he designs the algorithm. But, of course, he must implement it in a material object. A material object that can do the computations and then build the computed outcome (IOWs, ATP synthase).

    OK, so my simple question is: where is, or was, that object? The computing object?

    I am aware of nothing like that in the known universe.

    Maybe it existed 4 billion years ago, and now it is lost?

    Well, everything is possible, but what facts support such an idea?

    None at all. Have we any traces of that algorithm, any indications of how it worked? Have we any idea of the object where it was implemented? It seems reasonable that it was some biological object, probably an organism. So, what are we hypothesizing, that 4 billion years ago the designer designed and implemented some extremely complex organism capable of computing ATP synthase, only to compute ATP synthase for bacteria, and that such a complex organism then disappeared without leaving any trace of itself?

    What’s the sense of such a scenario? What scientific value has it? The answer is simple: none.

    Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.

    And there is more: such a complex algorithm, made to compute ATP synthase, certainly could not compute another, completely different, protein system, like for example the spliceosome. Because that’s another function, another plan. A completely different computation would be needed, a different purpose, a different context.

    So, what do we believe? That the designer designed, later, another complex organism with another complex algorithm to compute and realize the spliceosome? And the immune system? And our brain?

    Or that, in the beginning, there was one organism so complex that it could compute the sequences of all future necessary proteins, protein systems, lncRNAs, and so on? A monster of which no trace has remained?

    OK, I hope that’s enough.

  101. 101
    gpuccio says:

    Silver Asiatic:

    You also say:

    “What evidence do we have of a designer directly intervening into biology?”

    That’s rather simple. The many examples, well known, of sudden appearance in natural history of new biological objects full of tons of new complex functional information, information that did not exist at all before.

    For example, I have analyzed quantitatively the transition to vertebrates, which happened more than 400 million years ago, in a time window of probably 20 million years, and which involved the appearance, for the first time in natural history, of about 1.7 million bits of new functional information. Information that, after that time, has been conserved up to now.
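    For readers unfamiliar with the unit, a common way to express functional information in bits (in the spirit of Hazen and Szostak) is

$$I(E_x) \;=\; -\log_2 \frac{M(E_x)}{N},$$

    where $N$ is the number of possible sequences and $M(E_x)$ is the number of them that implement the function at level $E_x$. The aggregate figure of 1.7 million bits quoted above is, of course, an indirect estimate summed over many proteins, not a direct enumeration of sequences.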

    This is the evidence of a design intervention, specifically localized in time.

    Of course, there is instead no evidence at all that the organisms that existed before included any complex algorithm capable of computing those 1.7 million bits of functional information.

    You say:

    “Well, I think we could try to infer more than that – or not? Is the designer a biological organism? Or did the designer exist before life on earth existed? Is the designer a physical entity? What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth? How complex is the designer? More complex than the algorithms you mentioned? Does the designer move from one cell present in Southern California, USA and then travel to intervene in another cell in Tanzania? Or does the designer do such interventions simultaneously? In either answer, are there some facts that show what the designer does in these cases? If simultaneously, how big is the designer and what mechanisms are used to intervene simultaneously into billions of cells at various points of time? Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned. Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?”

    These are good questions. To many of them, we cannot at present give answers. But not all.

    “Is the designer a biological organism? Is the designer a physical entity?”

    I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.

    “Did the designer exist before life on earth existed?”

    This is easy. A designer was certainly responsible for OOL on our planet. OOL is of course one of the events that scream design with the highest strength. So the answer is yes: the designer, or at least the designer who designed life on our planet, certainly existed before.

    “What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth?”

    Well, we humans, as conscious beings, are entities capable of intervening inside the functions of most cells in our brain or nervous system, and many in our bodies. That’s how our consciousness is interfaced to our body.

    Why shouldn’t some other conscious entity be able to do something similar with biological organisms? And again, there is no need that the interface reach all cells of all organisms. The strict requirement is for those organisms where the design takes place.

    “How complex is the designer? ”

    We don’t know. How complex is our consciousness, if separated from our body? We don’t know how complex non physical entities need to be. Maybe the designer is very simple. Or not.

    This answer is valid for many other questions: we don’t understand, at present, how consciousness can work outside of a physical body. Maybe we will understand more in the future.

    “Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned.”

    I don’t know when or how the designer decides things. But I know when he does things. For example, he introduced the functional information for vertebrates, all those 1.7 million bits, in some pre-existing organism (probably the first chordates), approximately in those 20 million years when vertebrates appear on earth.

    “Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?”

    Most likely he uses tools. Of course the designer’s consciousness needs to interface with matter, otherwise no design could be possible. That is exactly what we do when our consciousness interfaces with our brain. So, no big problem here.

    The interface is probably at quantum level, as it is probably in our brains. There are many events in cells that could be more easily tweaked at quantum level in a consciousness related way. Penrose believes that a strict relationship exists in our brain between consciousness and microtubules in neurons. Maybe.

    I think, as I have said many times, that the most likely tool of design that we can identify at present are transposons. The insertions of transposons, usually random (see my previous posts), could be easily tweaked at quantum level by some conscious intervention. And there is some good evidence that transposons are involved in the generation of new functional genes, even in primates.

    That’s the best I can do to answer your questions. Just a note: my answers here are highly tentative, but they are strictly empirical. They do not derive from any theological agenda. These are the ideas that, in my opinion, are more reasonable to explain the facts as we know them. Right or wrong that they may be, this is the spirit in which I express them.

  102. 102
    PeterA says:

    GP,
    The first graphic illustration shows the mechanism of NF-kB action, which you associated with the canonical activation pathway “summarized” in figure 1.
    Figure 1, without breaking it into more detail, could qualify as a complex mechanism.
    Is it possible that such an explicit graphic illustration, which includes so many details, is a simplification of the real thing? Are all the control procedures associated with this mechanism shown in the figure? Are any important details missing, or just irrelevant details?
    Well, you answered those questions when you elaborated on those details in the OP.
    In this particular example, we first see the “signals” shown in figure 1 under the OP section “The stimuli”.
    Thus, what in figure 1 appears as a few colored objects and arrows is described in more detail, showing the tremendous complexity of each step of the graphic, especially the receptors in the cell membrane.
    Can the same be said about every step within the figure?

  103. 103
    Upright BiPed says:

    Luckily, some friends are ready to be fiercely antagonistic!

    Yes, I see that.

    Illuminating thread otherwise.

  104. 104
    pw says:

    GP,

    Fascinating topic and interesting discussion, though sometimes unnecessarily personal. Scientific discussions should remain calm, focused on details, unbiased. In the end we want to understand more. Undoubtedly, biology today is not easy to understand well in all its details, and it doesn’t look like it will get easier anytime soon.

    Someone asked:

    “What evidence do we have of a designer directly intervening into biology?”

    Could the answer include the following issues?

    OOL, prokaryotes, eukaryotes, and, according to Dr. Behe, at least the family level (at one point he would have pointed to the class level), where the Darwinian paradigm lacks explanatory power for the physiological differences between cats and dogs allegedly proceeding from a common ancestor.

    You have pointed to the intentional insertion of transposable elements into the genome as another piece of empirical evidence. I think you’ve also mentioned the splicing mechanisms. Perhaps any of the complex functional mechanisms that appeared at some point could be attributed to conscious intentional design?

    Does CD stand for common design or common descent with designed modifications?
    Does “common” relate to the observed similarities?

    For example, in the case of cats and dogs, “common” relates to their observed anatomical and/or physiological similarities, which were mostly designed too?

  105. 105
    gpuccio says:

    Upright BiPed:

    “Illuminating thread otherwise.”

    Thank you! 🙂

  106. 106
    gpuccio says:

    PeterA:

    “Is it possible that such an explicit graphic illustration, which includes so many details, is a simplification of the real thing?”

    Of course it is. A gross simplification. Many important details are missing.

    For example:

    Only two kinds of generic signals and receptors are shown. As we have seen, there are a lot of different specific receptors.

    The pathways that connect each specific type of receptor to IKK are not shown (they are shown as simple arrows). But they are very complex and specific. I have given some limited information in the OP and in the discussion.

    Only the canonical pathway is shown.

    Only the most common type of dimer is shown.

    Coactivators and interactions with other pathways are not shown or barely mentioned.

    Of course, lncRNAs are not shown.

    And so on.

    Of course, the figure is there just to give a first general idea of the system.

  107. 107
    gpuccio says:

    Pw:

    “Could the answer include the following issues?”

    Yes, of course.

    “You have pointed to the intentional insertion of transposable elements into the genetic code asanother empirical evidence. I think you’ve also mentioned the splicing mechanisms. Perhaps any of the complex functional mechanisms that appeared at some points could be attributed to conscious intentional design?”

    All of them, if they are functionally complex. That’s the theory. That’s ID. The procedure, if correctly applied, should have no false positives.

    “Does CD stand for common design or common descent with designed modifications?”

    CD stands just for “common descent”. I suppose that each person can add his personal connotations. Possibly making them explicit in the discussion.

    I have explained that for me common descent just means a physical continuity between organisms, but that all new complex functional information is certainly designed. Without exceptions.

    So, I suppose that “common descent with designed modifications” is a good way to put it.

    Just a note about universality. Facts are very strong in supporting common descent (in the sense I have specified). It remains open, IMO, whether it is really universal: IOWs, whether all forms of life have some continuity with a single original event of OOL, or whether more than one event of OOL took place. I think that at present universality seems more likely, but I am not really sure. I think the question remains open. For example, some differences between bacteria and archaea are rather amazing.

    “Does “common” relate to the observed similarities ?”

    Common, in my version of CD, refers to the physical derivation (of existing information) from one common ancestor. So, let’s say that at some time there was in the ocean a common ancestor of vertebrates: maybe some form of chordate. And at some time, vertebrates are already split into cartilaginous fish and bony fish. If both cartilaginous fish and bony fish physically reuse the same old information from a common ancestor, that is common descent, even if, of course, all the new information is added by specific design.

    I really don’t understand how that could be explained without any form of physical descent. Do they really believe that cartilaginous fish were designed from scratch, from inanimate matter, and that bony fish too were designed from scratch, from inanimate matter, but separately? And that the supposed ancestor, the first chordates, were also designed from scratch? And the first eukaryotes? And so on?

  108. 108
    gpuccio says:

    PeterA:

    Maybe you can look at this more detailed figure for the different stimuli, receptors and receptor connections to the activation pathway:

    https://rockland-inc.com/nfkb-signaling-pathway.aspx

  109. 109
    ET says:

    gpuccio:

    Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.

    The Designer is never seen.

    The point of the algorithm was to address the “how” of the Intelligent Designer designing living organisms and their complex parts and systems. The way ATP synthase works, by squeezing the added “P” onto ADP and not by some chemical reaction, is a clue, for me anyway. It just seems like something an algorithm would tease out, and that comes from knowledge of many GAs that have created human inventions.

    That would still leave 99.7% of all mutations that could be random. Indeed, they are random.

    I would love to see how you made that determination, especially in the light of the following:

    He [the Designer] indeed seems to have “carefully crafted” information in His species giving them the ability to respond to environmental stimuli to alter their own genome to adapt to new environments. He then evidently let them wander where they will with the ability to adapt.- Dr. Lee Spetner “the Evolution Revolution” p 108

  110. 110
    Silver Asiatic says:

    Gpuccio
    Thank you for your detailed replies on some complex questions. You explained your thoughts very clearly and well.

    Of course, there is instead no evidence at all that the organisms that existed before included any complex algorithm capable of computing those 1.7 million bits of functional information.

    I think what science can show is that 1.7 million bits of FI appear. What is not shown is how they appeared there. Regarding a complex algorithm, the designing-mind itself had to have immense capabilities. Algorithms are programmatic functions which start in a mind. Organisms could have been programmed to trigger innovations over time.

    While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.

    Here is where it starts to get difficult. On the same basis that we say that there is no evidence of physical designers, we have to say there is no evidence of immaterial designers. Science cannot evaluate immaterial entities. So, our speculations here take us outside of science. I don’t think we can say that we have empirical evidence of immaterial entities or beings. The absence of evidence (in this case of physical designers) does not mean that we have direct evidence of immaterial designers.

    This is easy. A designer was certainly responsible for OOL on our planet. OOL is of course one of the events that scream design with the highest strength. So the answer is yes: the designer, or at least the designer who designed life on our planet, certainly existed before.

    That is good. We do not know if there is one or multiple designers, or if the designer of life is the same as the one who developed and changed life. But some designing intelligence existed before life on earth did. That designer would not be a terrestrial, biological entity.

    Well, we humans, as conscious beings, are entities capable of intervening inside the functions of most cells in our brain or nervous system, and many in our bodies. That’s how our consciousness is interfaced to our body.
    Why shouldn’t some other conscious entity be able to do something similar with biological organisms?

    I don’t think we have any direct, scientific experience with an immaterial, pre-biological conscious entity. Additionally, we do not see that human consciousness can create life, for example, or that it could mutate bacteria to create plants, birds, fish, animals and other humans. We don’t see that human consciousness can intervene and create other consciousnesses. We might say that the entire population of human beings has affected the earth – would this suggest that there is a huge population of designers affecting mutations?

    And again, there is no need that the interface reach all cells of all organisms. The strict requirement is for those organisms where the design takes place.

    I’d think that the activity of mutations within organisms is such that continual monitoring would be required in order to achieve designed effects, but perhaps not. Even if it is only the cells where there were innovations, that seems to be quite a lot of intervention.

    We don’t know. How complex is our consciousness, if separated from our body? We don’t know how complex non physical entities need to be. Maybe the designer is very simple. Or not.

    I think this cuts against your concern about complex algorithms. The designer may be very complex. Algorithms created by the designer may be complex also. Additionally, I do not think that science has established that human consciousness is a non-physical entity, or that human consciousness can exist separated from a body.

    I don’t know when or how the designer decides things. But I know when he does things. For example, he introduced the functional information for vertebrates, all those 1.7 million bits, in some pre-existing organism (probably the first chordates), approximately in those 20 million years when vertebrates appear on earth.

    The options I see for this introduction of information are:
    1. Direct creation of vertebrates
    2. Guided or tweaked mutations
    3. Pre-programmed innovations that were triggered by various criteria
    4. Mutation rates are not constant but can be accelerated at times
    5. We don’t know

  111. 111
    Silver Asiatic says:

    GP

    So, I think we can agree that any algorithm that can compute the sequences for ATP synthase would be, by far, more complex than ATP synthase itself.

    I think we have to say that we do not know. As previously, you stated that we do not know how complex the designer is. An algorithm is a method of calculation which would be resident in the mind of the designer. The level of complexity of that algorithm, for a designer capable of creating life on earth, does not seem to be a problem.

    So, he designs the algorithm. But, of course, he must implement it in a material object. A material object that can do the computations and then build the computed outcome (IOWs, ATP synthase).

    The algorithm could be computed by an immaterial entity. The designer, I think you’re saying, created immaterial consciousnesses (human) so could create immaterial algorithms that programmed life from the beginning. So, there would be one single Design act, and then everything after that is an output.

    OK, so my simple question is: where is, or was, that object? The computing object?
    I am aware of nothing like that in the known universe.

    If the computing agent is immaterial then you could have no scientific evidence of it.

    So, what are we hypothesizing, that 4 billion years ago the designer designed and implemented some extremely complex organism capable of computing ATP synthase, only to compute ATP synthase for bacteria, and that such a complex organism then disappeared without leaving any trace of itself?

    I think we are saying that science cannot know this. Additionally, you refer to “the designer” but there could be millions of designers. Again, science cannot make a statement on that.

    What’s the sense of such a scenario? What scientific value has it? The answer is simple: none.

    You propose an immaterial designer: is it subject to conditions of space and time? In any case, that proposal can have no scientific value. Science cannot directly investigate immaterial entities. Science can look at the effects of entities, but cannot evaluate them.

    Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.

    I don’t think that conclusion is obvious. Why did the design have to occur when needed, and not before? And again, the algorithm could have been administered by an immaterial agent, which we could never observe scientifically. There’s no way for science to know this.

  112. 112
    gpuccio says:

    ET at #109:

    The Designer is never seen.

    Correct. But, as I have said, the designer needs not be physical. I believe that consciousness can exist without being necessarily connected to a physical body. I have explained at #101 (to SilverAsiatic). I quote myself:

    “Is the designer a biological organism? Is the designer a physical entity?”

    I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.

    An algorithm, instead, needs to be physically instantiated. An algorithm is not a conscious agent. It works like a machine. It needs a physical “body” to exist and work.

    The way ATP synthase works, by squeezing the added “P” onto ADP and not by some chemical reaction, is a clue- for me, anyway. It just seems like something an algorithm would tease out- and that comes from knowledge of many GA’s that have created human inventions.

    ATP synthase squeezes the P using mechanical force from a proton gradient. It works like a water mill. Do you really believe that any generic algorithm would design such a thing, if the designer does not code the idea and a lot of details into the algorithm itself?

    Algorithms compute, and do nothing else. They are sophisticated abacuses, nothing more. The amazing things that they do are simply due to the specific configurations designed for them by conscious intelligent beings.

    Maybe the designer needed some algorithm to do the computations, if his computing ability is limited, like ours. Maybe not. But, if he used some algorithm, it seems not to have happened on this planet, or he carefully destroyed any trace of it. Don’t you think that these are just ad hoc reasonings?

    I would love to see how you made that determination, especially in the light of the following:

    I am not aware that what Spetner says is true by default. Again, I don’t know his thought in detail, and I don’t want to judge.

    But there are a lot of facts that tell us that most mutations are random, neutral or deleterious. I have mentioned the many human diseases caused by mutations that follow no specific pattern, both normal mutations and transposon-associated. See comments #64 and #96.

    The always precious Behe has clearly shown that differentiation at low level (let’s say inside families) is just a matter of adaptation through loss of information, never a generation of new functional information. To be clear, the loss of information is random, due to deleterious mutations, and the adaptation is favoured by an occasional advantage gained in specific environments, and therefore by NS. This is the level where the neo-darwinian model works. But without generating any new functional information. Just by losing part of it. This is Behe’s model (see polar bears). And it is mine, too.

    For the rest, actual design is always needed.

  113. 113
    ET says:

    gpuccio:

    Do you really believe that any generic algorithm would design such a thing, if the designer does not code the idea and a lot of details in the algorithm itself?

    I don’t see any issues with it. There is a Scientific American article from over a decade ago titled “Evolving Inventions”. One invention had a transistor in it that did not have its output connected to anything. The point being that the only details required are what is needed to get the job done, i.e. connecting a “P” to ADP.

    But there are a lot of facts that tell us that most mutations are random, neutral or deleterious. I have mentioned the many human diseases caused by mutations that follow no specific pattern, both normal mutations and transposon-associated.

    And for every genetic disease there are probably thousands of changes that do not cause one.

  114. 114
    gpuccio says:

    Silver Asiatic at #110:

    I think what science can show is that 1.7 million bits of FI appear. What is not shown is how they appeared there.

    Well, when you have facts, science has to propose hypotheses to explain them. Neo-darwinism is one hypothesis, and it does not explain what it should explain. Design is another hypothesis. You can’t just say: it happened, and not try to explain it. That’s not science.

    Regarding a complex algorithm, the designing-mind itself had to have immense capabilities. Algorithms are programmatic functions which start in a mind. Organisms could have been programmed to trigger innovations over time.

    Everything is possible. But my points are:

    a) There is no trace of those algorithms. They are just figments of the imagination.

    b) There are severe limits to what an algorithm can do. An algorithm cannot find solutions to problems for which it has not been programmed to find solutions. An algorithm just computes. Only consciousness has cognitive representations, understanding and purpose.

    Regarding innovations, I am afraid they are limited to what Behe describes, plus maybe some limited cases of simple computational adaptation. Innovations exist, but they are always simple.

    Here is where it starts to get difficult. On the same basis that we say that there is no evidence of physical designers, we have to say there is no evidence of immaterial designers. Science cannot evaluate immaterial entities. So, our speculations here take us outside of science. I don’t think we can say that we have empirical evidence of immaterial entities or beings. The absence of evidence (in this case of physical designers) does not mean that we have direct evidence of immaterial designers.

    I strongly disagree. Here you are indeed assuming methodological naturalism, something that I consider truly bad philosophy of science (even if I have been recently accused of doing exactly that).

    Science can investigate anything that produces observable facts. In no way is it limited to “matter”. Indeed, many of the most important concepts in science have nothing to do with matter. And science does debate ideas and realities about which we still have no clear understanding: see dark matter and especially dark energy. Why? Because those things, whatever they may be, seem to have detectable effects, to generate facts.

    Moreover, consciousness is in itself a fact. It is subjectively perceived by each of us (you too, I suppose). Therefore it can and must be investigated by science, even if, at present, science has no clear theory about what consciousness is.

    Design is an effect of consciousness. There is no evidence that consciousness needs to be physical. Indeed, there is good evidence to the contrary, but I will not discuss it now.

    However, design, functional information and consciousness are certainly facts that need to be investigated by science. Even if the best explanation, maybe the only one, is the intervention of some non physical conscious agent.

    That is good. We do not know if there is one or multiple designers, or if the designer of life is the same as the one who developed and changed life. But some designing intelligence existed before life on earth did.

    Correct.

    That designer would not be a terrestrial, biological entity.

    Not physical, therefore not biological. Terrestrial? I don’t know. A non-physical entity could well, in principle, be specially connected to our planet. Or not, of course. If we don’t know, we don’t know.

    I don’t think we have any direct, scientific experience with an immaterial, pre-biological conscious entity. Additionally, we do not see that human consciousness can create life, for example, or that it could mutate bacteria to create plants, birds, fish, animals and other humans. We don’t see that human consciousness can intervene and create other consciousnesses. We might say that the entire population of human beings has affected the earth – would this suggest that there is a huge population of designers affecting mutations?

    You seem to be confusing three different concepts: functional information, life and consciousness.

    ID is about the origin of functional information, in particular the functional information we observe in living organisms. It can say nothing about what life and consciousness are, least of all about how to generate those things.

    Functional information is a configuration of material objects to implement some function in the world we observe. Nothing else. Complex functional information originates only from conscious agents (we know that empirically), but it tells us nothing about what consciousness is or how it is generated. And life itself cannot easily be defined, and it is probably more than the information it needs to exist.

    As humans, we can design functional information. We can also design biological functional information, even rather complex. OK, we are not really very good. We cannot design anything like ATP synthase. But, in time, we can improve.

    Designers can design complex functional information. More or less complex, good or bad. But they can do it. But human designers, at present, cannot generate life. Indeed, we don’t even know what life is. That is even more true of consciousness.

    And again, I don’t think we can say how many designers have contributed to biological design. Period.

    Even if it is only cells where there were innovations that seems to be quite a lot of intervention.

    It is a lot of intervention. And so?

    I think this cuts against your concern about complex algorithms. The designer may be very complex. Algorithms created by the designer may be complex also.

    He could also be very simple.

    I do not think that science has established that human consciousness is a non-physical entity, or that human consciousness can exist separated from a body.

    Science has established practically nothing about the nature of consciousness. But there is time. Certainly, it has not established that consciousness derives from the physical body.

    The options I see for this introduction of information are:
    1. Direct creation of vertebrates
    2. Guided or tweaked mutations
    3. Pre-programmed innovations that were triggered by various criteria
    4. Mutation rates are not constant but can be accelerated at times
    5. We don’t know

    5 is true enough, but after that 2 is the only reasonable hypothesis. Intelligent selection can have a role too, of course, like in human protein engineering. But I think that transposons act as a form of guided mutation.

  115. 115
    bornagain77 says:

    Gp states, ” I think that at present universality seems more likely, but I am not really sure. I think the question remains open.”

    Thank you very much for at least admitting that degree of humility on your part.

  116. 116
    gpuccio says:

    Silver Asiatic at #111

    I think we have to say that we do not know. As previously, you stated that we do not know how complex the designer is. An algorithm is a method of calculation which would be resident in the mind of the designer. The level of complexity of that algorithm, for a designer capable of creating life on earth, does not seem to be a problem.

    I disagree. Algorithms, as I have already explained, are configurations of material objects. We were discussing algorithms on our planet, not imaginary algorithms in the mind of a conscious agent of whom we know almost nothing.

    My statement was about a real algorithm, really implemented in material objects. To compute ATP synthase, that algorithm would certainly be much more complex than ATP synthase itself.

    But all these reasonings are silly. We have no example of algorithms in nature, even in the biological world, which do compute new complex functional objects. Must we still waste our time with fairy tales?

    The algorithm could be computed by an immaterial entity. The designer, I think you’re saying, created immaterial consciousnesses (human) so could create immaterial algorithms that programmed life from the beginning. So, there would be one single Design act, and then everything after that is an output.

    OK, I hope it’s clear that this is the theory I am criticizing. Certainly not mine.

    And I have never said, or discussed, that “The designer created immaterial consciousnesses (human)”. As said, ID can say nothing about the nature of consciousness. ID just says that functional information derives from consciousness. And the designer need not have “created” anything. Design is not creation.

    The designer designs biological information. Not human consciousness, or any other consciousness. Not “immaterial algorithms”. Design is the configuration of material objects, starting from conscious representations of the designer. As said so many times.

    If the computing agent is immaterial then you could have no scientific evidence of it.

    Not true, as said. Immaterial realities that cause observable facts can be inferred from those facts.

    Instead, a physical algorithm existing on our planet should leave some trace of its physical existence. This was my simple point.

    You propose an immaterial designer — is it subject to the conditions of space and time? In any case, that proposal can have no scientific value. Science cannot directly investigate immaterial entities. Science can look at the effects of entities, but cannot evaluate the entities themselves.

    Not having a physical body does not necessarily mean that an entity is not subject to space and time. The interventions of the designer on matter are certainly subject to those things.

    About science, I have already answered. Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.

  117. 117
    gpuccio says:

    ET at #113:

    I don’t see any issues with it.

    Well, I do. Let’s say that we have different ideas about that.

    And for every genetic disease there are probably thousands of changes that do not cause one.

    Of course. And they are called neutral or quasi neutral random mutations. When they are present in more than 1% of the whole population, they are called polymorphisms.
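    To make the 1% convention just mentioned concrete, here is a minimal sketch in Python; the variant names and allele frequencies are invented purely for illustration, not real data:

```python
# Minimal sketch of the convention described above: a variant present in more
# than 1% of the population is called a polymorphism. All values are invented.
variants = {
    "variant_A": 0.12,    # hypothetical allele frequency
    "variant_B": 0.004,
    "variant_C": 0.03,
}

for name, frequency in variants.items():
    label = "polymorphism" if frequency > 0.01 else "rare variant"
    print(f"{name}: allele frequency {frequency:.3f} -> {label}")
```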

  118. 118
    gpuccio says:

    PeterA and all:

    An interesting example of complexity is the CBM signalosome. As said briefly in the OP, it is a protein complex made of three proteins:

    CARD11 (Q9BXL7): 1154 AAs in the human form. Also known as CARMA1.
    BCL10 (O95999): 233 AAs in the human form.
    MALT1 (Q9UDY8): 824 AAs in the human form.

    These three proteins have the central role in transferring the signal from the specific immune receptors in B cells (BCR) and T cells (TCR) to the NF-kB activation system (see Fig. 3 in the OP).

    IOWs, they signal the recognition of an antigen by the specific receptors on B or T cells, and start the adaptive immune response. A very big task.

    The interesting part is that those proteins essentially first appear in vertebrates, because the adaptive immune system starts in jawed fishes.

    So, I have made the usual analysis for the information jump in vertebrates of these three proteins. Here are the results, which are rather impressive, especially for CARD11:

    CARD11: absolute jump in bits: 1280; in bits per aminoacid (bpa): 1.109185

    BCL10: absolute jump in bits: 165.1; in bits per aminoacid (bpa): 0.7085837

    MALT1: absolute jump in bits: 554; in bits per aminoacid (bpa): 0.6723301

    I am adding to the OP a graphic that shows the evolutionary history of those three proteins, in terms of human conserved information.
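    As a side note, the “bits per aminoacid” figures above are just the absolute jump divided by the length of the human protein. A minimal sketch of that arithmetic (this is only the final division, not the BLAST pipeline that produces the absolute jumps):

```python
# Minimal sketch: bpa = absolute information jump (bits) / length of the human
# protein (AAs). The jump values and lengths are those quoted in this comment.

proteins = {
    # name: (absolute jump in bits, length of human protein in AAs)
    "CARD11": (1280.0, 1154),
    "BCL10": (165.1, 233),
    "MALT1": (554.0, 824),
}

for name, (jump_bits, length_aa) in proteins.items():
    bpa = jump_bits / length_aa
    print(f"{name}: {bpa:.4f} bits per aminoacid")

# Expected output, matching the figures above:
# CARD11: 1.1092 bits per aminoacid
# BCL10: 0.7086 bits per aminoacid
# MALT1: 0.6723 bits per aminoacid
```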

  119. 119
    EugeneS says:

    GP (101)

    “…we should have some evidence of that. But there is none.”
    This is where you lost me. Isn’t what you so painstakingly analyse here and in other OPs something that constitutes the said evidence? Maybe I am wrong and I have missed out part of the conversation. But it is exactly what we observe that strongly suggests design. It is precisely that. All the rest is immaterial. Consequently, it must be the evidence that you are saying does not exist. I hope I am just misinterpreting what you said there.

  120. 120
    gpuccio says:

    EugeneS:

    The statement was:

    ““Is the designer a biological organism? Is the designer a physical entity?”

    I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.”

    What I mean is that the continuing presence of one or more physical designers, with some physical body, should have left some trace, reasonably. A physical designer has to be physically present at all design interventions. And physical agents usually leave some trace of themselves. I mean, beyond the design itself.

    Of course the design itself is evidence of a designer. But in the case of a non physical designer, we don’t expect to find further physical evidence, beyond the design itself. In the case of a physical designer, I would expect something, especially considering the many acts of design in natural history.

    This is what I meant.

  121. 121
    PeterA says:

    GP @108:

    “Maybe you can look at this more detailed figure for the different stimuli, receptors and receptor connections to the activation pathway:
    https://rockland-inc.com/nfkb-signaling-pathway.aspx

    Oh, no! Wow!
    OK, you have persuaded me.
    I’m convinced now.

    Thanks!

  122. 122
    EugeneS says:

    GP

    Yes, of course. I agree. I have missed out ‘physical’.

    Maybe, it is a distraction from the thread but anyway. I recall one conversation with a biologist. I had posted something against Darwin’s explanation of why we can’t see another sort of life emerging. Correct me if I am wrong but my understanding is that, basically, Darwin claimed that organic compounds that would have easily become life are immediately consumed by the already existing life forms. I was saying that this is a rubbishy argument. But according to my interlocutor, it actually wasn’t. My friend said it was extremely difficult to get rid of life in an experimental setting for abiogenesis. In relation to what we are discussing here, this claim effectively means that the existing life allegedly devours any signs of emerging life as soon as they appear. My answer at the time was, why don’t they put their test tubes in an autoclave? He said that this was not so easy as I thought, as getting rid of existing life also destroys the organic chemicals, and defeats the purpose.

    Today, I still strongly believe it is a bad argument, but for a different reason: the impossibility of self-organization of a translation apparatus that relies on a symbolic memory and semiotic closure. There is no empirical warrant to back the claim that such self-organization is possible.

    What do you think about Darwin’s argument and, in particular, about the difficulty of creating the right conditions for a clean abiogenesis experiment?

  123. 123
    gpuccio says:

    EugeneS:

    Of course they would never succeed, in an autoclave or elsewhere.

    I suppose that Darwin’s argument was that, in the absence of existing life, the first organic molecules generated (by magic, probably) would have been more stable than what we can expect today. Indeed, today simple organic molecules have very short life in any environment because of existing forms of life.

    The argument is however irrelevant. The simple truth is that simple organic molecules (Darwin was probably thinking of proteins, today they should be RNA to be fashionable) are completely useless to build life of any form.

    Let’s be serious: even if we take all components, membrane, genome, and so on, for example by disrupting bacteria, and put them together in a test tube, we can never build a living cell.

    This is the classic humpty dumpty argument, made here some time ago, if I remember well, by Sal Cordova. It remains a formidable argument.

    All reasonings about OOL from inanimate matter are, really, nothing more than fairy tales. They don’t even reach the status of bad scientific theories.

  124. 124
    Silver Asiatic says:

    GPuccio

    Again, thank you for clarifications and even repeating things you stated before. It has been very helpful.
    I am not fully understanding several of your points which I will illustrate below:

    GP: Science can investigate anything that produces observable facts. In no way is it limited to “matter”.

    Do you think that science can investigate God?

    And the designer needs not have “created” anything. Design is not creation.

    I believe that design is the ultimate creative act. Design is an action of creation with and for a purpose. It begins as a creative act in a conscious mind – a thought which did not exist before is created for a purpose. This thought is then implemented through various means. But how can there be design without creation? How can a purposeful act occur without it having been created by a mind?

    Not having a physical body does not necessarily mean that an entity is not subject to space and time.

    How are immaterial objects constrained by space and time? What measurements can be performed on immaterial entities?

    Indeed, ID is not evaluating anything about the designer…

    As I quoted you above ” Science can investigate anything that produces observable facts”, why is not ID evaluating the designer?

    The designer designs biological information. Not human consciousness, or any other consciousness,

    What scientific evidence do you have to show that the designer did not design human consciousness? Where do you think human consciousness comes from?

    Not “immaterial algorithms”. Design is the configuration of material objects, starting from conscious representations of the designer.

    Again, an algorithm is a process or set of rules used for calculation or programmatic purposes. A designer can create an immaterial algorithm in an agent that acts on biological entities. There could be no direct evidence of such a thing, but the effects of it can be seen in the development of biological organisms.

  125. 125
    Silver Asiatic says:

    GP

    design is the configuration of material objects

    I mentioned Mozart’s symphonies which were designed in his conscious mind. They weren’t designed on paper or by musical instruments.

    Also, if an immaterial entity created other immaterial entities, you would say “that is not an act of purposeful design”?

  126. 126
    PeterA says:

    GP @106:
    Regarding Fig. 1 in the OP:
    “the figure is there just to give a first general idea of the system”
    I agree. And it does it very well, especially within the context of the fascinating topic of your OP.
    Even without the missing information that you listed:

    Only two kinds of generic signals and receptors are shown. As we have seen, there are a lot of different specific receptors.
    The pathways that connect each specific type of receptor to IKK are not shown (they are shown as simple arrows). But they are very complex and specific. I have given some limited information in the OP and in the discussion.
    Only the canonical pathway is shown.
    Only the most common type of dimer is shown.
    Coactivators and interactions with other pathways are not shown or barely mentioned.
    Of course, lncRNAs are not shown.

    the figure has many details that give a convincing idea of functional complexity.
    Thus, after carefully studying the figure to understand the flow of functional information, and seeing how much you reveal is still missing, one can only wonder how anyone would believe that such a system could arise through unguided physico-chemical events.

  127. 127
    EugeneS says:

    GP

    Thanks very much. Could you point to the ‘humpty dumpty’ OP you mentioned?

  128. 128
    gpuccio says:

    Silver Asiatic:

    Do you think that science can investigate God?

    As said many times, I don’t discuss God in a scientific context.

    The correct answer is always the same: science can, and must, investigate everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations.

    I believe that design is the ultimate creative act. Design is an action of creation with and for a purpose. It begins as a creative act in a conscious mind – a thought which did not exist before is created for a purpose. This thought is then implemented through various means. But how can there be design without creation? How can a purposeful act occur without it having been created by a mind?

    You are equivocating on the meaning of “creation”. Of course all acts of design are “creative” in a very general sense. But of course, as everyone can understand, that was not the sense I was using. I was clearly speaking of “creation” in the specific philosophical/religious meaning: generating some reality from nothing. Design is not that. In material objects, design gives specific configurations to existing matter.

    I always speak of design according to that definition, that I have given explicitly here:

    https://uncommondescent.com/intelligent-design/defining-design/

    This definition is the only one that is necessary in ID, because ID infers design from the material object.

    You speak of a “creative act in a conscious mind”. Maybe, maybe not. We have no idea of how thoughts arise in a conscious mind. Moreover, as we are not trying to build a theory of the mind, or of consciousness, we are not interested in that.

    The process of design begins when some form, already existing in the consciousness of the designer as a representation, is outputted to a material object. That is the process of design. That is what we want to infer from the material object. It is not creation, only the input of a functional configuration to an object.

    How are immaterial objects constrained by space and time? What measurements can be performed on immaterial entities?

    Energy is not material, yet it exists in space and time. Dark energy is probably not material: indeed, we don’t know what it is. Can you say that it cannot exist in relation to space and time? Strange, because it apparently accelerates the expansion of the universe, and that seems to be in relation, very strongly, with space and time.

    Whether we can or cannot measure something has nothing to do with the properties of that something. Things don’t wait for our measurements to be what they are. Our ability to measure things evolves with our understanding of what they are.

    You quote me saying: “Indeed, ID is not evaluating anything about the designer…” and then you comment:

    As I quoted you above ” Science can investigate anything that produces observable facts”, why is not ID evaluating the designer?

    This is quote mining of the worst kind. The original statement was:

    ” Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.”

    Shame on you.

    What scientific evidence do you have to show that the designer did not design human consciousness? Where do you think human consciousness comes from?

    Again, misinterpretation, maybe intentional. Of course I am speaking of what we can infer according to ID theory. The designer that we infer in ID is the designer of biological information. We infer nothing about the generation of consciousness (I don’t use the term design, because as I have explained I speak of design only for material objects). As said, nobody here is trying to build a theory of consciousness. I have already stated clearly that IMO science has no real understanding of what consciousness is, least of all of how it originates. We can treat consciousness as a fact, because it can be directly observed, but we don’t understand what it is.

    Could the designer of biological objects also be the originator of human consciousness? Maybe. Maybe not. I have nothing from which to infer an answer. Certainly not in ID theory, which is what we are discussing here. And certainly I have no duty to show that the designer did not originate human consciousness, or that he did, because I have made absolutely no inferences about the origin of human consciousness. I have only said that we infer a designer for biological objects, not for human consciousness.

    Again, an algorithm is a process or set of rules used for calculation or programmatic purposes. A designer can create an immaterial algorithm in an agent that acts on biological entities. There could be no direct evidence of such a thing, but the effects of it can be seen in the development of biological organisms.

    Again, everything is possible. I am not interested in what is possible, but in what is supported by facts.

    You use the word “algorithm” to indicate mental contents. I have nothing against that, but it is not the way I use it, and it is of no interest for ID theory.

    Again, ID theory is about inferring a design origin for some material objects. To do that, we are not interested in what happens in the consciousness of the designer; those are issues for a theory of the mind. We only need to know that the form we observe in the object originated from some conscious, intelligent and purposeful agent who inputted that form into the object starting from some conscious representation. If the configuration comes directly from a conscious being, design is proved.

    All this discussion about algorithms arises because some people here believe that the designer does not design biological objects directly, but rather designs some other object, probably biological, which then, after some time, designs the new biological objects by algorithmic computation programmed originally by the designer.

    IOWs, this model assumes that the designer designs, let’s call it so, a “biological computer” which then designs (computes) new biological beings.

    I have said many times that I don’t believe in this strange theory, and I have given my reasons to confute it.

    However, in this theory the algorithm is not a conscious agent who designs: it is a biological machine, IOWs an object. That’s why in this discussion I use algorithm to indicate an object that can compute. Again, the algorithm is designed, because it is a configuration given to a biological machine by the designer, a configuration that can make computations.

    If you want to know if a mental algorithm in a mind is designed, I cannot answer, because I am not discussing a theory of the mind here. Certainly, it is not designed according to my definition, because it is not a material object.

    ID theory is simple, when people don’t try to pretend that it is complicated. We observe some object. We observe the configuration of the object. We ask ourselves if the object is designed, IOWs whether the configuration we observe originated as a conscious representation in a conscious agent and was then purposefully inputted into the object. We define an objective property, functional information, linked to some function that can be implemented using the object and that can be measured. We measure it. If the complexity of the function that can be implemented by the object is great enough, we infer a design origin for the object.

    That’s all.
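    As a compact illustration of that procedure, here is a toy sketch in Python. The functional fraction used below is an invented number purely for illustration; real estimates of functional information come from conservation data, not from a single assumed ratio:

```python
# Toy sketch of the inference step described above: functional information is
# expressed as -log2 of the ratio between target space (functional
# configurations) and search space, and design is inferred above a threshold.
import math

def functional_information(functional_configs: float, total_configs: float) -> float:
    """FI in bits = -log2(target space / search space)."""
    return -math.log2(functional_configs / total_configs)

# Invented example: suppose 1 in 2**200 configurations implements the function.
fi_bits = functional_information(1.0, 2.0 ** 200)
THRESHOLD_BITS = 500  # the classic threshold discussed in this thread

print(f"FI = {fi_bits:.0f} bits; infer design: {fi_bits > THRESHOLD_BITS}")
# FI = 200 bits; infer design: False (below the 500-bit threshold)
```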

  129. 129
    gpuccio says:

    EugeneS:

    I remember the argument mentioned by Sal Cordova, but it seems that the original argument was made by Jonathan Wells (or maybe someone else before him).

    Here is an OP by V. J. Torley (the old VJT 🙂 ), defending the argument. It gives a transcript of the argument by Wells.

    https://uncommondescent.com/intelligent-design/putting-humpty-dumpty-back-together-again-why-is-this-a-bad-argument-for-design/

    IMO, the argument is extremely strong. OOL theories imagine that, in some way, some of the molecules necessary for life originated, and that some life was produced.

    The simple fact is: we cannot produce life in any way, even using all the available molecules and structures that are associated to life on our whole planet.

    The old fact is still a fact: life comes only from life.

    Even when Venter engineers his modified genomes, he must put them in a living cell to make them part of a living being.

    When scientists clone organisms, they must use living cells.

    You cannot make a living cell from inanimate matter, however biologically structured it is.

    And yet these people really believe that natural events did generate living cells, from completely unstructured inanimate matter!

    It is simply folly. I will tell you this: if it were not for the simple ideological necessity that “it must have happened without design, because ours is the only game in town”, no serious scientist would ever consider for a moment any of the current theories for OOL. As I have said, they are not even bad scientific theories. They are mere imagination.

  130. 130
    gpuccio says:

    Silver Asiatic:

    I mentioned Mozart’s symphonies which were designed in his conscious mind. They weren’t designed on paper or by musical instruments.

    No. According to the definitions I have given, and that I always use when discussing ID, Mozart’s symphonies were designed when he put them on paper. Before that, they were conscious representations, and not designed objects. As said, we are not discussing how conscious representations take form in consciousness. In ID we are interested only in the design of objects.

    Also, if an immaterial entity created other immaterial entities, you would say “that is not an act of purposeful design”?

    Again, that would not be design in the sense I have given. Indeed, that problem has nothing to do with ID theory. Immaterial entities do not have a configuration that can be observed, and therefore no functional information can be measured for them. ID theory is not appropriate for immaterial entities. It is about designed objects.

  131. 131
    gpuccio says:

    For all interested:

    About polar bears, and in support of Behe’s ideas:

    Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears

    https://www.cell.com/cell/fulltext/S0092-8674(14)00488-7

    Genes Associated with White Fur

    A white phenotype is usually selected against in natural environments, but is common in the Arctic (e.g., beluga whale, arctic hare, and arctic fox), where it likely confers a selective advantage. A key question in the evolution of polar bears is which gene(s) cause the white coat color phenotype. The white fur is one of the most distinctive features of the species and is caused by a lack of pigment in the hair. We find evidence of strong positive selection in two candidate genes associated with pigmentation, LYST and AIM1 (Table 1). LYST encodes the lysosomal trafficking regulator Lyst. Melanosomes, where melanin production occurs, are lysosome-related organelles and have been implicated in the progression of disease associated with Lyst mutation in mice (Trantow et al., 2010). The types and positions of mutations identified in LYST vary widely, but Lyst mutant phenotypes in cattle, mice, rats, and mink are characterized by hypopigmentation, a melanosome defect characterized by light coat color (Kunieda et al., 1999, Runkel et al., 2006, Gutiérrez-Gil et al., 2007). LYST contains seven polar bear-specific missense substitutions, in contrast to only one in brown bear. One of these, a glutamine to histidine change within a conserved WD40-repeat containing domain, is predicted to significantly affect protein function (Figure 5B, Table S7). Three polar bear changes in LYST are located in proximity to the N-terminal structural domain and map close to human mutations associated with Chediak-Higashi syndrome, a hair and eyes depigmentation disease (Figure 5C). We predict that all these protein-coding changes, possibly aided by regulatory mutations or interactions with other genes, dramatically suppress melanin production and transport, causing the lack of pigment in polar bear fur. Variation in expression of the other color-associated gene, AIM1 (absent in melanoma 1), has been associated with tumor suppression in human melanoma (Trent et al., 1990), a malignant tumor of melanocytes that affects melanin pigment production.

    See also comments #75 and #112.

  132. 132
    ET says:

    Again- polar bears do NOT have white fur. That is elementary school level knowledge in Massachusetts. “Lack of pigmentation”? It’s a translucent hollow tube! Luminescence- when sunlight shines on it there is a reaction we call luminescence (another great word for sobriety check points). The skin is black.

    To claim that differential accumulation of genetic accidents, errors and mistakes just happened upon luminescence for polar bears, is extraordinary and without a means to test it. Count the number of specific changes already discussed and compare that to waiting for TWO mutations. You will see there isn’t enough time in the universe for Darwinian processes to pull it off.

  133. 133
    OLV says:

    GP @131:

    About polar bears, and in support of Behe’s ideas:
    Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears

    Here’s another article also mentioning the cute polar bears:

    Genetic diversity of CHC22 clathrin impacts its function in glucose metabolism
    Matteo Fumagalli, Stephane M Camus, Yoan Diekmann, Alice Burke, Marine D Camus, Paul J Norman, Agnel Joseph, Laurent Abi-Rached, Andrea Benazzo, Rita Rasteiro, Iain Mathieson, Maya Topf, Peter Parham, Mark G Thomas, Frances M Brodsky

    eLife 2019;8:e41517 DOI: 10.7554/eLife.41517

    CHC22 clathrin plays a key role in intracellular membrane traffic of the insulin-responsive glucose transporter GLUT4 in humans. We performed population genetic and phylogenetic analyses of the CHC22-encoding CLTCL1 gene, revealing independent gene loss in at least two vertebrate lineages, after arising from gene duplication. All vertebrates retained the paralogous CLTC gene encoding CHC17 clathrin, which mediates endocytosis. For vertebrates retaining CLTCL1, strong evidence for purifying selection supports CHC22 functionality. All human populations maintained two high frequency CLTCL1 allelic variants, encoding either methionine or valine at position 1316. Functional studies indicated that CHC22-V1316, which is more frequent in farming populations than in hunter-gatherers, has different cellular dynamics than M1316-CHC22 and is less effective at controlling GLUT4 membrane traffic, altering its insulin-regulated response. These analyses suggest that ancestral human dietary change influenced selection of allotypes that affect CHC22’s role in metabolism and have potential to differentially influence the human insulin response.

     It is also possible that some forms of polar bear CHC22 are super-active at GLUT4 sequestration, providing a route to maintain high blood glucose, as occurs through other mutations in the cave fish (Riddle et al., 2018).

    Regulators of fundamental membrane traffic pathways have diversified through gene duplication in many species over the timespan of eukaryotic evolution. Retention and loss can, in some cases, be correlated with special requirements resulting from species differentiation

    The genetic diversity that we report here may reflect evolution towards reversing a human tendency to insulin resistance and have relevance to coping with increased carbohydrate in modern diets.

    And here’s another one:

    Environmental contaminants modulate the transcriptional activity of polar bear (Ursus maritimus) and human peroxisome proliferator-activated receptor alpha (PPARA)
    Heli Routti, Mari K. Berg, Roger Lille-Langøy, Lene Øygarden, Mikael Harju, Rune Dietz, Christian Sonne & Anders Goksøyr 

    Scientific Reports   volume 9, Article number: 6918 (2019)

    DOI: 10.1038/s41598-019-43337-w

    Peroxisome proliferator-activated receptor alfa (PPARA/NR1C1) is a ligand activated nuclear receptor that is a key regulator of lipid metabolism in tissues with high fatty acid catabolism such as the liver. Here, we cloned PPARA from polar bear liver tissue and studied in vitro transactivation of polar bear and human PPARA by environmental contaminants using a luciferase reporter assay. Six hinge and ligand-binding domain amino acids have been substituted in polar bear PPARA compared to human PPARA. Perfluorocarboxylic acids (PFCA) and perfluorosulfonic acids induced the transcriptional activity of both human and polar bear PPARA. The most abundant PFCA in polar bear tissue, perfluorononanoate, increased polar bear PPARA-mediated luciferase activity to a level comparable to that of the potent PPARA agonist WY-14643 (~8-fold, 25 µM). Several brominated flame retardants were weak agonists of human and polar bear PPARA. While single exposures to polychlorinated biphenyls did not, or only slightly, increase the transcriptional activity of PPARA, a technical mixture of PCBs (Aroclor 1254) strongly induced the transcriptional activity of human (~8-fold) and polar bear PPARA (~22-fold). Polar bear PPARA was both quantitatively and qualitatively more susceptible than human PPARA to transactivation by less lipophilic compounds.

    it should be kept in mind that polar bear metabolism is highly adapted to cold climate and feeding and fasting cycles, and direct comparison of physiological functions between polar bears and humans is thus challenging.

    Here’s an article about brown bears that mentions their polar bear cousins too:

    Evolutionary history and palaeoecology of brown bear in North-East Siberia re-examined using ancient DNA and stable isotopes from skeletal remains
    Alba Rey-Iglesia, Ana García-Vázquez, Eve C. Treadaway, Johannes van der Plicht, Gennady F. Baryshnikov, Paul Szpak, Hervé Bocherens, Gennady G. Boeskorov & Eline D. Lorenzen 

    Scientific Reports   volume 9, Article number: 4462 (2019)

    DOI: 10.1038/s41598-019-40168-7

    The mtDNA of extant polar bears (Ursus maritimus), clade 2b, is embedded within brown bears and is most closely related to clade 2a, the ABC brown bears18.


  134. 134
    jawa says:

    Is it possible that the polar bears were affected by drinking so much Coca-Cola in TV commercials?
    🙂

  135. 135
    PeterA says:

    GP @129:

    Thanks for referencing the discussion about the Humpty Dumpty argument. Very interesting indeed.

  136. 136
    jawa says:

    If all the king’s horses and all the king’s men couldn’t put Humpty together again, who else can do it?
    🙂

  137. 137
    pw says:

    GP,

    I appreciate your answers at 107.
    Please, let me ask you another question:
    Why is there a drop in the black line in the last graphic in your OP? What does that mean? Loss of function?

  138. 138
    Silver Asiatic says:

    Gpuccio

    I responded to your statement:

    Science can investigate anything that produces observable facts.

    You then said:

    The correct answer is always the same: science can, and must, investigate everything that can be observed in reality.

    Those two statements actually conflict with each other. You ask me to assume your meaning of various terms (as if the meaning is obvious) but in this case, I assume that your first statement is incorrect and you corrected it with the second.

    You are equivocating on the meaning of “creation”. Of course all acts of design are “creative” in a very general sense.

    I was using the general and ordinary meaning of the term “design”. Whatever is designed, even if using previously existing material, is an act of creation. If that which at one moment was inanimate matter, suddenly, by an act of an intelligent agent becomes a living organism – that is a creation. The designer created something that did not exist before. You limited the term creation to only those acts which are ex nihilo but that’s an artificial limit.

    ID science is not limited to the study of biology. ID also looks at the origin of the universe. In that case, ID is making a claim about the origin of time, space and matter. It is not limited to reconfigurations of existing matter.

    You quote me saying: “Indeed, ID is not evaluating anything about the designer…” and then you comment:
    As I quoted you above ” Science can investigate anything that produces observable facts”, why is not ID evaluating the designer?
    This is quote mining of the worst kind. The original statement was:
    ” Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.”
    Shame on you.

    You’re trying to blame me for something here, but what you quoted did not answer the question. You avoided answering it when I asked about God also. You say that science can investigate anything that produces observable facts. You explain that by saying science can only make inferences from observable effects. As I said before, those two ideas contradict. In the first (bolded) you say that science can investigate “the producer” of the facts. You then shame me for asking why ID cannot investigate the designer by saying that ID can investigate the observable effects. As I said above, you corrected your first statement with the second – but you should not have blamed me for something that merely pointed to the conflict here.

    I’m not trying to trick or trap you or win anything. You make a statement that contradicts everything I had known about ID, as well as what contradicts science itself (that science can investigate anything that produces observations). I’m not really worried about your personal views on these things, I was just interested in what seemed to be a confused approach to the issue.

    The designer that we infer in ID is the designer of biological information.

    As above, the designer we refer to in ID is the designer of the universe, not merely of biological information. We infer something about the generation of consciousness. In fact, the immaterial quality of consciousness is evidence in support of ID. We look for the origin of that which we can observe.

    We infer nothing about the generation of consciousness (I don’t use the term design, because as I have explained I speak of design only for materila objects). As said, nobody here is trying to build a theory of consciousness.

    Mainstream evolution already assumes that consciousness is an evolutionary development. I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design. Consciousness separates humans from non-human animals. Evolutionary theory offers an explanation, and ID (not your version of ID but others) offers an opposing one.

  139. 139
    OLV says:

    More on the cute polar bears:

    Polar bear evolution is marked by rapid changes in gene copy number in response to dietary shift
    David C. Rinker, Natalya K. Specian, Shu Zhao, and John G. Gibbons

    PNAS July 2, 2019 116 (27) 13446-13451;  
    DOI: 10.1073/pnas.1901093116

    Copy number variation describes the degree to which contiguous genomic regions differ in their number of copies among individuals. Copy number variable regions can drive ecological adaptation, particularly when they contain genes. Here, we compare differences in gene copy numbers among 17 polar bear and 9 brown bear individuals to evaluate the impact of copy number variation on polar bear evolution. Polar bears and brown bears are ideal species for such an analysis as they are closely related, yet ecologically distinct. Our analysis identified variation in copy number for genes linked to dietary and ecological requirements of the bear species. These results suggest that genic copy number variation has played an important role in polar bear adaptation to the Arctic.

    Polar bear (Ursus maritimus) and brown bear (Ursus arctos) are recently diverged species that inhabit vastly differing habitats. Thus, analysis of the polar bear and brown bear genomes represents a unique opportunity to investigate the evolutionary mechanisms and genetic underpinnings of rapid ecological adaptation in mammals. Copy number (CN) differences in genomic regions between closely related species can underlie adaptive phenotypes and this form of genetic variation has not been explored in the context of polar bear evolution. Here, we analyzed the CN profiles of 17 polar bears, 9 brown bears, and 2 black bears (Ursus americanus). We identified an average of 318 genes per individual that showed evidence of CN variation (CNV). Nearly 200 genes displayed species-specific CN differences between polar bear and brown bear species. Principal component analysis of gene CN provides strong evidence that CNV evolved rapidly in the polar bear lineage and mainly resulted in CN loss. Olfactory receptors composed 47% of CN differentiated genes, with the majority of these genes being at lower CN in the polar bear. Additionally, we found significantly fewer copies of several genes involved in fatty acid metabolism as well as AMY1B, the salivary amylase-encoding gene in the polar bear. These results suggest that natural selection shaped patterns of CNV in response to the transition from an omnivorous to primarily carnivorous diet during polar bear evolution. Our analyses of CNV shed light on the genomic underpinnings of ecological adaptation during polar bear evolution.

  140. 140
    gpuccio says:

    ET:

    “Again- polar bears do NOT have white fur. That is elementary school level knowledge in Massachusetts.”

    OK, we have no polar bears here in Italy, so I cannot share your expertise! 🙂

    So, I read a little about the issue.

    Polar bear’s fur is hollow and lacks any pigment. Indeed, it is rather transparent. The white color is due to optical effects. And the skin is black, as you say.

    Brown bears have fur that is solid and pigmented.

    OK, what does that mean?

    First of all, let’s say that the fact that the fur is not really white is not important in relation to the supposed selection of white in polar animals, because polar bears do appear white, so for the purposes of the supposed positive selection there is no real difference.

    But that is not the real point, I would say.

    The real point is: what is the mechanism of the divergence between brown bears and polar bears? The paper I mentioned puts the split at about 500,000 years ago, which is not much. Some give a few million years. Either way, it is certainly a rather recent event in evolutionary history.

    So, can the divergence be explained by neo-darwinian mechanisms, or is it the result of design? Or of some biological algorithm embedded in the common ancestor?

    The paper I mentioned of course gives a neo-darwinian answer, but it could hardly be otherwise.

    Behe thinks that this can be a case of darwinian “devolution”: differentiation through loss of function which gives some environmental advantage.

    You are definitely in favor of design (or an adaptation algorithm, I am not sure).

    Who is right?

    I think this is a case that shows clearly how ID theory is necessary to give good answers to that kind of problems.

    IOWs, we can answer only if we can evaluate the functional complexity of the divergence.

    The problem is that I cannot find any appropriate data to do that in all the sources that have been mentioned, or that I could find in my brief search. Why? Because nobody seems to know the molecular basis for the difference in fur structure and pigmentation. And it is not completely clear how functionally important the polar bear fur structure is, even if it is generally believed that it is under positive selection, and therefore somehow functional in the appropriate environment.

    If you have some better data, please let me know.

    Of course, fur is not the only difference, but for the moment let’s focus on that.

    So, from an ID point of view, we have different possible scenarios, if we could measure the functional information behind the difference in fur structure and pigmentation.

    To safely infer design according to the classic procedure, we need some function that implies more than 500 bits of functional information.

    However, as we are dealing here with a population (bears) rather limited in number and slow-reproducing, and with a rather short time window, I would be more than happy with 150 bits of functional information to infer design in this case.

    The genomic differences highlighted in the paper I quoted seem to be rather simple. Most of them can be interpreted as single aminoacid mutations with loss of function, perfectly in the range of neo-darwinism and of Behe’s model. But I have no idea if those simple genetic differences are enough to explain what we observe. The lack of pigmentation is probably easier to explain. For the hollow structure, I have no idea.

    The problem is: we have to know the molecular basis, otherwise no computation of functional information can be made. Because, as we know, there are sometimes big morphological differences that have a very simple biological explanation, and vice versa. So again, I must ask: do you have any data about the molecular foundation of the differences?

    In the meantime, I would say that the scenarios are:

    1) The differences can be explained by one or more independent mutations affecting functions already present. Or, at most, 2 or 3 coordinated mutations where each one affects the same function in a relevant way, so that NS could intervene at each step (IOWs a simple tweaking pathway of the loss of function, as we see for example in antibiotic resistance). These scenarios are in the range of what RV + NS could in principle do, maybe even in a population like bears. In this case, I would accept a neo-darwinian mechanism as a reasonable explanation, until different data are discovered.

    2) The differences imply a gain in functional information of 150+ bits. We can safely infer design. Polar bears were designed, sometime around 400,000 years ago, or a little more.

    3) The differences imply something between 12 bits (3 AAs) and 150 bits. In this case, it would be wise to remain cautious. It is not the best scenario in which to infer design, even if it is rather unlikely for a neo-darwinian mechanism in that kind of population. Maybe some simple active adaptation algorithm embedded in brown bears could be considered. But such an algorithm should be in some way detailed and shown to be there, not only imagined.
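    For reference, here is a minimal sketch of the rough conversion between constrained aminoacid positions and bits used in the scenarios above. It assumes, purely as an illustration, about log2(20) ≈ 4.32 bits per fully constrained position; the real estimates in these threads come from conservation measures, not from this simple formula:

```python
# Rough, illustrative conversion: each fully constrained aminoacid position is
# counted as log2(20) ~ 4.32 bits. This is an assumption for illustration only.
import math

BITS_PER_CONSTRAINED_AA = math.log2(20)  # ~4.32 bits

def bits_for_constrained_positions(n_positions: int) -> float:
    """Rough FI estimate for n fully specified aminoacid positions."""
    return n_positions * BITS_PER_CONSTRAINED_AA

for n in (3, 35, 116):
    print(f"{n} constrained AAs ~ {bits_for_constrained_positions(n):.0f} bits")

# 3 constrained AAs  ~ 13 bits  (close to the "12 bits (3 AAs)" figure above)
# 35 constrained AAs ~ 151 bits (roughly the 150-bit threshold discussed here)
# 116 constrained AAs ~ 501 bits (roughly the classic 500-bit threshold)
```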

    IMO, this is how ID theory works. Through facts, and objective measurements of functional information. There is no other way.

    Just a final note about the “waiting for two mutations” paper. That is of course a very interesting article. But it is about two coordinated mutations needed to generate a new function, none of which individually confers any advantage. IOWs, this is more or less the scenario of chloroquine resistance, again linked to Behe.

    I agree that such a scenario, even if possible, is extremely unlikely in a population like bears. But the simple fact is that almost all the variations considered by Behe in his reasonings about devolution are very simple. One mutation is often enough to lose a function. One frameshift mutation can inactivate a whole protein, losing maybe thousands of bits of functional information. And we can have a lot of such individual independent mutations in a population like bears in 400000 years.
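    A back-of-the-envelope sketch, with purely hypothetical population parameters (mutation rate, census size, generation time are all assumed, not measured), of the contrast just described between independent single mutations and two coordinated mutations:

```python
# Illustrative only: why a specific single mutation is expected to recur many
# times in a bear-sized population over ~400,000 years, while two specific
# coordinated mutations (neither useful alone) arising together are not.
# This ignores drift, linkage and many real factors.

MUTATION_RATE = 1e-8        # per site, per individual, per generation (assumed)
POPULATION_SIZE = 100_000   # breeding individuals (assumed)
GENERATION_TIME = 10        # years (assumed)
YEARS = 400_000

generations = YEARS // GENERATION_TIME
individual_generations = POPULATION_SIZE * generations

# Expected de novo occurrences of one specific single-site mutation:
expected_single = individual_generations * MUTATION_RATE

# Expected individuals in which two specific sites both mutate de novo in the
# same generation (no selectable intermediate step):
expected_double = individual_generations * MUTATION_RATE ** 2

print(f"individual-generations: {individual_generations:.1e}")
print(f"expected occurrences of one specific mutation: {expected_single:.0f}")
print(f"expected simultaneous double mutants: {expected_double:.1e}")

# With these assumed numbers: ~4e9 individual-generations, ~40 independent
# occurrences of a given single mutation, and ~4e-7 simultaneous double mutants.
```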

    So, unless we have better data on the functional information involved in the transition to polar bears, I suspend any judgement.

  141. 141
    gpuccio says:

    Jawa at #134:

    “Is it possible that the polar bears were affected by drinking so much Coca-Cola in TV commercials?”

    Absolutely!

    Let’s wait: if I develop translucent fur in the next few years, that will be a strong argument in favour of your hypothesis! 🙂

  142. 142
    ET says:

    1- Bears with actual white fur exist

    2- There are grizzly (brown) bears with actual white fur. They are not polar bears.

    3- I am looking at the number of specific mutations it would take to get a polar bear from a common ancestor with brown bears. That would tell me if blind and mindless processes are up to the task. The paper gpuccio provided gives us a hint and it already goes against blind and mindless processes.

  143. 143
    gpuccio says:

    Pw at #137:

    “Why is there a drop in the black line in the last graphic in your OP? What does that mean? Loss of function?”

    You mean the small drop in amphibians in the blue line (BCL10)?

    Yes, that kind of pattern can be observed often enough, usually in one or two classes.

    The strict meaning is that the best homology hit in that class was lower than in the older class.

    Here the effect is small, but sometimes we can see a whole unexpected drop in one class of organisms, while the general pattern is completely consistent in all the other ones.

    Technically, we are speaking of human conserved information. That’s what is measured here.

    Probably, it is a loss of function in relation to that protein in that class. That is perfectly compatible with Behe’s concept of devolution. That form of the protein sometimes seems to be completely lacking in one class.

    In some cases, it could also be a technical error in the databases, or in the BLAST algorithm. We can expect that; it happens. Some of the classes I have considered are more represented in the databases, some less. However, if one protein lacks any relevant homology in one class in my graphic, that means that none of the organisms in that class showed any relevant homology, because I always consider the best hit among all the proteins of all the organisms of that class included in the NCBI databases.
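    As a minimal sketch of what “best hit per class” means for the plotted human conserved information (the bitscores below are invented, not my actual BLAST output):

```python
# Illustrative only: the value plotted per taxonomic class is the best (highest)
# bitscore among all hits in all organisms of that class for one human protein.

# Hypothetical BLAST hits: (taxonomic class, bitscore in bits)
hits = [
    ("cartilaginous fish", 610.0),
    ("bony fish", 655.0),
    ("amphibians", 640.0),   # best hit lower than in an older class -> a "drop"
    ("reptiles", 720.0),
    ("mammals", 1150.0),
]

best_per_class = {}
for taxon_class, bitscore in hits:
    best_per_class[taxon_class] = max(best_per_class.get(taxon_class, 0.0), bitscore)

for taxon_class, best in best_per_class.items():
    print(f"{taxon_class}: best hit = {best:.0f} bits")

# A class whose best hit is lower than that of an evolutionarily older class
# shows up as the kind of drop in the line discussed above.
```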

  144. 144
    gpuccio says:

    ET at #142:

    Thank you for the further clarifications about bears. You are really an expert! 🙂

    However, it is not really the number of specific mutations that counts. It is the number of coordinated mutations necessary to get a function, none of which has any functional effect alone. There is a big difference. I have tried to explain that at #140.

  145. 145
    ET says:

Thank you, gpuccio. We have a little impasse, as I think what counts is the number of specific mutations, and the functions are all the physiological changes afforded by them.

    In his book “Human Errors”, Nathan Lents tells us that it is highly unlikely that one locus will receive another mutation after already getting mutated. And yet it has the same probability for change as any other site. So it looks like evolutionists are talking about the probability of a specific mutation happening regardless of function.

    As for bears- living in Massachusetts I run into black bears all of the time. They come up on my deck at night. I have photos of them in my yard. And being a dog-person I have a keen interest. That’s all- I think they are really cool animals.

  146. 146
    gpuccio says:

    ET:

    Thanks to you! 🙂

    I suspected you had some special connection with bears! I am more a cat guy, but I do understand love and interest for all animals. 🙂

  147. 147
    jawa says:

    ET,
    The Massachusetts bears may be cool animals, but didn’t get hired for Coca-Cola TV ads like their polar cousins. 🙂

  148. 148
    gpuccio says:

    Silver Asiatic at #138:

    I responded to your statement:

    Science can investigate anything that produces observable facts.

    You then said:

    The correct answer is always the same: science can, and must, investigate, everything that can be observed in reality.

    Those two statements actually conflict with each other. You ask me to assume your meaning of various terms (as if the meaning is obvious) but in this case, I assume that your first statement is incorrect and you corrected it with the second.

Oh, good heavens! That’s what happens when someone (you) argues not to understand and be understood, but just to generate confusion. You are of course equivocating on the word “investigate”.

Maybe the second form is more precise, but the meaning is the same.

    However, let’s clarify, for those who can be confused by your playing with words.

    Science always starts from facts: what can be observed.

    But science tries to explain facts building theories (maps of reality). Those theories need not include only what is observable. They just need to explain observed facts. For example, most scientific theories are based on mathematics, which is not something observable.

Another example. Most theories in empirical science are about possible relationships of cause and effect. But the relationship of cause and effect is not something that can be observed.

My error was probably to use the word “investigate”, which was ambiguous enough to allow you to play with it.

    OK, let’s say that science can build hypotheses only to explain observed facts, but of course those hypotheses, those maps of reality, can include any cognitive content, if it is appropriate to the explanation.

    The word “evaluate” can refer of course both to the gathering of facts and to the building of theories.

My original statement was:

    ” Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.”

    Wasn’t it clear enough for you?

    I was using the general and ordinary meaning of the term “design”. Whatever is designed, even if using previously existing material, is an act of creation. If that which at one moment was inanimate matter, suddenly, by an act of an intelligent agent becomes a living organism – that is a creation. The designer created something that did not exist before. You limited the term creation to only those acts which are ex nihilo but that’s an artificial limit.

    The problem here is not the meaning of the word design, but the meaning of the word creation. The word creation here, in this blog and I would say in the whole debate about ID and more, is used in the sense of “creation ex nihilo”, something that only God can do. Why do you think that our adversaries (maybe you too) call us “creationists” and not “designists”?

It’s strange that someone like you, who has been coming here for some time, is not aware of that, and suddenly interprets “creation” in this debate as a statement about a movie or a book.

However, the problem is not the meaning of words. For that, it’s enough to clarify what we mean. Clearly, and without word plays.

    More in next post.

  149. 149
    jawa says:

    GP @141:

    But even in the case where you would develop translucent fur, I hope you’ll keep writing OPs for us here, right?

    🙂

  150. 150
    john_a_designer says:

    Gpuccio and Silver Asiatic,

    A few of my thoughts about the relationship between science, philosophy, theology and religion.

    Creationism is based on a religious text– the Jewish-Christian scriptures. ID, on the other hand, is at the very least a philosophical inference from the study of nature itself.

    Even materialists recognize the possibility that nature is designed. Richard Dawkins, for example, has argued that “Biology is the study of complicated things that give the appearance of having been designed for a purpose.”

    He then goes on to argue that it is not designed.

So what is Dawkins’ argument? Let’s try out his quote as the main premise in a basic logical argument.

    Premise 1: “Biology is the study of complicated things that give the appearance of having been designed for a purpose.”

    Premise 2: Dawkins (a trained zoologist) believes that “design” is only an appearance.

    Conclusion: Therefore, nothing we study in the biosphere is designed.

The conclusion is based on what? Are Dawkins’ beliefs and opinions self-evidently true? Is the science settled as he suggests? If the answer to those two questions is no (Dawkins’ arguments, BTW, are by no means conclusive), then what is the reason for not looking at living systems that have “the appearance of having been designed for a purpose”? Couldn’t they really have been designed for a purpose? That is a basic justification for ID. It begins from a philosophically neutral position (that some things could really be designed), whereas a committed Darwinian like Dawkins, along with other “committed” materialists, begins with the logically fallacious assumption that design is impossible.

  151. 151
    gpuccio says:

    Silver Asiatic at #138:

    ID science is not limited to the study of biology. ID also looks at the origin of the universe. In that case, ID is making a claim about the origin of time, space and matter. It is not limited to reconfigurations of existing matter.

    That’s correct. The cosmological argument, especially in the form of fine tuning, is certainly part of the ID debate.

    But here I have never discussed the cosmological argument in detail. I think it is a very good argument, but many times I have said that it is different from the biological argument, because it has, inevitably, a more philosophical aspect and implication.

I have always discussed the biological argument of ID here, and it has also been the main object of discussion, I believe, since the ID movement started. Dembski, Behe, Meyer, Abel, Berlinski and others usually refer mainly to the biological argument. So I apologize if that created some confusion: all that I say about ID refers to the biological argument. And biological design always happens in space and time.

    You’re trying to blame me for something here, but what you quoted did not answer the question. You avoided answering it when I asked about God also. You say that science can investigate anything that produces observable facts. You explain that by saying science can only make inferences from observable effects. As I said before, those two ideas contradict. In the first (bolded) you say that science can investigate “the producer” of the facts. You then shame me for asking why ID cannot investigate the designer by saying that ID can investigate the observable effects. As I said above, you corrected your first statement with the second – but you should not have blamed me for something that merely pointed to the conflict here.

    As I have explained, there is no conflict at all. Of course the word “investigate” refers both to the analysis of facts and to the building of hypotheses. Every action of the mind in relation to science is an “investigation” and an “evaluation”, IOWs a cognitive activity in search of some truth about reality.

    I think I have been clear enough at #128:

    “The correct answer is always the same: science can, and must, investigate, everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations.”

    That should be clear, even to you. There are no limitations. If a concept of god were necessary to build a better scientific model of reality that explains observed things, there is no problem: god can be included in that model.

But I refuse, and always will refuse, in a scientific discussion, to start from some philosophical or religious idea of God and allow, without any conscious resistance on my part, such an idea to influence my scientific reasoning. Science should work, or try to work, independently of any pre-conceived worldview. If scientific reasoning leads to the inclusion, or to the exclusion, of God in a good map of reality, scientific reasoning should follow that line of thought and impartially test it. The opposite is not good, IMO.

    I hope that’s clear enough.

    I’m not trying to trick or trap you or win anything. You make a statement that contradicts everything I had known about ID, as well as what contradicts science itself (that science can investigate anything that produces observations). I’m not really worried about your personal views on these things, I was just interested in what seemed to be a confused approach to the issue.

    Neither am I. I am trying to clarify. When I don’t understand well what my interlocutor is saying, I ask. When they ask me, I answer. That’s the way.

It’s strange that my statements contradict everything you have known of ID. My application of the ID procedure for design inference is very standard, maybe with some more explicit definitions. About God, an issue that I never discuss here for the reasons I have given, it is rather clear that the official ID movement unanimously states that the design inference from biology tells nothing about God. Indeed, ID defenders are usually reluctant to say anything about the biological designer.

    I want to clarify well my position about that, even if I have been explicit many times here.

1) I absolutely agree with the idea that there is no need to say anything about the designer to make a valid design inference. This is a pillar of ID thought, and it is perfectly correct. I often say that the designer can only be described as some conscious, intelligent and purposeful agent. But that is implicit in the definition of design; it is not in any way something we infer about any specific designer.

2) That said, I have always been available here, maybe more than other ID defenders, to make reasonable hypotheses about the biological designer to the extent that those hypotheses can be reasonably derived from known facts. That’s what I have done at #100 and #101, trying to answer a number of questions that you had asked. I know very well that trying to reason scientifically about those issues is always a sensitive matter, both for those in my field and for those in the other. Or maybe just in between. But I do believe that science must pursue all possible avenues of thought, provided that we always start from observable facts and are honest in building our theories.

    Knowing that, I have also added, at the end of post #101:

    “That’s the best I can do to answer your questions. Just a note: my answers here are highly tentative, but they are strictly empirical. They do not derive from any theological agenda. These are the ideas that, in my opinion, are more reasonable to explain the facts as we know them. Right or wrong that they may be, this is the spirit in which I express them.”

    I can only repeat my statement: That’s the best I can do to answer your questions.

    More in next post.

  152. 152
    Silver Asiatic says:

    GP

    My error was probably to use the word “investigate”, which was ambiguous enought to allow you to play with it.

    I wasn’t “playing” with it. I was helping you clarify your statement. I’m not trying to say gotcha. I sincerely thought you believed that science could investigate (directly evaluate, measure, analyze) anything (like God) that produces observable facts.
    I kept in mind that you said that science is not limited by matter. I’d conclude from that a belief that science can investigate (evaluate, analyze, measure, observe, describe) immaterial entities. You cited a philosophy of science to support that view. How am I supposed to know what you are thinking of? I asked you if science could “investigate” God, but you didn’t want to answer that.

    Again, normally IDists would not say that science can Directly investigate, evaluate, analyze, measure or describe immaterial entities. You seem to disagree with that.

    Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.

    Evaluation is not the gathering of facts. Collecting facts comes from observation, measurement, or investigation. Evaluation can create some facts (such as logical conclusions) but in science it all must start with observation. After that, we can evaluate. To infer is to draw a logical conclusion from observations and evaluation.

    As I have heard other ID theorists state, ID cannot observe anything about an immaterial designer or designers. I think you disagree with this. The only thing ID attempts to do is show that there is evidence of Intelligence at work. The effects that we observe in nature could have been produced by millions of designers, each one of which has less intelligence than a human being, but collectively create design in nature. If you are speaking about a designer that exists outside of space and time, then we do not have any experience with that.
    We can observe various effects, but not the entity itself.
    It seemed that you disagree with this and believe instead that science can directly observe an immaterial designer (or any immaterial entity) that produces effects in reality.

  153. 153
    gpuccio says:

    Silver Asiatic at #138:

    Let’s see your last statements.

    As above, the designer we refer to in ID is the designer of the universe, not merely of biological information.

    That’s not correct. As said, the inference of a designer for the universe, and the inference of a biological designer are both part of ID, but they are different and use completely different observed facts. Therefore, even if both are correct (which I do believe), there is no need that the designer of the universe is the same designer as the designer of biological information. I don’t follow your logic.

    We infer something about the generation of consciousness.

    ??? Again, I can’t follow you. Who is “we”? I am not aware that ID, especially in its biological form, but probably also in the cosmological form, is inferring anything about “the generation of consciousness”. Why do you say that?

    In fact, the immaterial quality of consciousness is evidence in support of ID.

No. Big epistemological errors here. Consciousness is a fact, because we can directly observe it. Since it is a fact, anyone can use its existence as evidence for whatever one likes.

But “the immaterial quality of consciousness” is a theory, not a fact. It’s a theory that I accept in my worldview and philosophy, but I would not say that we have incontrovertible scientific evidence for it. Maybe strong scientific evidence, at best. But the important point is: a theory is not a fact. It is never evidence of anything. A theory, however good, needs the support of facts as evidence. It is not evidence for other theories. At most, it is more or less compatible with them.

    We look for the origin of that which we can observe.

Correct, and as consciousness can be observed, it is perfectly reasonable to look for some scientific theory that explains its origin. But that theory is not ID. As I have said, ID is not a theory about the origin of consciousness. It is a theory that says that conscious agents are the origin of designed objects. I believe that you can see the difference.

    Mainstream evolution already assumes that consciousness is an evolutionary development.

    Mainstream evolution assumes a lot of things. Most of them are wrong. And so?

    I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design.

    Who? Where? As far as I know, complex specified information (or complex functional information) in objects has always been considered the mark of design. Dembski, Behe, Abel, Meyer, Berlinski, and so on.

    Consciousness separates humans from non-human animals.

    ??? Why do you say that? I believe that a cat or a dog are conscious. And I think that most ID thinkers would agree.

Ask ET about bears! 🙂

    Evolutionary theory offers an explanation, and ID (not your version of ID but others) offers an opposing one.

    An explanation for what? For the origin of consciousness? But what ID sources have you been perusing?

    One of the most famous ID icons is the bacterial flagellum, since Behe used it to explain the concept of irreducible complexity (a concept linked to functional complexity). Is that an explanation of human consciousness? I can’t see how.

    Meyer has written a whole book about OOL and a whole book about the Cambrian explosion. Are those theories about the origin of human consciousness?

Of course ID thinkers certainly believe that some special human functions, like reason, are linked to the specific design of humans. But it is equally true that the special functions of bacteria (like the CRISPR system) are certainly linked to the specific design of bacteria. The design inference is perfectly valid in both cases.

But consciousness is not “a function”. It is much more. It is a component of reality that we cannot in any way explain by objective configurations of external things. ID is not a theory of consciousness.

  154. 154
    gpuccio says:

    Jawa at #149:

    Maybe translucent OPs. 🙂

  155. 155
    Silver Asiatic says:

    JAD

    ID, on the other hand, is at the very least a philosophical inference from the study of nature itself.

    It’s a complicated issue and I can see where you are going with this. At the same time, I think many prominent IDists will say that ID is not a philosophical inference. It’s a scientific inference from what science already knows about the power of intelligence. So, something is observed that appears to be the product of intelligent design, then science evaluates the probability that it came from natural causes. If that probability is too remote, intelligent design becomes the best answer since we know that intelligence can design things like that which has been observed.

    On the other hand, with your view, there are different philosophical starting points for both ID and Dawkins. So, depending on what we mean it may be correct to say that ID is really a philosophical inference. It’s a different philosophy of science than that of Dawkins. I think Dembski and Meyer would disagree with this. They have attempted to show that ID uses exactly the same science as Dawkins does.

  156. 156
    gpuccio says:

    John_a_designer at #150:

    I agree with what you say. I just want to clarify that:

1) IMO Dawkins’ biological arguments are very bad, but at least they are a good incarnation of true neo-darwinism, therefore easy to confute. In that sense, he is better than many post-post-neo-darwinists, whose thoughts are so ethereal that you cannot even catch them! 🙂

2) On the contrary, Dawkins’ philosophical arguments are arrogant, superficial and ignorant. Unbearable. He should stick to being a bad thinker about biology.

    3) To be fair to Dawkins, I don’t think that he assumes that “design is impossible”. On the contrary, he is one of the few who admit that design could be a scientific explanation. He just does not accept it as a valid scientific explanation. That is epistemologically correct, even if of course completely wrong in the essence.

  157. 157
    Silver Asiatic says:

    GP

    ??? Again, I can’t follow you. Who is “we”? I am not aware that ID, especially in its biological form, but probably also in the cosmological form, is inferring anything about “the generation of consciousness”. Why do you say that?

    Your use of multiple question-marks and the personal digs (“even you can understand”) indicate to me that this conversation is getting too heated. You apologized previously, so thank you. I’ll also apologize for the tone of my remarks.

    You asked about ID and consciousness:

    Yet the adequacy of matter to generate agency (or apparent agency) is fundamental to both the problem of consciousness and the problem of the origins of biological complexity. If immaterial explanations are necessary to explain the agency inherent to the mind, then the view that immaterial explanations are necessary to explain the agency apparent in living things gains considerable traction.
    https://evolutionnews.org/2008/12/consciousness_and_intelligent/

    Michael Egnor writes about consciousness as evidence supporting ID. I think here, BornAgain77 often posts resources that support this concept. I understand that your interest is in biological ID, and therefore limited to biological designer or designers.

    You answered my questions adequately. Again, I appreciate your comments and I apologize for any misunderstandings that may have arisen in this conversation.

  158. 158
    jawa says:

    Richard Dawkins’ books should be in the “cheap philosophy” section of bookstores. But instead they have them in the Science section.
Especially after Professor Denis Noble has discredited them. Bizarre.

  159. 159
    gpuccio says:

    Silver Asiatic at #152:

    I wasn’t “playing” with it. I was helping you clarify your statement.

    Well, I hope I have clarified it. Thank you for the help.

    Again, normally IDists would not say that science can Directly investigate, evaluate, analyze, measure or describe immaterial entities. You seem to disagree with that.

    Well, it seems that I have not clarified enough. Please, read again what I have written. Here are some more clues:

    1) “investigate, evaluate, analyze, measure or describe” are probably too many different words. I quote myself:

    “But science tries to explain facts building theories (maps of reality). Those theories need not include only what is observable. They just need to explain observed facts. For example, most scientific theories are based on mathematics, which is not something observable.

Another example. Most theories in empirical science are about possible relationships of cause and effect. But the relationship of cause and effect is not something that can be observed.

My error was probably to use the word “investigate”, which was ambiguous enough to allow you to play with it.”

So, again. Science starts with facts: what can be observed. “Measurements” are only made on what can be observed. I suppose that all your fancy words can apply to our interaction with facts:

– When we gather facts and observe their properties, it can be said, I suppose, that we are “investigating” facts, and “analyzing” them. And “evaluating” them or “describing” them. And of course taking measurements is part of observing facts.

– When we build theories to explain observed facts, not all those terms apply. For example, let’s say that we hypothesize a cause and effect relationship. That is part of our theory, but we don’t take measurements of the cause-effect relationship. At most, we infer it from the measurements we have taken of facts. But in a wide sense building a theory can be considered an evaluation; certainly it is a form of investigation.

    I have said clearly that we can use any possible concept in our theories, provided that the purpose is to explain facts. We use the cause-effect relationship, we use complex numbers in quantum mechanics, we can in principle use the concept of God, if useful. Or of immaterial entities. That does not mean that we can measure those things, or have further information about them except for what can be reasonably inferred from facts.

That should be clear but, I don’t know why, I will not be surprised if again you don’t understand.

    Evaluation is not the gathering of facts. Collecting facts comes from observation, measurement, or investigation. Evaluation can create some facts (such as logical conclusions) but in science it all must start with observation. After that, we can evaluate. To infer is to draw a logical conclusion from observations and evaluation.

As you like. As said, it’s not a problem about words. If you want to limit “evaluation” in some way that is not very clear to me, be my guest. I will simply avoid the word with you.

    But please, note that logical conclusions are not facts. If you insist on that kind of epistemology, we cannot really communicate.

    As I have heard other ID theorists state, ID cannot observe anything about an immaterial designer or designers. I think you disagree with this.

    No. Why should I? Of course if a thing is immaterial it cannot be “observed”. The only exception is our personal consciousness, that each of us observes directly, intuitively.

I have only said that we can use the concept of immaterial entities in our theories, and that we can make inferences about the designer from observed facts, be he material or immaterial.

    The only thing ID attempts to do is show that there is evidence of Intelligence at work.

    Of intelligent designers.

    The effects that we observe in nature could have been produced by millions of designers, each one of which has less intelligence than a human being, but collectively create design in nature.

    I absolutely disagree. ATP synthase could never have been designed by a crowd of stupid designers. It’s the first time I hear such a silly idea.

    If you are speaking about a designer that exists outside of space and time, then we do not have any experience with that.

    I have never said that. I have said many times that the designer acts in space and time. Where he exists, I really don’t know. Have you some information about that?

    We can observe various effects, but not the entity itself.

That’s right. Like dark energy or dark matter. As for that, we cannot even observe conscious representations in anyone else except ourselves, but still we very much base our science and map of reality on their effects and the inference that they exist.

    It seemed that you disagree with this and believe instead that science can directly observe an immaterial designer (or any immaterial entity) that produces effects in reality.

    This is only your unwarranted misinterpretation. I have said many times that science can directly observe some effects and infer a designer, maybe immaterial. It’s exactly the other way round.

  160. 160
    gpuccio says:

    Silver Asiatic at #157:

    OK, I apologize too. Multiple question marks are not intended as an offense, only as an expression of true amazement. Some other statements may have been a little more “heated”, as you say. Let’s try to be more detached. 🙂

    I have just finished commenting on your statements. Please, forgive any possible question marks or tones. My purpose is always, however, to clarify.

    I am afraid that Egnor and BA are not exactly my main reference for ID theory. I always quote my main references:

    Dembski (with whom, however, I have sometimes a few problems, but whose genius and importance for ID theory cannot be overestimated)

    Behe, with whom I agree (almost) always.

    Abel, who has given a few precious intuitions, at least to me.

Berlinski, who has entertained me a lot with creative and funny thoughts.

    Meyer, who has done very good work about OOL and the Cambrian explosion.

And, of course, others. Including many friends here. Let me quote at least KF and UB for the many precious contributions, but of course there are a lot more, and I hope nobody feels excluded: it would be a big job to give a coherent list.

  161. 161
    john_a_designer says:

    SA,

    Science itself rests on a number of empirically unprovable or metaphysical (philosophical) assumptions. For example:

That we exist in a real spatio-temporal world– that the world (the cosmos) is not an illusion and we are not “brains in a vat” in some kind of Matrix-like virtual reality.

    That the laws of nature are universal throughout time and space.

Or that there are really causal connections between things and things, people and things. David Hume famously argued that that wasn’t self-evidently true. Indeed, in some cases it isn’t. Sometimes there is correlation without causation, or “just coincidence.”

Again, notice the logic Dawkins wants us to accept. He wants us to implicitly accept his premise that living things only have the appearance of being designed. But how do we know that premise is true? Is it self-evidently true? I think not. Why can’t it be true that living things appear to be designed for a purpose because they really have been designed for a purpose? Is that logically impossible? Metaphysically impossible? Scientifically impossible? If one cannot answer those questions then design cannot be eliminated from consideration or the discussion. Therefore, it is a legitimate inference from the empirical (scientific) evidence.

I have said this here before: the burden of proof is on those who believe that some mindless, purposeless process can “create” a planned and purposeful (teleological) self-replicating system capable of evolving further through purposeless, mindless processes (at least until it “creates” something purposeful, because, according to Dawkins, living things appear to be purposeful). Frankly, this is something our regular interlocutors consistently and persistently fail to do.

As a theist I do not claim I can prove (at least in an absolute sense) that my world view is true. Can naturalists/materialists prove that their world view is true? Personally I believe that all worldviews rest on unprovable assumptions. No one can prove that their world view is true. Is that true of naturalism/materialism? If it can be proven, someone with that world view needs to step forward and provide the proof.

As to whether or not ID is science: I am skeptical of the claim that Darwinism in the macro-evolutionary sense is science, or that SETI is science (what empirical evidence is there that ETIs exist?). How does NS + RV cause macro-evolutionary change? Science needs to answer the question of how. Just saying “oh, somehow it could” with an airy wave of the hand is not a sufficient explanation. But that applies to people on both sides of the debate.

  162. 162
    Silver Asiatic says:

    SA: I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design.
    GP: Who? Where? As far as I know, complex specified information (or complex functional information) in objects has always been considered the mark of design. Dembski, Behe, Abel, Meyer, Berlinski, and so on.

    ID and Neuroscience
    https://uncommondescent.com/intelligent-design/id-and-neuroscience/
    My good friend and colleague Jeffrey Schwartz (along with Mario Beauregard and Henry Stapp) has just published a paper in the Philosophical Transactions of the Royal Society that challenges the materialism endemic to so much of contemporary neuroscience. By contrast, it argues for the irreducibility of mind (and therefore intelligence) to material mechanisms.
    William Dembski

  163. 163
    Silver Asiatic says:

    “CSI is a reliable indicator of design” — William Dembski
    “it is CSI on which David Chalmers hopes to base a comprehensive theory of human consciousness.” — William Dembski

    https://www.asa3.org/ASA/PSCF/1997/PSCF9-97Dembski.html

  164. 164
    Silver Asiatic says:

    JAD

    Science itself rests on a number of empirically unprovable or metaphysical (philosophical) assumptions.

    Agreed. Science does not stand alone as a self-evident process. It is dependent upon philosophical assumptions. Dawkins has his own assumptions. If he said, for example, that science can only accept material causes for all of reality, that is just his philosophical view. If ID says that science can accept immaterial causes, then it is different science.
    A person might also say that science must accept that God exists. That’s a philosophical starting point.
    In the end, people who do science are carrying out a philosophical project.
    If a person is willing to do enough philosophy to carry out the project of science, I believe they have the responsibility to carry the philosophy farther than science. The philosophical questions go beyond simply what causes we can accept.
    But people like Dawkins and others do not accept this. They think that science simply has one set of rules, and they claim to be the ones following the true scientific rules, as if those rules always existed.
    Some IDists have tried to convince the world that ID is just following the normal, accepted rules of science and that people do not need to accept a new kind of science in order to accept ID conclusions.
    Others will say that mainstream science itself is incorrect and that people need a different kind of science in order to understand ID.
    I think ID will even work with Dawkins’ version of science. He may say that “only material causes” can be considered. So, we observe intelligence and so some material cause created the intelligent output? The question for Dawkins would be what material cause creates intelligent outputs?

  165. 165
    gpuccio says:

    Silver Asiatic:

    Theory of consciousness is a fascinating issue. A philosophical issue which, like all philosophical issues, can certainly use some scientific findings. I have my ideas about theory of consciousness, and sometimes I have discussed some of them here. But ID is not a theory of consciousness.

But it is true that ID is the first scientific way to detect something that only consciousness can do: generate complex functional information. In this sense, the results of ID are certainly important to any theory of consciousness. The simple fact that there is something that only consciousness can do, and that there is a scientific way to detect it, is certainly important. It also tells us that consciousness can do things that no non-conscious algorithm, however intelligent or complex, can do.

    I usually say that some properties of conscious experiences, like the experience of understanding meaning and of feeling purposes, are the best rationale to explain why conscious agents can generate complex functional information while non conscious systems cannot. But again, ID is not a theory of consciousness.

All spheres of human cognition are interrelated: religion, philosophy, science, art, everything. But each of those things has its own specificity.

    ID theory will probably be, in the future, part of a theory of consciousness, if and when we can develop a scientific approach to it. But at present it is only a theory about how to detect a specific product of consciousness, complex functional information, in material objects.

Jeffrey Schwartz and Mario Beauregard are neuroscientists who have dealt brilliantly with the problem of consciousness. The Spiritual Brain is a very good book. Chalmers is a philosopher who has given us a precious intuition with his concept of the hard problem of consciousness.

None of those approaches, however, comes anywhere near to understanding anything about the “origin” of consciousness. Least of all ID.

I am absolutely certain that consciousness is in essence immaterial. But that is my philosophical conviction. The best scientific evidence that I can imagine for that are NDEs, and they are not related to ID theory.

  166. 166
    john_a_designer says:

    Gp @ #156,

    To be fair to Dawkins, I don’t think that he assumes that “design is impossible”. On the contrary, he is one of the few who admit that design could be a scientific explanation. He just does not accept it as a valid scientific explanation. That is epistemologically correct, even if of course completely wrong in the essence.

    Indeed, here is another stunning admission by Richard Dawkins:

    https://www.youtube.com/watch?v=BoncJBrrdQ8

    Dawkins concedes that (because nobody knows) first life on earth could have been intelligently designed– as long as it was an ET intelligence not an eternally existing transcendent Mind (God.)

    Of course other atheists have admitted the same thing. See the following article which refers to a paper written by Francis Crick and British chemist Leslie Orgel.

    https://blogs.scientificamerican.com/guest-blog/the-origins-of-directed-panspermia/

    I believe it was Crick and Orgel who coined the term directed panspermia.

    To be fair I think Dawkins later tried to walk back his position. Maybe Crick and Orgel did as well. But the point remains, until you prove how life first originated by mindless, purposeless “natural causes” intelligent design is a logical possibility– a very viable possibility.

    Ironically, in the Ben Stein interview Dawkins said that if life were intelligently designed (by space aliens) the scientific research may be able to discover their signature. Didn’t someone write a book about the origin of life with the word signature in the title? Who was that? I wonder if he picked up the idea from Dawkins. Does anyone know?

    Bonus question: Ben Stein was made famous by one word. Does anyone know what that one word was? Anyone?

  167. 167
    Silver Asiatic says:

    GP

    But it is true that ID is the first scientifc way to detect something that only conciousness can do: generate complex functional information.

    What I have been doing is questioning what ID can or cannot do and even questioning scientific assumptions along the lines of the ideas you’ve posted. You have explained your views on design and how consciousness is involved and even on whether the actions of conscious mind can be considered “creative acts”, as well as how we evaluate immaterial entities.
    I have always argued that ID is a scientific project but I could reconsider that. ID does not need to be scientific to have value. I’ll respond to JAD in the next post with some thoughts that I question myself on and just respond to his feedback, but your definitions of science and ID will also be included in my considerations.

  168. 168
    Silver Asiatic says:

    JAD

    Bonus question: Ben Stein was made famous by one word. Does anyone know what that one word was? Anyone?

    The kid in the movie – can’t remember his name. Travis?

As to whether or not ID is science: I am skeptical of the claim that Darwinism in the macro-evolutionary sense is science, or that SETI is science (what empirical evidence is there that ETIs exist?). How does NS + RV cause macro-evolutionary change? Science needs to answer the question of how. Just saying “oh, somehow it could” with an airy wave of the hand is not a sufficient explanation. But that applies to people on both sides of the debate.

    It’s a great point.
    I have argued for many years that ID is science. By that, I mean “the same science as Dawkins uses”. It is my belief that 90% of the scientists agree with Dawkins’ view of science – it’s the mainstream view.
    I also believed that ID was a subterfuge – an apologetic for the existence of God. I don’t see anything wrong with that.
    ID was going to use the exact same science that Dawkins uses, and then show that there is evidence of intelligent design. The method for doing that is to show that proposed natural mechanisms (RM + NS) cannot produce the observed effects. Intelligence can produce them, so Intelligence is the best, most probable inference.

    However, what I learned from many IDists over the years (GP pointed it out to me just previously) is that to accept ID, one needs a different science than what Dawkins uses. I find that to be a big problem. If, in order to accept ID, a person first needs “a different kind of science” than the normal, mainstream science of Dawkins, then there’s no reason to start talking about ID first. Instead, one should start to convince everyone that a different kind of science should be used throughout the world.

    Because for me, Dawkins’ version of science is fine. He just does what mainstream science does. They look at observations, collect data, propose causes. The first problem is that Dawkins’ mechanisms cannot produce the observed effects. So, even on his own terms, the science fails.

    However, when Dawkins says that science can only accept material causes, that doesn’t make a lot of sense – as you have pointed out. Additionally, he’s talking about a philosophical view.

    In that case, it is one philosophy versus another. The philosophy of ID vs Dawkins’ philosophical view. We can’t speak about science at that point.

    So, I hate to admit it because so many of my opponents over the years said this and I disagreed, but I do now accept that ID has always been a game to introduce God into the closed world of materialistic science. The difference in my view now is that I don’t see anything wrong with that game. Why not try to put God in science? What’s wrong with that? If the only way to do this is to trick materialist scientists using their own words, concepts and reasoning, again – what’s wrong with that? Dishonest? I don’t think so. The motive for using a certain methodology (ID in this case) has no bearing on what the methodology shows. In the same way, it doesn’t matter what belief an evolutionist has, they have to show that the observations can be explained from their theory.

    If, however, ID requires an entirely different science and philosophical view (that is possible also), then I don’t really see much need for the discussion on whether ID is a science or not. Why not just start with the idea that God exists, and then use ID observations to support that view? I don’t see why that is a problem. If IDists are saying “we don’t accept mainstream science”, then why appeal to mainstream science for credibility? Just create your own ID-science. But for me, I’m a religious believer with philosophical reasons for believing in God (as the best inference from facts and far more rational than atheism) so instead of trying to prove to everyone that we need a new science, I’d just start with God and then do science from that basis.

    That’s the way it would be if ID is not science.
    If, however, ID is science, for me that means “ID is the same science that Dawkins and all mainstream scientists use”. The inferences from ID can be shown using exactly the same data and observations that Dawkins uses.
    For me, that would give ID a lot more value.

  169. 169
    john_a_designer says:

    SA,

[The following is something I posted on UD before which defines my position about I.D. Please note, however, I see it as nothing more than a personal opinion and I am not stating it in an attempt to change anyone’s mind. Indeed it remains tentative and subject to change, but over the years I have seen no reason to change it.]

Even though I think I.D. provokes some interesting questions, I am actually not an I.D. proponent in the same sense that several other commenters here are. I don’t think I.D. is “science” (the empirical study of the natural world) any more than naturalism/materialism is science. So questions from materialists, like “who designed the designer,” are not scientific questions; they are philosophical and/or theological questions. However, many of the questions have philosophical/theological answers. For example, the theist would answer the question, “who designed the designer,” by arguing that the designer (God) always existed. The materialist can’t honestly reject that explanation because historically materialism has believed that the universe has always existed. Presently they are trying to shoehorn the multiverse into the discussion to get around the problem of the Big Bang. Of course, this is a problem because there is absolutely no scientific evidence for the existence of a multiverse. In other words, it is just an arbitrary ad hoc explanation used in an attempt to try to wiggle out of a legitimate philosophical question.

    However, this is not to say that science can’t provoke some important philosophical and theological questions– questions which at present can’t be answered scientifically.

    For example:

Scientifically it appears the universe is about 13.8 billion years old. Who or what caused the universe to come into existence? If it was “a what”– just natural causes– how do we know that?

Why does the universe appear to exhibit teleology, or design and purpose? In other words, what is the explanation for the universe’s so-called fine-tuning?

    How did chemistry create the code in DNA or RNA?

How does mindless matter “create” consciousness and mind? If consciousness and mind are “just an appearance,” how do we know that?

    These are questions that arise out of science which are philosophical and/or theological questions. Is it possible that they could have scientific explanations? Possibly. But even if someday some of them could be answered scientifically that doesn’t make them at present illegitimate philosophical/theological questions, because we don’t know if they have, or ever could have, scientific answers.

    As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is sufficient as a world view.

    Naturalism (or materialism) cannot provide:

    *1. An ultimate explanation for existence. Why does anything at all exist?

    *2. An explanation for the nature of existence. Why does the universe appear to exhibit teleology, or Design and Purpose?

    *3. A sufficient foundation for truth, knowledge and meaning.

    *4. A sufficient foundation for moral values and obligations.

    *5. An explanation for what Aristotle called form and what we call information. Specifically how did chemistry create the code in DNA or RNA?

*6. An explanation for mind and consciousness. How does mindless matter “create” consciousness and mind? If consciousness and mind are just an appearance, how do we know that?

    *7. An explanation for the apparently innate belief in the spiritual– a belief in God or gods, and the desire for immortality and transcendence.

    Of course the atheistic naturalist will dismiss numbers 6 or 7 as illusions and make up a just-so story to explain them away. But how do they know they are illusions? The truth is they really don’t know and they certainly cannot prove that they are. They just believe. How ironic to be an atheist/naturalist/ materialist you must believe a lot– well actually everything– on the basis of faith.

  170. 170
    PeterA says:

    JAD @169:

    “As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is insufficient as a world view”

    “do not think” “is insufficient”

    Is that the combination you wanted to express?

    I’m not sure if I understood it.

  171. 171
    gpuccio says:

    John_a_designer at #166:

I agree with what you say about Dawkins. He is probably honest enough, even if completely wrong, but he is really obsessed by his antireligious crusade.

    The book you mention is “Signature in the Cell” by Stephen Meyer.

  172. 172
    gpuccio says:

    John_a_designer at #169:

    I agree with almost everything that you say, except of course that ID is not science. For me, it is science without any doubt. It has, of course, important philosophical implications, like many other important scientific theories (Big Bang, Quantum mechanics, Relativity, Dark energy, and so on).

  173. 173
    john_a_designer says:

    Peter A

    Final edit:

    “As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is sufficient as a world view.”

    That is what I meant to say and luckily corrected before the edit function timed out. Hopefully that makes sense now.

  174. 174
    john_a_designer says:

    Just to clarify, it’s not my view that ID doesn’t raise some very legitimate scientific questions. Behe’s discovery of irreducible complexity (IC) raises some important questions.

    For example, in his book Darwin’s Black Box, Michael Behe asks,

    “Might there be an as yet undiscovered natural process that would explain biochemical complexity? No one would be foolish enough to categorically deny the possibility. Nonetheless we can say that if there is such a process, no one has a clue how it would work. Further it would go against all human experience, like postulating that a natural process might explain computers… In the face of the massive evidence we do have for biochemical design, ignoring the evidence in the name of a phantom process would be to play the role of detective who ignore the elephant.” (p. 203-204)

    Basically Behe is asking, if biochemical complexity (irreducible complexity) evolved by some natural process x, how did it evolve? That is a perfectly legitimate scientific question. Notice that even though in DBB Behe was criticizing Neo-Darwinism he is not ruling out a priori some other mindless natural evolutionary process, “x”, might be able to explain IC.

Behe is simply claiming that at present there is no known natural process that can explain how irreducibly complex mechanisms and processes originated. If he and other ID’ists are categorically wrong, then our critics need to provide the step-by-step-by-step empirical explanation of how they originated, not just speculation and wishful thinking. Unfortunately our regular interlocutors seem to only be able to provide the latter, not the former.

    Behe made another point which is worth keeping in mind.

    “In the abstract, it might be tempting to imagine that irreducible complexity simply requires multiple simultaneous mutations – that evolution might be far chancier than we thought, but still possible. Such an appeal to brute luck can never be refuted… Luck is metaphysical speculation; scientific explanations invoke causes.”

    In other words, a strongly held metaphysical belief is not a scientific explanation.

So why does Neo-Darwinism persist? I believe it is because of its a-priori ideological or philosophical fit with naturalistic or materialistic world views. Human beings are hard-wired to believe in something– anything to explain or make some sense of our existence. Unfortunately we also have a strong tendency to believe in a lot of untrue things.

On the other hand, if IC is the result of design, ID has to answer the question of how the design was instantiated. If ID wants to have a place at the table it has to find a way to answer questions like that. Once again, one of the primary things science is about is answering the “how” questions.

Or as another example, ID’ists argue that the so-called Cambrian explosion can be better explained by an infusion of design. Okay, that is possible. (Of course, I wholeheartedly agree because I am very sympathetic to the concept of ID.) But how was the design infused to cause a sudden diversification of body plans? Did the “designer” tinker with the genomes of simpler life forms, or were they specially created, as some creationists would argue? (The so-called interventionist view.) Or were the new body plans somehow pre-programmed into their progenitors’ genomes (so-called front loading)? How do you begin to answer such questions about events that happened in the distant past? At least the Neo-Darwinists have the pretense of an explanation. Can we get them to abandon their theory by declaring it impossible? Isn’t it at least possible, as Behe acknowledges, that there could be some other unknown natural explanation “x”?

Is saying something is metaphysically possible a scientific explanation? The goal of science is to find some kind of provisional proof or compelling evidence. Why, for example, was the Large Hadron Collider built at the cost of billions of dollars (how much was it in euros)? Obviously it was because in science mere possibility is not the end of the line. The ultimate quest of science is truth and knowledge. Of course, we need to concede that science will never be able to explain everything.

  175. 175
    PeterA says:

    JAD @173,

    Yes, that makes much sense.

  176. 176
    PavelU says:

    OLV @139:

    The paper you cited doesn’t seem to support Behe’s polar bear argument.

  177. 177
    john_a_designer says:

    A few years ago here at UD one of our regular interlocutors who was arguing with me about the ID explanation for origin of life pointed out:

    the inference from that evidence to intelligence being involved is really indirect. You don’t have any other evidence for the existence of an intelligence during the times it would need to be around.

    I responded,

    “We have absolutely no evidence as to how first self-replicating living cell originated abiogenetically (from non-life). So following your arbitrarily made-up standard that’s not a logical possibility, so we shouldn’t even consider it… As the saying goes, ‘sauce for the goose is sauce for the gander.’”

When you argue that life originated by some “mindless natural process,” that is not an explanation of how. Life is not presently coming into existence abiogenetically, so if such a process existed in the past it no longer exists in the present. Therefore you are committing the same error which you accuse ID’ists of committing. That’s a double standard, is it not?

    This kind of reasoning on the part of materialists also reveals that they don’t really have any strong arguments based on reason, logic and the evidence. If they do, why are they holding back?

  178. 178
    gpuccio says:

    John_a_designer at #177:

    Exactly!

    That’s why I say that ID is fully scientific.

    Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview.

That reality must behave according to our religious convictions is an a priori worldview. That’s why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasonings.

That reality must behave according to our atheistic or materialistic convictions is an a priori worldview. That’s why our kind interlocutors should strive a lot to avoid, as much as humanly possible, any influence of their philosophy or atheology on their scientific reasonings.

    The simple fact is that ID theory, reasoning from facts in a perfectly scientific way, infers a process of design for the origin of biological objects.

    Now, our interlocutors can debate if our arguments are right or wrong from a scientific point of view. That’s part of the scientific debate.

But the simple idea that we have no other evidence of the existence of a conscious agent, for example, at the time of OOL is not enough, because we have no evidence of the contrary either.

The simple idea that non-physical conscious agents cannot exist is not enough, because it is only a specific philosophical conviction. Of course non-physical conscious agents can exist. We don’t even know what consciousness is, least of all how it works and what is necessary for its existence.

My point is: the design inference is real and perfectly scientific. All arguments about things that we don’t know are no reason to ignore that scientific inference. They are certainly valid reasons to pursue any further scientific investigation to increase our knowledge about those things. That’s perfectly legitimate.

    For example, I am convinced that our rapidly growing understanding of biology will certainly help to understand how the design was implemented at various times.

    And, even if ID is not a theory of consciousness, there is no doubt that future theories of consciousness can integrate ID and its results. For example, much can be done to understand better if a quantum interface between conscious representations and physical events is working in us humans, as many have proposed and as I believe. That same model could be applied to biological design in natural history.

    And of course, philosophy, physics, biophysics and what else can certainly contribute to a better understanding of consciousness, and of its role in reality.

    A better study of common events like NDEs can certainly contribute to understand what consciousness is.

    I would like to repeat here a statement that I have made in the discussion with Silver Asiatic, which sums up well my position about science:

    Science can, and must, investigate everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations.

  179. 179
    Sven Mil says:

    Interesting conversation here,

    ‘sigma factor “is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB”.’

    “I have blasted sigma 70 from E. coli with human TFIIB and found no detectable homology (E value 1.4). So, there seems to be little conservation here.”

    Is there an explanation for this disagreement?

  180. 180
    gpuccio says:

    Sven Mil:

    “Is there an explanation for this disagreement?”

    Thank you for the comment and welcome to the discussion.

    Thank you also for addressing an interesting and specific technical point.

    It is not really a disagreement, probably only a different perspective.

    Researchers interested in possible homologies (IOWs, in finding orthologs or paralogs for some gene) often use very sensitive algorithms. They find homologies that are often very weak, or maybe not real. Or they may look at structural homologies, which are not evident at the sequence level.

    My point of view is different. In order to debate ID in biology, I am only interested in definite homologies, possibly very high homologies conserved for a long evolutionary time. My aim is specificity, not sensitivity. Moreover, as I accept CD (as discussed in detail in this thread) I have no interest in denying possible weak homologies. I just ignore them, because they are not relevant to my argument.

    That’s why I always measure homology differences, not absolute homologies. I want to find information jumps at definite evolutionary times.

    Another possibility for the different result is that I have not blasted the right protein form. For brevity (it was not really an important aspect of my discussion) I have not blasted all possible forms of sigma factors against eukaryotic factor TFIIB. I have just blasted sigma 70 from E. coli. Maybe a more complete search could detect some higher homology.
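    For anyone who wants to repeat this kind of pairwise check, here is a minimal sketch using the standard NCBI BLAST+ command-line tool wrapped in Python. It assumes blastp is installed and that the two sequences have been saved locally under the (hypothetical) file names shown; it only illustrates the kind of comparison described, it is not the exact command I ran.

```python
# Minimal sketch: pairwise blastp of E. coli sigma 70 against human TFIIB.
# Assumes NCBI BLAST+ is installed; the FASTA file names are hypothetical.
import subprocess

result = subprocess.run(
    ["blastp",
     "-query", "sigma70_ecoli.fasta",               # hypothetical local file
     "-subject", "tfiib_human.fasta",               # hypothetical local file
     "-outfmt", "6 bitscore evalue pident length",  # tabular output, one line per local alignment
     "-evalue", "10"],                              # permissive cutoff, so even weak hits are reported
    capture_output=True, text=True, check=True)

if not result.stdout.strip():
    print("No alignment found, even at a permissive E-value cutoff.")
for line in result.stdout.splitlines():
    bitscore, evalue, pident, length = line.split("\t")
    print(f"bitscore={bitscore}  E={evalue}  identity={pident}%  aln_len={length}")
```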

    OK, as you have raised the question, I have just checked the literature reference in the Wikipedia page:

    The sigma enigma: Bacterial sigma factors, archaeal TFB and eukaryotic TFIIB are homologs

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4581349/

    Abstract
    Structural comparisons of initiating RNA polymerase complexes and structure-based amino acid sequence alignments of general transcription initiation factors (eukaryotic TFIIB, archaeal TFB and bacterial sigma factors) show that these proteins are homologs. TFIIB and TFB each have two five-helix cyclin-like repeats (CLRs) that include a C-terminal helix-turn-helix (HTH) motif (CLR/HTH domains). Four homologous HTH motifs are present in bacterial sigma factors that are relics of CLR/HTH domains. Sequence similarities clarify models for sigma factor and TFB/TFIIB evolution and function and suggest models for promoter evolution. Commitment to alternate modes for transcription initiation appears to be a major driver of the divergence of bacteria and archaea.

    As you can see from the abstract, they took into consideration structure similarities, not only sequence alignments.

    Maybe you can have a look at the whole article. Now I don’t think I have the time.

  181. 181
    gpuccio says:

    To all (specially UB):

    One interesting aspect of the NF-kB system discussed here is that, IMO, it can be seen as a polymorphic semiotic system.

    Let’s consider the core of the system: the NF-kB dimers in the cytoplasm, their inhibition by IkB proteins, and their activation by either the canonical or non canonical pathway, with the cooperation of the ubiquitin system. IOWs the central part of the system.

    This part is certainly not simple, and has its articulations, for example the different kinds of dimers that can be activated. However, when looking at the whole system, this part is relatively simple, and it uses a limited number of proteins. In a sense, we can say that there is a basic mechanism that works here, with some important variations.

    Well, like in all the many pathways that carry a signal from the membrane to the nucleus, even in this case we can consider the intermediate pathway (the central core just described) as a semiotic structure: indeed, it connects symbolically a signal to a response. The signal and the response have no direct biochemical association: they are separated, they do not interact directly, there is no direct biochemical law that derives the response from the signal.

    It’s the specific configuration of the central core of the pathway that translates the signal, semiotically coupling it to the response. So, that core can be considered as a semiotic operator that given the operand (the signal) produces the result (the response at nuclear level).

    But in this specific case there is something more: the operator is able to connect multiple operands to multiple specific results, using essentially the same set of tools. IOWs, the NF-kB system behaves as a multiple semiotic operator, or if we want as a polymorphic semiotic operator.

    Now, that is not an exclusive property of this system. Many membrane-nucleus pathways behave, in some measure, in the same way. Biological signals and their associations are never simple and clear-cut.

    But I would say that in the NF-kB system this polymorphic attitude reaches its apotheosis.

    There are many reasons for that:

    a) The system is practically universal: it works in almost all types of cells in the organism.

    b) There is a real multitude of signals and receptors, of very different types. Suffice it to mention cytokine stimuli (TNF, IL1), bacterial or viral components (LPS), specific antigen recognition (BCR, TCR). Moreover, each of these stimuli is connected to the central core by a specific, often very complex, pathway (see the CBM signalosome, for example).

    c) There is a real multitude of responses, in different cells and in the same cell type in different contexts. Even if most of them are in some way related to inflammation, innate immune response or adaptive immune response, there are also responses involved in cell differentiation (for example, in neurons). In B and T cells, for example, the system is involved both in the differentiation of B and T cells and in the immune response of mature B and T cells after antigen recognition.

    This is a really amazing flexibility and polymorphism. A complex semiotic system that implements, with remarkable efficiency, a lot of different functions. This is engineering and programming of the highest quality.
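    Just to make the idea of a “polymorphic semiotic operator” concrete, here is a toy sketch (purely illustrative, not a biochemical model): the central core behaves like a lookup that couples each signal, in each cellular context, to a response by configuration rather than by any direct chemical necessity. The signal names and response labels below are only examples.

```python
# Toy illustration of a polymorphic semiotic operator: one shared core,
# many operand -> result couplings, fixed by configuration, not by chemistry.
# The entries are illustrative only, not an exhaustive or exact mapping.
CORE_CONFIGURATION = {
    ("TNF", "fibroblast"):  "inflammatory gene program",
    ("LPS", "macrophage"):  "innate immune gene program",
    ("antigen", "B cell"):  "activation / differentiation program",
    ("antigen", "T cell"):  "activation / differentiation program",
}

def nfkb_core(signal: str, cell_type: str) -> str:
    """Same operator for every input; change the table and the 'meaning' of a signal changes."""
    return CORE_CONFIGURATION.get((signal, cell_type), "no response")

print(nfkb_core("LPS", "macrophage"))   # -> innate immune gene program
print(nfkb_core("LPS", "fibroblast"))   # -> no response (not wired in this toy table)
```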

  182. 182
    Silver Asiatic says:

    GP

    Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview.

    As I was discussing with JAD, I have always argued that ID is a scientific project. But I am tending now to see it as a philosophical proposition. Your statement above is a philosophical view. You are giving a framework for what you think science should be.
    But science cannot define itself or create its own limits. Science cannot even tell us what “matter” is or what it means for something to be “immaterial”. Those are philosophical concepts. Science also cannot tell us what causes are acceptable. Science cannot tell us that it should not have a commitment to a worldview.
    So, for example, if I wanted to do “my own science”, I could establish rules that I want. Nobody can stop me from that.
    I could have a rule: “For any observation that cannot be explained by known natural causes, we must conclude that God directly created what we observed”.
    There is nothing wrong with that if that is “my science”. Of course, if I want to communicate I would have to convince people to believe in my philosophy of science. But that would have nothing to do with science itself, but rather my efforts to convince people of my philosophical view.
    Now, we could have what we call “Dawkins Science”. I believe that’s what a majority of biologists accept today. Again, it is perfectly legitimate. Dawkins and all others like him will claim “science can only accept natural causes, or material causes”.
    So, they establish rules. Science cannot tell us if those rules are correct or not. It is only philosophy that says it.
    Then ID comes along, and IDists will say “ID is science”.
    Here is where I disagree.
    Whenever we make a sweeping statement about “science” we are talking about “the consensus”.
    If Dawkins is the consensus, then to claim “ID is science” means that it is perfectly compatible with Dawkins’ science.
    If, however, the claim “ID is science” means “you have to accept our version of science to accept ID”, then that’s a mistake.
    Again, to claim something “is science” usually means it is the consensus definition of science.
    To redefine science in any way one wants to, is not a scientific project. It is a philosophical project.
    If ID requires a specific kind of science that allows for non-natural causes, for example, then I would not call ID a scientific project.

    With that, even if science accepted non-natural causes, I would still consider ID to be philosophical. ID uses scientific data, but the conclusions drawn are non-scientific. Only if ID stopped at stating “this is evidence of intelligence” would it be science. But once the conversation moves to the idea that “where there is intelligence, there must be an intelligent designer” – that is philosophical. Science cannot even define what intelligence is. Those definitions are part of the rules of science that come from a philosophical view.
    For example, there could be a pantheistic view that believes that all intelligence emerges from a universal mind which is present in all of reality. So, evidence of intelligence would not mean that there is an Intelligent Designer. It would only mean that the intelligence came from the spirit of the universe which is an impersonal spiritual force and is not a “designer” in that sense.

  183. 183
    Silver Asiatic says:

    GP & JAD
    Here is JAD’s comment on the topic of ID as science:

    Science itself rests on a number of empirically unprovable or metaphysical (philosophical) assumptions. For example:
    That we exist in a real spatio-temporal world– that the world (the cosmos) is not an illusion and we are not “brains in a vat” in some kind of Matrix-like virtual reality.

    That is right. All science requires an a priori metaphysical commitment. “Mainstream science” has accepted one particular view. But nobody can say that view, or any view is “true science”. It comes down to the philosophical view of “what is reality”? Are there real distinctions between things or are those distinctions arbitrary? Western philosophy tells us one thing, but there are other philosophical views.

    Again, if ID is saying that “Dawkins is using the wrong kind of science”, then that’s a philosophical debate about what science should be.

    For me if ID can be fully compatible with the science that Dawkins uses, then that’s powerful. In that case, I think it would be more reasonable to say that “ID is science” since it is using the exact same understanding of science that people like Dawkins use.

  184. 184
    gpuccio says:

    Silver Asiatic:

    OK, I disagree with you about many things. Not all.

    Let’s see if I can explain my position.

    You quote my statement:

    “Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview.”

    And then you say that this is a philosophical view. And I absolutely agree.

    That was clearly a statement of my position about philosophy of science. Philosophy of science is philosophy.

    I usually don’t discuss my philosophy here, except of course my philosophy of science, which is absolutely pertinent to any scientific discussion. So yes, when I say that science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview, I am making a statement about philosophy of science.

    I also absolutely agree that “science cannot define itself or create its own limits”. It’s philosophy of science that must do that.

    Where I absolutely disagree with you is in the apparent idea that philosophy of science is a completely subjective thing, and that everyone can “make his own rules”. That is completely untrue. Philosophy is not subjective, as science is not objective. They are different, but both are rather objective, with many subjective aspects.

    There is good philosophy and bad philosophy, as there is good science and bad science. And, of course, there is bad philosophy of science.

    You say: “So, for example, if I wanted to do “my own science”, I could establish rules that I want. Nobody can stop me from that.”

    It is true that nobody can stop you, but it is equally true that it can be bad science, and everyone has a right to judge for himself if it is good science or bad science.

    The same is true for philosophy of science.

    The really unbearable part of your discourse is when you equate science to consensus. This is a good example of bad philosophy of science. For me, of course. And for all those who want to agree. There is no need that we are the majority. There is no need for consensus.

    Good science and good philosophy of science must be judged in the sacred intimacy of our own consciousness. We are fully responsible for our judgement, and we have the privilege and the duty to defend that judgement and share it with others, whether they agree or not, whether there is consensus about it or it is shared only by a minority.

    Because in the end truth is the measure of good science and of good philosophy. Nothing else.

    Consensus is only a historical accident. Sometimes there is consensus for good things, many times for bad things. Consensus for bad science does not make it good science. Ever.

    Then you insist:

    “If ID requires a specific kind of science that allows for non-natural causes, for example, then I would not call ID a scientific project.”

    ID requires nothing like that. ID infers a process of design. A process of design requires a designer. There is nothing non-natural in that. Therefore ID is science.

    Moreover, I could show, as I have done many times, that the word “natural” is wholly misleading. In the end, it just means “what we accept according to our present worldview”. In that sense, any form of naturalism is the end of true science. Naturalism is a good example of bad philosophy of science.

    And I know, that is not the consensus. I know that very well. But it is not “my own rule”. It is a strong philosophical belief, which I am ready to defend and share with anyone, and to which I will remain loyal to the end of my days, unless, of course, I someday find some principle that is even better.

    Just a final note. You say: ” Science cannot even tell us what “matter” is or what it means for something to be “immaterial”. Those are philosophical concepts.”

    Correct. And I don’t think that even philosophy has good answers, at present, about those things. Indeed, I think that “matter” and “immaterial” are vague concepts.

    But science can be more precise. For example, science can define if something has mass or not. Some entities in reality have mass, others don’t. This is a scientific statement.

    In our discussion, I did not use the word “immaterial”. That word was introduced by you. I just stated, answering your question, that it seemed reasonable that the biological designer(s) did not have a physical body like ours, because otherwise there should be some observable trace of that fact. This implies no sophisticated philosophical theory about what matter is. I suggested that, as we know that matter exists but we don’t know what it is, it is not unreasonable to think that consciousness can exist without a physical body like ours. Not only is it not unreasonable, but indeed most people have believed exactly that for millennia, and even today, probably, most people believe that.

    I could add that observable facts like the reports of NDEs strongly suggest that hypothesis.

    True or false as it may be, the hypothesis that consciousness is not always linked to a physical body like ours is a reasonable idea. There is no reason at all to consider that idea “not natural” or to ban it a priori from any scientific theory or scenario. To do that is to do bad science and bad philosophy of science, driven by a personal philosophical commitment that has no right to be imposed on others.

  185. 185
    gpuccio says:

    Silver Asiatic:

    “For me if ID can be fully compatible with the science that Dawkins uses, then that’s powerful.”

    But ID is fully compatible with the science that Dawkins uses. It’s Dawkins who uses that science badly and defends wrong theories. It’s Dawkins who rejects the good theories of ID because of ideological prejudices. We can do nothing about that. It’s his personal choice, and he is a free individual. But there is no reason at all to be influenced or conditioned by his bad scientific and philosophical behaviour.

  186. 186
    Silver Asiatic says:

    GP

    I think you’re being inconsistent. That’s one thing I’m trying to point out. You agree that your statement about science (and therefore your foundation for ID) is a philosophical position. However, you often state something like this:

    That’s why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasonings.

    But it is simply not possible to avoid your philosophical view since that view is the basis of all your understanding of science and your scientific reasoning. In fact, I would say it’s unreasonable to insist that you’re trying to avoid your philosophical view. Why would you do that? Your philosophy is the most important aspect of your science. Why conceal it as if you could do science without a philosophical starting point?

    At the risk of irritating you, I feel the need to repeat something continually through my response – and that is, almost everything you said was a philosophical discourse. I have been discussing on one of KF’s threads the objective foundation of philosophy, but after that (which is minimal) philosophy is almost entirely subjective. We can freely choose among options.

    Philosophy is not subjective, as science is not objective. They are different, but both are rather objective, with many subjective aspects.

    I disagree here and I offered a long explanation in debating with atheists on KF’s most recent thread. The only objective thing about philosophy is the starting point – that truth has a greater value than falsehood. We cannot affirm a value for falsehood. But after that, even the first principles of reason are not entirely objective. They must be chosen, for a reason. A person must decide to think rationally. For reasons of virtue which are inherent in the understanding of truth, we have an obligation to use reason. But this obligation is a matter of choice.

    It is true that nobody can stop you, but it is equally true that it can be bad science, and everyone has a right to judge for himself if it is good science or bad science.

    My repeated phrase here: That’s a philosophical view. Secondly, you are appealing to consensus “everyone can judge”. There are some cultures that forbid a Western approach to science. Their consensus will say that “mainstream science” is bad science. They have different goals and purposes in life. I think of indigenous cultures, for example, or some religions where they approach science differently.

    Good science and good philosophy of science must be judged in the sacred intimacy of our own consciousness. We are fully responsible for our judgement, and we have the privilege and the duty to defend that judgement and share it with others, whether they agree or not, whether there is consensus about it or it is shared only by a minority.
    Because in the end truth is the measure of good science and of good philosophy. Nothing else.

    In this case, truth follows from first principles. Science is not an arbiter of truth, it is only a method that follows from philosophy in order to gain understanding, for a reason. If a science follows logically from its first principles, then it is good science. I gave an example of a different kind of science where I could say that God is a cause. Or we could talk about Creation Science where the Bible establishes rules for science. Those are different first principles – different philosophical starting points. Creationism is perfectly legitimate philosophy and if science follows from it logically, then the science is “good science”. We may have a reason to reject Creationist philosophy but that cannot be done on an entirely objective basis. We decide based on the priority we give to certain values. We want something, so we want a science that supports what we want. But people can want different things.

    ID requires nothing like that. ID infers a process of design. A process of design requires a designer. There is nothing non natural in that. Therefore ID is science.

    Again, you offer your philosophical view. In your view, a process of design requires a designer. That is philosophy. If a person accepts your philosophy, then they can accept your ID science. I think the more usual statement of ID is that “we can observe evidence of intelligence” in various things. What I have not seen is that “all intelligent outputs require a designer”. That is a philosophical statement, not a scientific one. Science cannot establish that all intelligence necessarily comes from “a designer” or even what the term “a designer” means in this context. All science can do is say that something “looks like it came from a source that we have already classified as ‘intelligence'”. If that source is “a designer”, we do not know.

    Consensus is only a historical accident. Sometimes there is consensus for good things, many times for bad things. Consensus for bad science does not make it good science. Ever.

    Again, these are philosophical concepts. Even to judge good science versus bad science requires a correlation with philosophical starting points. Again, there is no such thing as “good science” as if “science” exists as an independent agent. Science is a function of philosophical principles. If the science aligns with the principles, then it is coherent and rational (but even that is not required). But it is impossible to judge if science is good or bad without first accepting a philosophical basis. The idea that only material causes can be accepted in science is a perfectly valid limitation. To disagree with it and prefer another definition is a philosophical debate and it will come down to “what do we want to achieve with science”? There is nothing objective about that. Science is a tool used for a purpose and there is nothing that says “science must only have this purpose and no other”. People choose one philosophy of science or another. There is no good or bad. There can be contradictory or irrational application of science — where science conflicts with the stated philosophy. For example, if Dawkins said “science can only accept material causes” and then said later that “science has indicated that a multiverse exists outside of time, space and matter” – that would be contradictory. We could call that “bad science” because it is irrational. But even there, a person is not required, necessarily, to be entirely rational in all aspects of life. We are required to be honest and to tell the truth. But if Dawkins said, that he makes an exception for a multiverse, his science remains just as “good” as any. Science is not absolute truth. It’s a collection of rules used for measurement, classification, experiment to arrive at understanding within a certain context.

    Moreover, I could show, as I have done many times, that the word “natural” is wholly misleading. In the end, it just means “what we accept according to our present worldview”. In that sense, any form of naturalism is the end of true science. Naturalism is a good example of bad philosophy of science.

    Again, this is entirely a philosophical view. There is nothing wrong with a science that says “we only accept what accords with our worldview”. That’s a philosophical starting point. People may have a very good reason for believing that. Or not. So, all of their science will be “natural” in that sense. Again, there is no such thing as “true science”. You are not the arbiter of such a thing. Even to say that “all science must follow strictly logical processes” is a philosophical bias. There can be scientific philosophies that accept non-logical conclusions and various paradoxical understandings.

    And I know, that is not the consensus. I know that very well. But it is not “my own rule”. It is a strong philosophical belief, which I am ready to defend and share with anyone, and to which I will remain loyal to the end of my days, unless, of course, I someday find some principle that is even better.

    When I say that it is “your own rule” I mean it is a rule that you have chosen to accept. You could have chosen another, like the consensus view. That is what I would prefer for ID, that it accept the consensus view on what “natural” means and basically all the consensus rules of science. I would not like to have to say that “ID requires a different understanding of terms and of science than the consensus does”. But even if not, ID researchers are free to have their own philosophical starting points and defend them, as you would do. But as I said, I think the only aspect of philosophy that we are compelled to accept is the proto-first principles. Even there, a person must accept that thinking rationally is a duty. As I said, there can be philosophical systems that do not hold logic, analysis, and rational thought as the highest virtue. There can be other values more important to human life which would leave rational thought as a secondary value, and therefore not absolutely required in all cases. So, a contradictory scientific result would not be a problem in that philosophical view.

    True or false as it may be, the hypothesis that consciousness is not always linked to a physical body like ours is a reasonable idea.

    Yes, exactly. Science can tell us nothing about this. Your view would be reasonable as matched against your philosophy. Again, it depends if a person has a philosophical view that could accept such a notion. If the belief is that everything that exists is physical, then your point here would not be rational. The science would have nothing to do with it except to be consistent with one view or another.

    For example, science can define if something has mass or not. Some entities in reality have mass, others don’t. This is a scientific statement.

    I wouldn’t call that a “definition”. It is more like a classification. Science cannot define what “mass” is. There is no observation in nature that we can make to tell us that “this is the correct definition of mass”. In fact, there could be a philosophical view that does not recognize mass as an independent thing that could be classified. But there is a consensus view that has defined mass as a characteristic. Then science observes things and classifies them to see if they share what that thing (mass) is or not.

  187. 187
    gpuccio says:

    Silver Asiatic:

    I don’t have the time now to answer everything, but I want to clarify one point that is important, and that was probably not clear because of my imprecision.

    When I say:

    “That’s why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasonings.”

    I am not including in that statement philosophy of science. My mistake, I apologize, I should have specified it, but you cannot think of everything.

    Of course I believe that our philosophy of science can and must guide the way we do science. Probably, it seemed so obvious to me that I did not think of specifying it.

    What I meant was that our philosophy about everything else must not influence, as far as that is possible, our scientific reasoning.

    As I have said, there is good science and bad science, good philosophy of science and bad philosophy of science. One is responsible both for his science and for his philosophy of science. But of course we have a duty to do science according to our philosophy of science. What else should we do?

    However, even if of course there can be very different philosophies of science, some basic points should be very clear. I think that almost all who do good science would agree about the basic importance of facts in scientific reasoning. So, any philosophy of science, and related science, that does not put facts at the very center of scientific reasoning is a bad philosophy of science. For me (because I assume full responsibility for that statement), but not in the sense that I consider that a subjective aspect. For me, that is an objective requirement of a good philosophy of science.

    OK, more later.

  188. 188
    gpuccio says:

    Silver Asiatic:

    At the risk of irritating you, I feel the need to repeat something continually through my response – and that is, almost everything you said was a philosophical discourse. I have been discussing on one of KF’s threads the objective foundation of philosophy, but after that (which is minimal) philosophy is almost entirely subjective. We can freely choose among options.

    I disagree. My discourses here are rarely philosophical. Well, sometimes. But my reasonings about ID detection, functional information, biology, functional information in biology, homologies, common descent, and so on (in practice, most of what I discuss here) are perfectly scientific, and in no way philosophical.

    Of course, as said, my science is always guided by my philosophy of science. I take full responsibility for both.

    And I fully disagree that “philosophy is almost entirely subjective”. That’s not true. There is much subjectivity in all human activities, including philosophy, science, art, and so on. But there is also a lot of objectivity in all those things.

    One thing is certainly true: “We can freely choose among options.” Of course. In everything.

    We can freely pursue what is true or what is wrong. What is good or what is bad. We can freely lie, or fight for what we believe to be true. We can freely love or hate. And so on. I think I give the idea.

    Does that mean that truth, good, lies, love, are in no way objective?

    I don’t believe that. But of course you can freely choose what to believe.

    And yes, this is a philosophical statement.

  189. 189
    Silver Asiatic says:

    GP

    For me (because I assume full responsibility for that statement), but not in the sense that I consider that a subjective aspect. For me, that is an objective requirement of a good philosophy of science.

    Right. Based on your philosophy and worldview it is objective. That is consistent and makes sense. Philosophically, you call some things “facts” and then you use those in your scientific reasoning. You have an overall understanding of reality. I’ll suggest that you cannot really separate “everything else” of your philosophy from your scientific view. As I see it, they’re all connected. This is especially true when you seek to talk about a designer or things like randomness or immaterial, natural, entities — all of these things.

    This is where I agree that “ID is science” as long as “ID lines up with my philosophy of science”. To me, that is consistent and reasonable (although whether the philosophy and definitions should be aligned could be debated).

    Someone like Dawkins will say “ID is not science” because he thinks that ID does not line up with his philosophy of science. He has just defined ID out of the question. Dawkins will fail if he says “My philosophy is consistent and rational and my science follows this”, but then later indicates that he will not accept conclusions that his own scientific philosophy will support. Then he’s got a problem.

    I always thought that’s what ID was trying to do. Use Dawkins’ own worldview and his own claims – all the things he already accepts — and show that ID is the most reasonable conclusion. It would all be based on his (or mainstream) science.

    I know some creationists who say ID is “dishonest” because the worldview is concealed, but I think ID is just trying to play by the rules of the game (consensus view) and show that there is evidence for Design even using mainstream evolutionary views.

  190. 190
    Silver Asiatic says:

    GP

    We can freely pursue what is true or what is wrong. What is good or what is bad. We can freely lie, or fight for what we believe to be true. We can freely love or hate. And so on. I think I give the idea.

    I realize that this may seem irritating, but I even caught myself with that. There are people, perhaps, who think that all of our actions are determined by some cause. It’s the whole question of free-will.
    My point here is that I think a coherent philosophy, beginning with first principles, has to be in place. After that, the people that we talk with have to either understand, or better, accept our philosophy.
    If they have a bad philosophy, then I think the problem is to help them fix that. I think that has to happen before we can even get into the science.

    My philosophy is rooted in classical Western theism and is linked to my theological views. I am leaning more and more to the idea that it is not worth the effort to adopt “Dawkins philosophy/science” for the sake of trying to convince people, and that it may be more effective to start with the clash of philosophies and world-views rather than start with science (ID). Not sure, but I am leaning that way. Putting philosophy and theology first, and then using ID inferences to support that might work better.

  191. 191
    gpuccio says:

    Silver Asiatic:

    I always thought that’s what ID was trying to do. Use Dawkins’ own worldview and his own claims – all the things he already accepts — and show that ID is the most reasonable conclusion. It would all be based on his (or mainstream) science.

    That’s definitely what ID is trying to do. That’s certainly what I am trying to do.

    I am leaning more and more to the idea that it is not worth the effort to adopt “Dawkins philosophy/science” for the sake of trying to convince people, and that it may be more effective to start with the clash of philosophies and world-views rather than start with science (ID). Not sure, but I am leaning that way. Putting philosophy and theology first, and then using ID inferences to support that might work better.

    Maybe. But I think the two things can and should work in parallel. There is no conflict at all, as long as each activity is guided by its good and pertinent philosophy! 🙂

    And, at least for me, the purpose is not to convince anyone, but to offer good ideas to those who may be interested in them. In the end, I very much believe in free will, and free will is central not only in the moral, but also in the cognitive sphere.

  192. 192
    gpuccio says:

    To all:

    Again about crosstalk.

    It seems that our NF-kB system is continuously involved in crosstalk of all types.

    This is about crosstalk with the system of nucleoli:

    Crosstalk between NF-kB and Nucleoli in the Regulation of Cellular Homeostasis

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6210184/

    Abstract
    Nucleoli are emerging as key sensors of cellular stress and regulators of the downstream consequences on proliferation, metabolism, senescence, and apoptosis. NF-kB signalling is activated in response to a similar plethora of stresses, which leads to modulation of cell growth and death programs. While nucleolar and NF-kB pathways are distinct, it is increasingly apparent that they converge at multiple levels. Exposure of cells to certain insults causes a specific type of nucleolar stress that is characterised by degradation of the PolI complex component, TIF-IA, and increased nucleolar size. Recent studies have shown that this atypical nucleolar stress lies upstream of cytosolic IkB degradation and NF-kB nuclear translocation. Under these stress conditions, the RelA component of NF-kB accumulates within functionally altered nucleoli to trigger a nucleophosmin dependent, apoptotic pathway. In this review, we will discuss these points of crosstalk and their relevance to anti-tumour mechanism of aspirin and small molecule CDK4 inhibitors. We will also briefly discuss how crosstalk between nucleoli and NF-kB signalling may be more broadly relevant to the regulation of cellular homeostasis and how it may be exploited for therapeutic purpose.

    Emphasis mine.

    And this is about crosstalk with Endoplasmic Reticulum:

    The Crosstalk of Endoplasmic Reticulum (ER) Stress Pathways with NF-kB: Complex Mechanisms Relevant for Cancer, Inflammation and Infection.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6027367/

    Abstract
    Stressful conditions occurring during cancer, inflammation or infection activate adaptive responses that are controlled by the unfolded protein response (UPR) and the nuclear factor of kappa light polypeptide gene enhancer in B-cells (NF-kB) signaling pathway. These systems can be triggered by chemical compounds but also by cytokines, toll-like receptor ligands, nucleic acids, lipids, bacteria and viruses. Despite representing unique signaling cascades, new data indicate that the UPR and NF-kB pathways converge within the nucleus through ten major transcription factors (TFs), namely activating transcription factor (ATF)4, ATF3, CCAAT/enhancer-binding protein (CEBP) homologous protein (CHOP), X-box-binding protein (XBP)1, ATF6α and the five NF-kB subunits. The combinatorial occupancy of numerous genomic regions (enhancers and promoters) coordinates the transcriptional activation or repression of hundreds of genes that collectively determine the balance between metabolic and inflammatory phenotypes and the extent of apoptosis and autophagy or repair of cell damage and survival. Here, we also discuss results from genetic experiments and chemical activators of endoplasmic reticulum (ER) stress that suggest a link to the cytosolic inhibitor of NF-kB (IkB) alpha degradation pathway. These data show that the UPR affects this major control point of NF-kB activation through several mechanisms. Taken together, available evidence indicates that the UPR and NF-kB interact at multiple levels. This crosstalk provides ample opportunities to fine-tune cellular stress responses and could also be exploited therapeutically in the future.

    Emphasis mine.

    Another word that seems to recur often is “combinatorial”.

    And have you read? These two signaling pathways “converge within the nucleus through ten major transcription factors (TFs)”. Wow! 🙂

  193. 193
    OLV says:

    GP,

    the topic you chose for this OP is fascinating indeed.

    Here’s a related paper:

    Transcription factor NF-kB in a basal metazoan, the sponge, has conserved and unique sequences, activities, and regulation
    Leah M. Williams, Melissa M. Inge, Katelyn M. Mansfield, Anna Rasmussen, Jamie Afghani, Mikhail Agrba, Colleen Albert, Cecilia Andersson, Milad Babaei, Mohammad Babaei, Abigail Bagdasaryants, Arianna Bonilla, Amanda Browne, Sheldon Carpenter, Tiffany Chen, Blake Christie, Andrew Cyr, Katie Dam, Nicholas Dulock, Galbadrakh Erdene, Lindsie Esau, Stephanie Esonwune, Anvita Hanchate, Xinli Huang, Timothy Jennings, Aarti Kasabwala, Leanne Kehoe, Ryan Kobayashi, Migi Lee, Andre LeVan, Yuekun Liu, Emily Murphy, Avanti Nambiar, Meagan Olive, Devansh Patel, Flaminio Pavesi, Christopher A. Petty, Yelena Samofalova, Selma Sanchez, Camilla Stejskal, Yinian Tang, Alia Yapo, John P. Cleary, Sarah A. Yunes, Trevor Siggers, Thomas D. Gilmore

    doi: 10.1101/691097
     

    Biological and biochemical functions of immunity transcription factor NF-kB in basal metazoans are largely unknown. Herein, we characterize transcription factor NF-kB from the demosponge Amphimedon queenslandica (Aq), in the phylum Porifera. Structurally and phylogenetically, the Aq-NF-kB protein is most similar to NF-kB p100 and p105 among vertebrate proteins, with an N-terminal DNA-binding/dimerization domain, a C-terminal Ankyrin (ANK) repeat domain, and a DNA binding-site profile more similar to human NF-kB proteins than Rel proteins. Aq-NF-kB also resembles the mammalian NF-kB protein p100 in that C-terminal truncation results in translocation of Aq-NF-kB to the nucleus and increases its transcriptional activation activity. Overexpression of a human or sea anemone IkB kinase (IKK) can induce C-terminal processing of Aq-NF-kB in vivo, and this processing requires C-terminal serine residues in Aq-NF-kB. Unlike human NF-kB p100, however, the C-terminal sequences of Aq-NF-kB do not effectively inhibit its DNA-binding activity when expressed in human cells. Tissue of another demosponge, a black encrusting sponge, contains NF-kB site DNA-binding activity and an NF-kB protein that appears mostly processed and in the nucleus of cells. NF-kB DNA-binding activity and processing is increased by treatment of sponge tissue with LPS. By transcriptomic analysis of A. queenslandica we identified likely homologs to many upstream NF-kB pathway components. These results present a functional characterization of the most ancient metazoan NF-kB protein to date, and show that many characteristics of mammalian NF-kB are conserved in sponge NF-kB, but the mechanism by which NF-kB functions and is regulated in the sponge may be somewhat different.

  194. 194
    gpuccio says:

    To all:

    OK, this very recent paper (published online 2019 Jan 8) seems to be exactly about what I discuss in the OP:

    On chaotic dynamics in transcription factors and the associated effects in differential gene regulation

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6325146/

    The abstract:

    Abstract

    The control of proteins by a transcription factor with periodically varying concentration exhibits intriguing dynamical behaviour. Even though it is accepted that transcription factors vary their dynamics in response to different situations, insight into how this affects downstream genes is lacking. Here, we investigate how oscillations and chaotic dynamics in the transcription factor NF-kB can affect downstream protein production. We describe how it is possible to control the effective dynamics of the transcription factor by stimulating it with an oscillating ligand. We find that chaotic dynamics modulates gene expression and up-regulates certain families of low-affinity genes, even in the presence of extrinsic and intrinsic noise. Furthermore, this leads to an increase in the production of protein complexes and the efficiency of their assembly. Finally, we show how chaotic dynamics creates a heterogeneous population of cell states, and describe how this can be beneficial in multi-toxic environments.

    I think I will read it carefully and come back about it later. 🙂

  195. 195
    gpuccio says:

    To all:

    The paper linked at #194 is really fascinating. I have given it a first look, but I will certainly go back to digest some aspects better (probably not the differential equations! 🙂 ).

    Two of the authors are from the Niels Bohr Institute in Copenhagen, a really interesting institution. The third author is from Bangalore, India.

    For the moment, let’s start with the final conclusion (I have never been a tidy person! 🙂 ):

    Chaotic dynamics has thus far been underestimated as a means for controlling genes, perhaps because of its unpredictability. Our work shows that deterministic chaos potentially expands the toolbox available for single cells to control gene expression dynamically and specifically. We hope this will inspire theoretical and experimental exploration of the presence and utility of chaos in living cells.

    The emphasis on “toolbox” is mine, and the reason I have added it should be rather self-evident. 🙂

    Let’s think about that.

  196. 196
    gpuccio says:

    To all:

    Indeed, I have not been really precise at #194, I realize. I said:

    “OK, this very recent paper (published online 2019 Jan 8) seems to be exactly about what I discuss in the OP:”

    But that is not really true. This paper indeed adds a new concept to what I have discussed in the OP. In fact the paper, while briefly discussing random noise as well, is mainly about the effects of a chaotic system, something that I had not considered in any detail in my OP. My focus there has been on random noise and far from equilibrium dynamics. Chaotic systems certainly add a lot of interesting perspective to our scenario.

  197. 197
    gpuccio says:

    OLV at #193:

    Interesting paper.

    Indeed I blasted the human p100 protein against sponges, and there is a good homology (total bitscore 523 bits).

    So yes, the system is rather old in metazoa.

    Consider that the same protein, blasted against single celled eukaryotes, gives only a low homology (about 100 bits), limited to the central ANK repeats. No trace of the DNA binding domain.

    So, the system seems really to arise in Metazoa, and very early.
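    To make the idea of an “information jump” explicit, here is a minimal sketch (my own helper, for illustration only) of the arithmetic involved: blast the same human protein against two taxonomic groups and take the difference of the best total bitscores. The numbers are the ones quoted above for p100.

```python
# Minimal sketch of the "information jump" arithmetic described above.
# The helper name is mine; the bitscores are the ones quoted in this comment.
def information_jump(bits_in_later_group: float, bits_in_earlier_group: float) -> float:
    """Bitscore gained between two evolutionary nodes, a crude proxy for newly appearing functional information."""
    return bits_in_later_group - bits_in_earlier_group

p100_vs_sponges = 523.0        # best total bitscore, human p100 vs Porifera
p100_vs_unicellular = 100.0    # approximate best bitscore vs single-celled eukaryotes (ANK repeats only)

print(f"Jump at the origin of Metazoa: about {information_jump(p100_vs_sponges, p100_vs_unicellular):.0f} bits")
```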

  198. 198
    EugeneS says:

    GP #129,

    Thanks very much. I will give it a read.

    Life comes from life, once it has been started, that is for sure. However, it does not apply equally either to creation (design, for the purposes of this discussion) or to imagined abiogenesis. It is clear that the vitalistic rule is violated in the case of abiogenesis, but it is also violated in the case of design because the relation between the designer and the designed is not that of birth/descent. It can be more likened to the relation between the painter and the painting. Fundamentally, the painting is of a different nature from the painter, whereas descent implies the same nature between the ancestor and the progeny.

    As an aside, a grumpy remark: I do not like the new GUI on this blog 😉 The old one was way better. This one feels like one of the British .gov sites for the plain English campaign. It is less convenient when accessed with a mobile phone. But it does not matter…

  199. 199
    Silver Asiatic says:

    EugeneS

    However, it does not apply equally either to creation (design, for the purposes of this discussion) or to imagined abiogenesis. It is clear that the vitalistic rule is violated in the case of abiogenesis, but it is also violated in the case of design because the relation between the designer and the designed is not that of birth/descent. It can be more likened to the relation between the painter and the painting. Fundamentally, the painting is of a different nature from the painter, whereas descent implies the same nature between the ancestor and the progeny.

    That is a great point and analogy. Yes, I think where there is design then there is a purposeful, creative act and what follows from that cannot be considered descent for the reason you give.

  200. 200
    gpuccio says:

    EugeneS:

    That is an important point.

    The question is: can life be reduced to the designed information that sustains it?

    If that is the case, then design explains everything, both at OOL and later.

    If the answer is no, everything is different.

    As we still don’t understand what life is, from a scientific point of view, we have no final scientific answer. My personal opinion is that the second option is true, and that would explain why in our experience life comes only from life.

    If life cannot be reduced to the designed information that sustains it, then certainly OOL is a case where both a lot of designed functional information appears and life is started, whatever that implies.

    For what happens after OOL, all depends on the model one accepts. I don’t know if you have followed the discussion here between BA and me. In particular, the three possible models I have discussed at #43.

    In my model (model b in that post) after OOL things happen by descent with added design. So, in that model, it is true after OOL that life always comes from life (if the descent is universal), and only OOL would be a special event in that sense. The new functional information, in all cases, is the product of design interventions.

    In model c, instead, each new “kind” (to use BA’s term) is designed from scratch at some time. So, the appearance of each new kind has the same status as an OOL event.

    Model a is just the neo-darwinian model, where everything, at all times, happens by RV + NS, and no design takes place, least of all a special, information independent start of life.

  201. 201
    john_a_designer says:

    Gp,

    I am still trying to define precisely what a transcription factor is. Earlier @ 91, I asked ”are there transcription factors for prokaryotes?” According to Google, no.

    Eukaryotes have three types of RNA polymerases, I, II, and III, and prokaryotes only have one type. Eukaryotes form an initiation complex with the various transcription factors that dissociate after initiation is completed. There is no such structure seen in prokaryotes.

    https://uncommondescent.com/intelligent-design/controlling-the-waves-of-dynamic-far-from-equilibrium-states-the-nf-kb-system-of-transcription-regulation/#comment-680819

    (But maybe what I am not understanding is the result of a difference of semantics, context or nuance.)

    Recently, I ran across another source which seemed to suggest that prokaryotes do have TFs.

    What has to happen for a gene to be transcribed? The enzyme RNA polymerase, which makes a new RNA molecule from a DNA template, must attach to the DNA of the gene. It attaches at a spot called the promoter.

    In bacteria, RNA polymerase attaches right to the DNA of the promoter. You can see how this process works, and how it can be regulated by transcription factors, in the lac operon and trp operon videos.

    In humans and other eukaryotes, there is an extra step. RNA polymerase can attach to the promoter only with the help of proteins called basal (general) transcription factors. They are part of the cell’s core transcription toolkit, needed for the transcription of any gene.

    https://www.khanacademy.org/science/biology/gene-regulation/gene-regulation-in-eukaryotes/a/eukaryotic-transcription-factors

    This article seems to suggest that the lac operon is a transcription factor but then in the next paragraph it states: “In humans and other eukaryotes, there is an extra step. RNA polymerase can attach to the promoter only with the help of proteins called basal (general) transcription factors.”

    So is the lac operon a transcription factor? Is the term operon synonymous with transcription factor, or is there a difference? In other words, do “operons” have the same role in transcription as TFs?

    Is there a strong homology between the lac operon, which turns on the genes for lactose metabolism in E. coli, and the TF/lactose metabolism genes in eukaryotes, including humans? Does this have anything to do with lactose intolerance?

  202. 202
    gpuccio says:

    John_a_designer:

    OK, that’s how I see it.

    In eukaryotes we must distinguish between general TFs, which act in much the same way in all genes and are required to initiate transcription by helping recruit RNA polymerase at the promoter site, and specific TFs, that bind at enhancer sites and activate or repress transcription of specific genes. The NF-kB system described in the OP is a system of specific TFs.

    Now, in eukaryotes there are six general TFs. Archaea have 3. In bacteria, sigma factors have the role of general TFs. Sigma factors, archaeal general TFs and eukaryotic general TFs seem to share some homology. I think that the archaeal system, however, is much more similar to the eukaryotic system, and that includes RNA polymerases.

    Then bacteria have a rather simple system of repressors or activators, specific for specific genes, or better, operons. Those repressors and activators bind DNA near the promoter of the specific operon. They are in some way the equivalent of eukaryotic specific TFs, but the system is by far simpler.

    You can find some good information about bacteria here:

    https://bio.libretexts.org/Bookshelves/Cell_and_Molecular_Biology/Book%3A_Cells_-_Molecules_and_Mechanisms_(Wong)/9%3A_Gene_Regulation/9.1%3A_Prokaryotic_Transcriptional_Regulation

    The operon is simply a collection of genes that are physically near, are transcribed together from one single promoter, and are functionally connected.

    So, the lac operon is formed by three genes, lacZ, lacY, lacA, sharing one promoter. A sigma factor binds at the promoter, together with RNA polymerase. A repressor and an activator may bind DNA near the promoter to regulate operon transcription.
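    To make that regulatory logic concrete, here is a toy sketch of the textbook lac operon behaviour (illustrative only, not a molecular model): the repressor keeps the operon off unless lactose is present, and the activator (CAP) boosts transcription when glucose is scarce.

```python
# Toy sketch of textbook lac operon logic: three genes behind one promoter,
# a repressor that blocks transcription unless lactose is present, and an
# activator (CAP) that boosts transcription when glucose is scarce.
from dataclasses import dataclass

@dataclass
class Operon:
    promoter: str
    genes: tuple

lac = Operon(promoter="Plac", genes=("lacZ", "lacY", "lacA"))

def lac_transcription(lactose_present: bool, glucose_present: bool) -> str:
    if not lactose_present:
        return "off (repressor bound near the promoter)"
    if glucose_present:
        return "low (repressor released, but no CAP activation)"
    return "high (repressor released and CAP-activated)"

for lactose in (False, True):
    for glucose in (False, True):
        print(f"lactose={lactose!s:5}  glucose={glucose!s:5}  ->  {lac_transcription(lactose, glucose)}")
```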

    While archaea are more similar to eukaryotes in the system of general TFs, the regulation of transcription by one or two repressors or activators seems to be similar to what is described for bacteria.

    Finally, there is another important aspect where archaea are more similar to eukarya. Their chromatin structure is based on histones and nucleosomes, as in eukaryotes, but the system is rather different from the corresponding eukaryotic system.

    Instead, bacteria have their own form of DNA compaction, but it is not based on histones and nucleosomes.

    This, as far as I can understand.

  203. 203
    john_a_designer says:

    Thank you Gp,

    The link you provided cleared up some misunderstanding on my part (operons are not TFs but are groupings of genes that TFs help activate) and clarified a number of other things.

  204. 204
    gpuccio says:

    To all:

    From the above-mentioned paper, a paragraph about the difference between random noise and chaos.

    What is chaos?
    When we speak of chaos, we refer to deterministic chaos. Deterministic means that if one knows the initial state of the system exactly, then the dynamical trajectory will be the same every time it is initiated in that state. However, any two initial conditions infinitesimally apart will have exponentially diverging trajectories as time proceeds making it practically impossible to predict the future dynamics—hence chaos. It is important to note that the unpredictability of chaos does not arise from stochasticity—the latter refers to a non-deterministic system with noise. Noise is observed in most real-world systems and can often result in very different dynamics than the deterministic version of the same system. For example, noise can cause transitions between different states which would never occur if the system were deterministic. Thus, both deterministically chaotic and noisy systems exhibit unpredictability of their future trajectories, but for very different underlying reasons.
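    To see what “exponentially diverging trajectories” means in practice, here is a toy illustration (the logistic map, not the model from the paper): the update rule is fully deterministic, yet two initial states differing by one part in a billion become completely different within a few dozen steps.

```python
# Toy illustration of deterministic chaos with the logistic map (r = 4 is in
# the chaotic regime). Two almost identical initial conditions diverge
# exponentially even though the rule itself contains no randomness at all.
r = 4.0
x, y = 0.3, 0.3 + 1e-9   # initial states differing by one part in a billion

for step in range(1, 41):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.3e}")
```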

  205. 205
    gpuccio says:

    To all:

    The paper is about a simplified model of interaction between two different oscillating systems, NF-kB and TNF. The interaction between the two can generate, in some circumstances, a chaotic system.

    Our investigation starts with a model of the transcription factor NF-kB that is known to exhibit oscillatory dynamics. A schematic version of this is found in Fig. 1a and a full description is presented in the Supplementary Note 1. In this deliberately simplified model, the oscillations arise from a single negative feedback loop between NF-kB and its inhibitor IkBα, and can be triggered by TNF via the activation of the IkB kinase (IKK). We then allow TNF to oscillate.

    Indeed, the main cause of the oscillations in the NF-kB system seems to be the alternating degradation of IkB alpha (the inhibitor), IOWs the activation of the dimer, and the re-synthesis of IkB alpha: a form of negative feedback.

  206. 206
    pw says:

    GP,

    At what point is it believed that the oscillations in the NF-kB system appeared for the first time in biological history?

  207. 207
    gpuccio says:

    Pw:

    I don’t think we have any idea about that.

  208. 208
    gpuccio says:

    To all:

    OK, back to the paper about chaos.

    So, the general idea is that, in the simplified (but very precise) model used by the authors, fixed period (50 minutes) oscillations in TNF concentration can act as an “external signal”, so that the NF-kB oscillation “locks on to the external signal’s frequency and phase” (Fig. 1c, bottom line).

    But, according to the amplitude of the TNF oscillations, the effect changes. While for low amplitudes there is the “locking” effect, intermediate amplitudes translate into some regular variation of the NF-kB amplitude, IOWs they generate “multi stable cycles” of different amplitude in the NF-kB oscillations (Fig. 1c, intermediate line).

    Finally, if the amplitude of the external signal increases further, the system becomes chaotic, and the amplitude of the NF-kB oscillations becomes completely unpredictable.
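    Purely as an illustrative analogy (this is not the authors’ NF-kB/TNF model, just the textbook logistic map with arbitrary numbers), the same qualitative progression can be seen when a single control parameter is raised:

```python
# Raising one parameter (r) moves the logistic map from a fixed point,
# to a regular period-2 cycle, to chaos: qualitatively like raising
# the amplitude of the external TNF signal in the authors' model.

def settled_values(r, x0=0.2, burn=500, keep=6):
    x = x0
    for _ in range(burn):            # discard the transient
        x = r * x * (1.0 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        out.append(round(x, 4))
    return out

print("r = 2.8:", settled_values(2.8))   # one repeated value (a 'locked' regime)
print("r = 3.2:", settled_values(3.2))   # regular alternation between two values
print("r = 3.9:", settled_values(3.9))   # no repeating pattern: chaos
```

    Here the parameter r plays the role of the drive amplitude: below a threshold the output is regular, above it the output becomes unpredictable.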

    OK, that’s very interesting.

    But the important point is: according to the authors, these variations of pattern in the NF-kB signal, induced by variations in the amplitude of the external signal (TNF), will have definite effects on downstream transcription patterns in the nucleus. Indeed, the point made by the authors is that the chaotic pattern induced by high amplitudes in the external signal will have a definite and robust effect: it will enhance transcription of genes with low affinity for the NF-kB TFs (LAGs). In the other scenarios, instead, transcription of high affinity genes (HAGs) or medium affinity genes (MAGs) will prevail.

    And the idea is that such a robust effect of an unpredictable pattern may well have a specific role in transcription regulation, IOWs it can be a supplementary “tool” in the functional regulation of which genes are transcribed, and therefore of the type and level of the response.

    Well, isn’t that interesting?

    Of course, to do that in a functional way, there is the basic need that the high amplitude of the external signal and the chaotic pattern be correctly associated, so that the right signal generates the correct response. And that association is obviously semiotic.

    So, if all this is true, the NF-kB system is not only a wonderful polymorphic semiotic system (see post #181), but also a semiotic system that uses, as a tool to connect the right stimulus to the right response, not only the usual biochemical configuration patterns, but also a very peculiar physical and mathematical effect of the type of oscillations involved in two different and separate systems.

    Wow! 🙂

  209. 209
    gpuccio says:

    To all:

    NF-kB is not the only TF system that presents oscillations in concentration and nuclear occupancy. Another important example is p53:

    Conservation and divergence of p53 oscillation dynamics across species

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5687840/

    Summary
    The tumor suppressing transcription factor p53 is highly conserved at the protein level and plays a key role in the DNA damage response. One important aspect of p53 regulation is its dynamics in response to DNA damage, which include oscillations. Here, we observe that while the qualitative oscillatory nature of p53 dynamics is conserved across cell lines derived from human, monkey, dog, mouse and rat, the oscillation period is variable. Specifically, rodent cells exhibit rapid p53 oscillations, whereas dog, monkey and human cells show slower oscillations. Computational modeling and experiments identify stronger negative feedback between p53 and MDM2 as the driver of faster oscillations in rodents, suggesting that an oscillation’s specific period is a network-level property. In total, our study shows that despite highly conserved signaling, the quantitative features of p53 oscillations can diverge across evolution. We caution that strong amino acid conservation of proteins and transcriptional network similarity do not necessarily imply conservation of time dynamics.

    p53 is a very important tumor suppressor gene, involved mainly in the response to DNA damage.

  210. 210
    PeterA says:

    GP @208:

    “the NF-kB system is not only a wonderful polymorphic semiotic system (see post #181), but also a semiotic system that uses, as a tool to connect the right stimulus to the right response, not only the usual biochemical configuration patterns, but also a very peculiar physical and mathematical effect of the type of oscillations involved in two different and separate systems.”

    I think in this case “Wow!” is an understatement. 🙂

  211. 211
    pw says:

    I still didn’t quite understand how old the NF-kB system is, what it evolved from, and how that could happen.

  212. 212
    john_a_designer says:

    Gpuccio,

    Here’s a question I have:

    Is what we perceive here as chaos just the result of overwhelming complexity, of the interactions of numerous, overlapping dynamic systems?

    Since my background is not in the biological sciences but in mechanical engineering– specifically machine design– I try to find analogies from the world of machines and machine systems to help me understand what is happening or maybe happening with biochemical “molecular machines” and biological systems.

    The analogy I started to think of from reading over the paper (cited @ 194) was urban traffic flow, which from a time-lapse bird’s-eye view can appear to be chaotic and even at times without rhyme or reason.

    Here, for example, are several time lapse video clips of street and highway traffic in Atlanta, Georgia in the U.S.

    https://www.youtube.com/watch?v=zOu-f-GdfhU

    While there is a continuous dynamic flow of traffic, it also at times appears to be chaotic as cars and trucks appear, more or less at random, to change lanes, merge from one lane of traffic to another, or stop at a street intersection to make a turn, etc. If, however, by analogy we take a “microscopic” view of what each car or truck is doing, we find that every vehicle has a destination and a purpose for its travel. What makes the scene appear to be chaotic is that the individual travelers have different destinations and different purposes for their travel. For example, some travelers may be going to work or out to dinner or out to a sporting event or out shopping or going back home. There may be trucks delivering supplies and merchandise to businesses… or there may be fire and rescue vehicles speeding to an accident or a fire, or police responding to a crime. It appears to me that there is something like that going on in individual cells, which is just compounded astronomically when we consider the complexity of higher organisms as a whole.

    Of course, as with all analogies, the analogy breaks down. At present, at least until self-driving cars and trucks become widely available and viable, each car or truck is under the control of an intelligent agent. Biological systems are more analogous to a world full of robots, with the robots maintaining and propagating the system. To paraphrase Abraham Lincoln, the robots would be of the system, by the system and for the system. Nevertheless, I still think on some level such a system would appear to be very chaotic, but that would be due to its overwhelming complexity.

    If such systems were truly chaotic they would cease to function correctly and eventually cease to function at all. The overwhelming complexity, of course, is evidence of design.

  213. 213
    gpuccio says:

    John_a_designer at #212:

    The questions you ask are very good, and the subject is not so intuitive as it could seem. I will try to express how I understand it, but of course I am ready to consider any contribution about this important point.

    The first important thing is that we must not confuse randomness and chaos. I have quoted at #204 a paragraph from the paper which tries to explain the difference between the two. However, I must say that I am not completely happy with what is said there.

    My first point is that we are dealing here with systems that are, in essence, deterministic. Both chaotic systems and random systems are deterministic, in the sense that what happens in those systems is in the end governed by necessity laws, in particular the laws of physics or chemistry. I have said many times that the only field of science that probably implies a true randomness, what we could call intrinsic randomness, is quantum mechanics. In quantum mechanics, the wave function, if and when it collapses, collapses according to probabilistic distributions that are, probably (it depends on the interpretation), intrinsically random.

    In all other non-quantum systems, we assume that the laws of physics are the real laws that govern the evolution of the system, and those laws, if not at the quantum level, are deterministic laws. Indeed, even quantum mechanics is mainly deterministic: the wave function evolves in a completely deterministic way, unless and until it collapses.

    So, both random systems and chaotic systems, if we are not considering quantum effects, are completely deterministic systems.

    So, what is the difference between what we call a deterministic system and what we call a random system?

    As I have said many times, the difference is only in how we can describe the system and its evolution.

    Let’s consider a simple deterministic system. Let’s say that we have a gear with two kinds of teeth, one kind shorter and one kind longer. Let’s say that the gear is rotating at a constant rate, and it interacts with another gear so that the long teeth evoke one type of output, and the shorter teeth evoke another type of output. So, we have a cyclic output with two states, which can be well predicted knowing the configuration of the first gear.

    This is, very simply, a deterministic system, in the sense that we can fully describe it in terms of its initial configuration, and know with reasonable precision how the system will behave.

    Now, let’s take instead a simple random system: the classic tossing of a fair coin. Here, too, the system is in essence deterministic: each coin tossing completely obeys the laws of classical mechanics. If we could know all the initial conditions of the tossing, we could, maybe with complex computations, know exactly whether the result will be a head or a tail.

    But that is not the case. There is no way we can know all the variables involved. Because there are too many of them, and we cannot measure or control all of them. The consequence is that we cannot ever know for certain if one specific tossing will give a head or a tail.

    So, are we completely powerless in front of such a system? Can we say nothing that helps us describe it?

    No. If the coin is fair, we know that, on a big number of tossings, the percentage of heads and tails will be similar. Not exactly the same, but very much similar, and ever more similar if we increase the number of tossings.

    This is a probabilistic description. We are applying a mathematical object, a uniform probability distribution where only two events are possible, each with a probability of 0.5, to describe with some efficiency a simple system that we cannot describe in any other way.

    This is randomness: the impossibility of computing a single event, but the possibility of describing a general distribution with some precision.

    Now, there is no need that the probability distribution be uniform. And there is no need that no necessity effect is detectable. In most real systems, including biological systems, random noise is mixed with necessity effects. If the random noise is strong enough that it cannot be ignored, the system is still random.

    Let’s consider an unfair coin, where an uneven distribution of weight (a necessity effect) is strong enough that it modifies the neutral probability distribution, so that heads have a probability of 0.6 and tails a probability of 0.4. Is the system still random?

    Of course it is. We have no way to know in advance what the result of our next tossing will be. The system is still random, because we can describe it only probabilistically. Still, the uneven distribution tells us that there is some necessity effect that favors heads.

    OK, so this is randomness. Many different variables, that we cannot really measure or control, interact independently to generate a configuration that can be described only using a probability distribution. In no case can we know deterministically how the system will evolve.

    It is interesting that many random systems in nature are not described well by a uniform distribution, even a loaded one, but rather by other probability distributions, first of all the normal distribution. In the normal distribution, the system is random, but certain events are much more likely than others.
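    A minimal sketch of this point about randomness (a pseudo-random number generator stands in for the physical coin; the probabilities 0.5 and 0.6 are the ones used in the examples above): no single toss can be predicted, but the observed frequencies settle ever closer to the underlying distribution as the number of tosses grows.

```python
import random
random.seed(1)   # fixed seed only so the sketch is reproducible

def head_fraction(n_tosses, p_heads):
    """Simulate n_tosses and return the observed fraction of heads."""
    heads = sum(1 for _ in range(n_tosses) if random.random() < p_heads)
    return heads / n_tosses

for n in (100, 10_000, 1_000_000):
    print(f"tosses = {n:>9}   fair coin: {head_fraction(n, 0.5):.4f}   "
          f"loaded coin: {head_fraction(n, 0.6):.4f}")
```

    The fair coin drifts toward 0.5 and the loaded one toward 0.6, even though every individual toss remains unpredictable.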

    Chaos is another thing. Chaotic systems are deterministic systems, sometimes simple enough, where some special form of the mathematics that describes the system makes the evolution of the system extremely sensitive to small initial variations in the starting conditions. In the example of the model described in the paper, oscillations in the external signal determine the period and amplitude of the oscillations in the NF-kB system. If the amplitude of the external signal is low, the two systems are simply synchronized. That is a deterministic system.

    But, if the amplitude of the external signal increases, if it is very big, then the mathematics governing the interaction between the two systems becomes chaotic: while the oscillations of the external signal remain regular, the oscillations in the NF-kB system become completely unpredictable in amplitude. That is chaos. The system is still simple: two systems are essentially interacting. The scenario seems no different from the scenario where the two systems are simply synchronized. But, suddenly, a simple increase in the amplitude of the external signal changes the mathematical relationships, and the response of the NF-kB system becomes chaotic.

    Now, let’s go back to your example of traffic. I am not completely sure, but I would say that that is a random system, not necessarily a chaotic system. Here the lack of order is due to the many variables involved, that interact independently. In a sense, it is like the tossing of the coin.

    It is true that “every vehicle has a destination and a purpose for its travel”, as it is true that the coin obeys precise laws when it is tossed. But there are too many vehicles, and their destinations are unrelated and independent. That generates a random configuration that we cannot anticipate with precision, because we should know in advance all the destinations and purposes, and even the driving style or mood of each driver, and so on. We can’t. So, at best, we can describe the system by some probability distribution: maybe there is more probability of having traffic in one direction at certain times, and so on.

    The important point in the quoted paper is not so much that two systems can interact in a chaotic way: that happens sometimes in physical systems. The amazing point is that such an interaction can be generated by specific biological stimuli (for example by regulating the amplitude of the oscillation in the TNF system), so that chaos is generated in the NF-kB system, and that such a chaotic response can change in a robust way the pattern of genes that are activated (for example favoring low affinity genes), and that this whole system is functional. IOWs, as I have said, a specific signal is semiotically connected to the correct, complex response, involving hundreds of different genes, by a translation system that uses (among other tools) the induction of a chaotic state to link the two things.

  214. 214
    Sven Mil says:

    Gpuccio, it worries me that your method is unable to detect homology between proteins that are similar with respect to structure and virtually identical with respect to function.

    “I have no interest in denying possible weak homologies. I just ignore them, because they are not relevant to my argument.”

    Not relevant? Just ignore them?
    Your argument consists of pointing to these “large jumps in homology”, but isn’t that what we’d expect to see if you can’t detect low homology?

    If you could only see things 2 miles above sea level would you assume that planes never land and that birds don’t actually exist?
    How much are you actually missing? A whole lot I’d bet.

    It seems like your method is extremely biased and capable only of detecting “steady-state”, not the actual evolutionary steps we’re interested in.

  215. 215
    gpuccio says:

    Sven Mil:

    The “virtually identical with respect to function” seems to be your imagination, certainly in relation to the case we were discussing (the supposed homologies between sigma factor 70 and human TFIIB). How can you even think, let alone state so boldly, that those two proteins are “virtually identical with respect to function”? That is a very telling indication of how serious your attitude is.

    Moreover, your “argument” seems to be that, as I am not trying to detect very weak homologies, the extremely strong jumps in human conserved information that I do detect in short evolutionary times are explained. By what? By weak homologies that have nothing to do with those strong specific sequences that appear suddenly, that are conserved for hundreds of millions of years, and that anyone can easily detect?

    Is that even the start of an argument? No. It is just false reasoning, of the worst kind.

    So, if you have anything interesting to say, please say it. If you can point to any credible pathway that can explain the appearance of thousands of bits of new functional information, through anything that you can detect in the genomes and proteomes, please do it. If you have any hint of the functional intermediates that are nowhere to be seen at the molecular level for that well detectable information, please show that to us.

    On one point you are certainly right: my method to measure functional information by homology conservation for long evolutionary times as shown by the Blast algorithm is, in one important sense, biased: it certainly underestimates the true functional information, as I have shown many times here.

    Have a good time.

  216. 216
    PeterA says:

    I agree that any argument against GP’s method for quantification of complex functional information in proteins, should clearly present a “credible pathway that can explain the appearance of thousands of bits of new functional information”.

  217. 217
    PeterA says:

    GP,

    “my method to measure functional information by homology conservation for long evolutionary times as shown by the Blast algorithm is, in one important sense, biased: it certainly underestimates the true functional information, as I have shown many times here.”

    Is this because you may ignore functional information if the number of bits is less than a certain threshold value that perhaps is very high?
    IOW, your method is very rigorous?

  218. 218
    gpuccio says:

    PeterA:

    “Is this because you may ignore functional information if the number of bits is less than a certain threshold value that perhaps is very high?”

    No. That has nothing to do with the “bias” I have mentioned at #215. That is more or less what Sven Mil “suggested” (to say that he “argued” would really be inappropriate).

    The simple point is: with my method I detect sudden appearances of new functional information at the sequence level. The sequence is what is measured: the blast measures homologies in sequence.

    The procedure is meant to detect differences in human conserved functional information. Those specific sequences that:

    a) Did not exist before they appear

    b) Are conserved for hundreds of millions of years after their appearance

    So, if I say that a protein shows an information jump in vertebrates of, say, 1280 bits, like CARD11 (see post #118), I mean that those 1280 bits of homology to the human protein are added in vertebrates to whatever homology to the human form already existed before.

    IOWs, in deuterostomia that are not vertebrates, including the first chordates, there may be some weak homology with the human protein. In the case of CARD11, it is really low, but detectable. Branchiostoma belcheri, a cephalochordate, exhibits 192 bits of homology between its form of CARD11 and the human form. The E value is 6e-37, and therefore the homology is certainly significant.

    IOWs, the protein already existed in chordates that are not vertebrates. In a form that was, however, very different from the human form, even if detectable as homologous.

    But in cartilaginous fishes, more than 1000 new bits of homology to the human protein are added to what already existed. Callorhinchus milii exhibits 786 identities, and 1514 bits of homology to the human form. That is an amazing information jump, and it has nothing to do with minor homologies that are not considered or emphasized, as “suggested” by Sven Mil. That increment in sequence homology to the human form is very real, very sudden, and completely conserved. There is no way to explain it, except design.

    The “bias” that I mentioned at #215 consists in the fact that the Blast algorithm underestimates the informational value of homologies. It assigns about 2 bits to identities, while we know that the potential informational value of an AA identity is about 4.3 bits. Even correcting for many factors, that is a big underestimation, considering that we are dealing with a logarithmic scale.
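    To make the arithmetic behind those two numbers concrete, here is a small sketch (only an illustration of the point made above, not part of the Blast procedure itself):

```python
import math

# Maximum information carried by one fully specified amino acid position,
# assuming 20 equiprobable residues:
print(round(math.log2(20), 2))   # about 4.32 bits, vs the ~2 bits Blast assigns per identity

# The CARD11 example above, expressed simply as the difference between the
# two bitscores quoted in this comment (Callorhinchus milii vs Branchiostoma
# belcheri homology to the human form):
print(1514 - 192)                # roughly 1300 bits of homology added in vertebrates
```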

    Another reason for the underestimation bias is that the part of the sequence that is not conserved is often functional too, as I have argued many times, and here too at #29 with the very good example of RelA. I quote my conclusions there:

    “IOWs, my measure of functional information based on conserved homologies through long evolutionary times does measure functional information, but usually underestimates it. For example, in this case the value of 404 bits would measure only the conserved function in the DBD, but it would miss completely the undeniable functional information in the TAD domains, because that information, while certainly present, is not conserved among species.

    This is, IMO, a very important point.”

    So, my procedure to evaluate functional information in proteins is certainly precise enough and reliable, but certainly biased in the sense of underestimation, for at least two important reasons:

    a) The blast algorithm is a good but biased estimator of functional information: it certainly underestimates it.

    b) The functional information in non conserved parts of the sequence is not detected by the procedure.

    So, the simple conclusion is: my values of functional information are certainly a reliable indicator of true functional information in proteins, but the true value of functional information and of information jumps is certainly higher than the value I get from my procedure. IOWs, we can be sure that the real value of functional information in that protein or in that jump is at least the value given by my procedure.

  219. 219
    PeterA says:

    gpuccio,

    Thanks for the detailed explanation. Now I understand what you meant.

  220. 220
    gpuccio says:

    To all:

    Of course, it’s not only lncRNAs. Let’s not forget miRNAs!

    The functional analysis of MicroRNAs involved in NF-kB signaling.

    https://www.europeanreview.org/article/10746

    For those who love fancy diagrams, have a look at Fig. 1. 🙂

  221. 221
    PeterA says:

    Figure 1. Panoramic view of the NF-kB miRNA target genes and target genes of miRNAs.

    Wow!

  222. 222
    OLV says:

    GP,

    You’re keeping this discussion very interesting. Thanks.

    Here’s an NF-kB article.

    Here’s another NF-kB article.

  223. 223
    OLV says:

    GP,

    you may have opened a can of worms with this OP. 🙂
    This NF-kB seems to be all over the map.

    Another NF-kB paper

    One more NF-kB paper

    and another one

  224. 224
    gpuccio says:

    OLV:

    Thank you for the interesting links.

    The first two papers quoted at #223 are specially intriguing, in the light of all that we have discussed:

    The Regulation of NF-kB Subunits by Phosphorylation

    https://www.mdpi.com/2073-4409/5/1/12/htm

    Abstract: The NF-kB transcription factor is the master regulator of the inflammatory response and is essential for the homeostasis of the immune system. NF-kB regulates the transcription of genes that control inflammation, immune cell development, cell cycle, proliferation, and cell death. The fundamental role that NF-kB plays in key physiological processes makes it an important factor in determining health and disease. The importance of NF-kB in tissue homeostasis and immunity has frustrated therapeutic approaches aimed at inhibiting NF-kB activation. However, significant research efforts have revealed the crucial contribution of NF-kB phosphorylation to controlling NF-kB directed transactivation. Importantly, NF-kB phosphorylation controls transcription in a gene-specific manner, offering new opportunities to selectively target NF-kB for therapeutic benefit. This review will focus on the phosphorylation of the NF-kB subunits and the impact on NF-kB function.

    And:

    The Ubiquitination of NF-kB Subunits in the Control of Transcription

    https://www.mdpi.com/2073-4409/5/2/23/htm

    Abstract: Nuclear factor (NF)-kB has evolved as a latent, inducible family of transcription factors fundamental in the control of the inflammatory response. The transcription of hundreds of genes involved in inflammation and immune homeostasis require NF-kB, necessitating the need for its strict control. The inducible ubiquitination and proteasomal degradation of the cytoplasmic inhibitor of kB (IkB) proteins promotes the nuclear translocation and transcriptional activity of NF-kB. More recently, an additional role for ubiquitination in the regulation of NF-kB activity has been identified. In this case, the ubiquitination and degradation of the NF-kB subunits themselves plays a critical role in the termination of NF-kB activity and the associated transcriptional response. While there is still much to discover, a number of NF-kB ubiquitin ligases and deubiquitinases have now been identified which coordinate to regulate the NF-kB transcriptional response. This review will focus the regulation of NF-kB subunits by ubiquitination, the key regulatory components and their impact on NF-kB directed transcription.

    Phosphorylation and ubiquitination are certainly two very basic levels of regulation of almost all biological processes. They are really everywhere.

  225. 225
    Sven Mil says:

    Hmmm, Gpuccio, where to begin.
    You say (about sigma70 and TFIIB)
    ‘How can you even think, least of all state so boldy, that those two proteins are “virtually identical with respect to function”’

    But previously you have even admitted “Sigma factors are in some way the equivalent of generic TFs”
    (TFIIB is a generic TF)

    And wikipedia apparently says
    ‘sigma factor “is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB”’
    So both sigma and TFIIB’s main function is to catalyze RNA polymerase initiation.

    And the paper you have cited above says
    “several reports have indicated the possible functional analogy and/or evolutionary relatedness of bacterial sigma factors and eukaryotic TFIIB”
    “sigma factors and TFIIB both closely approach catalytic sites indicating direct and similar roles in initiation”
    “Furthermore, sigma factors and TFIIB each have multiple DNA binding helix-turn-helix (HTH) motifs, which typically include three crossing helices and two turns: H1-T1-H2-T2-H3. H3 is referred to as the “recognition helix” because sequences within T2 and toward the N-terminal end of H3 are most important for sequence recognition within the DNA major groove.”

    Sounds to me like the functions of these proteins are virtually identical.

  226. 226
    OLV says:

    Off topic but interestingly related to the concept of complex functional specified information:

    Veins and Arteries Build Hierarchical Branching Patterns Differently: Bottom-Up versus Top-Down
    Kristy Red-Horse, Arndt F. Siekmann
    DOI: 10.1002/bies.201800198

    Article

    Full text

    A tree-like hierarchical branching structure is present in many biological systems, such as the kidney, lung, mammary gland, and blood vessels. Most of these organs form through branching morphogenesis, where outward growth results in smaller and smaller branches. However, the blood vasculature is unique in that it exists as two trees (arterial and venous) connected at their tips. Obtaining this organization might therefore require unique developmental mechanisms.

    arterial trees often form in reverse order.

    initial arterial endothelial cell differentiation occurs outside of arterial vessels. These pre-artery cells then build trees by following a migratory path from smaller into larger arteries, a process guided by the forces imparted by blood flow. Thus, in comparison to other branched organs, arteries can obtain their structure through inward growth and coalescence.

    How hierarchical patterned trees form in diverse tubular organs, such as the kidney, lung, and vasculature, has been of scientific interest for several centuries.

    establishment of the vasculature follows unique developmental processes, guided by distinct mechanisms important for obtaining proper hierarchical structure and optimal organ function.

    The vasculature consists of two interconnected trees. Schematized drawing of arterial and venous blood vessel trees. Note that the two trees are interconnected at their tips, allowing blood to flow from the arteries to the veins.

    Although all hierarchical in nature, arteries in different organs and different organisms exhibit slightly different structures.

    Advances in our understanding of how arteries are constructed has revealed that arterial trees can form in a unique manner with respect to other hierarchically branched structures—via inward growth rather than outward branching morphogenesis

    distinct mechanisms can be responsible for the establishment of hierarchically patterned organs.

    live imaging, lineage tracing, and single cell transcriptional analyses indicate that the processes of sprouting, cell fate reacquisition, and cell migration are heavily intertwined, and are revealing general and organ-specific mechanisms.

    it might be necessary to target EC proliferation and migration in ways that were previously not appreciated when enhancing blood flow as a therapeutic aim to diseased or regenerating tissue; and that manipulating these parameters within ECs must be done with caution, because they might affect the formation of venous and arterial trees in opposite ways. Furthermore, it is now clear that genetic and hemodynamic factors interact during artery formation. However, it still needs to be determined exactly how these behaviors result in the exquisitely defined hierarchical branching of the final structure of mature arteries. These new insights are sure to be the subject of exciting studies in the near future.

     

     

  227. 227
    gpuccio says:

    Sven Mil:
    “Sounds to me like the functions of these proteins are virtually identical.”

    Not to me. Not at all. Nothing in the things you quote justifies your conclusion.

    However, if you like to think that way, it’s fine. This is a free world.

  228. 228
    Sven Mil says:

    Gpuccio, if you can’t grasp the simple fact that these proteins perform virtually identical functions, how can you expect people to believe your attempted evaluations of protein function and homology?

    Or maybe you refuse to admit this simple fact because you know that it means your “analyses” are garbage?

  229. 229
    gpuccio says:

    Sven Mil:

    “Virtually identical”? Funny indeed.

    Of course they both help starting transcription. That’s why they are “equivalent”, or have a “possible functional analogy and/or evolutionary relatedness”, or “similar roles”. In completely different organisms, having a very different transcription system, different proteins involved, different regulations.

    They have almost no sequence homology, as clearly shown by Blast, and some generic structural similarity in the DNA binding site.

    For you, that means that they have “virtually identical” functions. OK, everybody can judge what “virtually identical” means.

    For me, it’s not identical at all. Maybe very much virtual.

    And you know, I expect nothing from people, they can evaluate my facts and ideas and believe what they like.

    And, certainly, I expect nothing from you.

    Have a good day.

  230. 230
    jawa says:

    Virtual reality = reality?

    🙂

  231. 231
    bill cole says:

    Sven

    Gpuccio, if you can’t grasp the simple fact that these proteins perform virtually identical functions, how can you expect people to believe your attempted evaluations of protein function and homology?

    How would you support the claim of virtually identical functions? Maybe start by defining virtually identical. If you pass on this then I have to assume you are making a rhetorical argument only with no real scientific value.

  232. 232
    jawa says:

    Bill Cole,

    “making a rhetorical argument only with no real scientific value”

    That’s what it looks like.

  233. 233
    Sven Mil says:

    So, Gpuccio, you have to cling to this denial in order to support your design-of-the-gaps-BLASTing.
    Got it.

    The fact is, they don’t just “both help start transcription”.
    They perform the same function within the process of initiation, in fact, they both “closely approach catalytic sites indicating direct and similar roles in initiation” according to the paper you cited.
    There is only a handful of proteins that approach the RNA polymerase catalytic site in general (nevermind at the same time), and these are all associated with very specific functions (e.g. TFIIH).
    For the two proteins we are talking about (sigma and TFIIB) to both be approaching the catalytic site at the same time (during polymerase initiation), it can safely be said that their function is virtually identical.

  234. 234
    PavelU says:

    Sven Mil seems to have an interesting argument here.

  235. 235
    bill cole says:

    PavelU

    Sven Mil seems to have an interesting argument here.

    What argument do you think he is making?

  236. 236
    OLV says:

    Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function
    Mary Lauren Benton, Sai Charan Talipineni, Dennis Kostka & John A. Capra 

    BMC Genomics    volume 20, Article number: 511 (2019)
    DOI: 10.1186/s12864-019-5779-x

    Finally, we believe that ignoring enhancer diversity impedes research progress and replication, since “what we talk about when we talk about enhancers” includes diverse sequence elements across an incompletely understood spectrum, all of which are likely important for proper gene expression. Efforts to stratify enhancers into different classes, such as poised and latent, are steps in the right direction, but are likely too coarse given our incomplete current knowledge. We suspect that a more flexible model of distal regulatory regions is appropriate, with some displaying promoter-like sequence architectures and modifications and others with distinct regulatory properties in multiple, potentially uncharacterized, dimensions. Consistent and specific definitions of the spectrum of regulatory activity and architecture are necessary for further progress in enhancer identification, successful replication, and accurate genome annotation. In the interim, we must remember that genome-wide enhancer sets generated by current approaches should be treated as what they are—incomplete snapshots of a dynamic process.

     

  237. 237
    OLV says:

    Computational Biology Solutions to Identify Enhancers-target Gene Pairs
    Judith Mary Hariprakash, Francesco Ferrari

    DOI: 10.1016/j.csbj.2019.06.012

    Enhancers are non-coding regulatory elements that are distant from their target gene. Their characterization still remains elusive especially due to challenges in achieving a comprehensive pairing of enhancers and target genes.

    We expect this field will keep evolving in the coming years due to the ever-growing availability of data and increasing insights into enhancers crucial role in regulating genome functionality.

    Enhancers are distal regulatory elements with a crucial role in controlling the expression of genes. From many point of views they are analogous to promoters, but they are located at a larger distance from the transcription start site (TSS) of the gene they regulate. Enhancers act through the binding of transcription factors just like promoters. However, elucidating the function of enhancers remains more elusive for multiple reasons.

     

  238. 238
    gpuccio says:

    Sven Mil:

    First of all, I don’t need to cling to anything to defend my procedure, because you have made no real argument against it. If and when you do, I will defend it.

    I just noticed that the idea that the two functions are virtually identical, which you stated to add some apparent poison to your rhetorical non-argument, is simply wrong.

    The two functions are similar, but certainly not identical, either virtually or in any other way.

    Similar is a very simple English word. Can you understand it?

    If you had said that the two functions are similar, I would have agreed with you. I have said the same thing from the beginning.

    But the two proteins are very different, even if they are distant homologues and probably evolutionarily related.

    One is specifically engineered to help initiate transcription in prokaryotes. The other one is specifically engineered to help initiate transcription in eukaryotes.

    And, as everybody knows, transcription in prokaryotes and in eukaryotes is very different.

  239. 239
    PeterA says:

    To Whom This May Concern:
    GP’s method for quantifying relatively sudden appearances of significant amounts of complex functional information within protein groups, has been extensively explained many times in this website and it’s obviously very well supported both theoretically and empirically.
    GP’s detailed explanations are freely available to anyone interested in reading and understanding them.

  240. 240
    Sven Mil says:

    Oh brother Gpuccio, let me spell it out so that even you can understand.
    These two proteins occupy the same space at the same time in their respective systems.
    Just skimming the paper you cited yourself:
    “In RNAP complexes with an open transcription bubble, sigma factors and TFIIB both closely approach catalytic sites indicating direct and similar roles in initiation.”

    “Furthermore, sigma factors and TFIIB each have multiple DNA binding helix-turn-helix (HTH) motifs”… which contain the

    “recognition helix” which is “most important for sequence recognition within the DNA major groove”

    “Here, 2-HTH motifs of bacterial sigma factors and eukaryotic TFIIB are shown to occupy homologous environments within initiating RNAP and RNAP II complexes”

    “Based on extensive apparent structural homology, amino acid sequence alignments were generated, supporting the conclusion that sigma factors, TFB and TFIIB are homologs.”
    They detect homology, why can’t you Gpucc? =)

    When modeling the structure of the entire RNA polymerase complex: “The two C-terminal sigma and TFIIB HTH motifs appear to occupy homologous positions in the structures.”

    “Remarkably, sigma CLR/HTH3.0-3.1 and TFIIB CLR/HTH1 occupy homologous positions, and sigma CLR/HTH4.1-4.2 and TFIIB CLR/HTH2 also appear to occupy homologous positions.”

    “The B-reader region approaches the RNAP II active site and, although not homologous by orientation (N→C) or sequence to sigma-Linker3.1-3.2, appears to have convergent functions in initiation and promoter escape.”

    “TFB/TFIIB CLR/HTH2 binding to BREup anchors the initiating complex on ds DNA and establishes the direction of transcription analogously to the anchoring of sigma CLR/HTH4.1-4.2 binding to the ds -35 region of the bacterial promoter”

    There’s tons more, I have filtered out most of the technical/jargony stuff for your benefit.

    A quick look at that paper makes it clear that these two proteins perform the same function.
    I can’t make it any clearer than that.
    Now either you haven’t looked at the paper, or you are clinging to your denial for the sake of your method.

    As for your method:
    You have, “blasted sigma 70 from E. coli with human TFIIB and found no detectable homology”

    So, to reiterate, you are unable to detect the relationship between these two proteins which perform the same function.
    This raises many questions and issues with respect to your analyses.
    – how much bias are you introducing into your analyses by only being able to detect high homology? (you probably have no idea)

    – how much are you missing? (I bet it’s a whole lot and also that you probably have no idea)

    – if you can only detect high homology (as you have already admitted that’s what your method does) wouldn’t you always have a jump in information?
    (the jump is due to your method being unable to detect low-mid homology; not sudden inputs of information from a designer as you love to imply)

    – how could two proteins, vastly different in sequence (according to your BLASTing) carry out the same function?
    (either your method is just not good at assessing structure/function relationships, or your assumptions about protein function in sequence space are wrong)
    (probably both)

    Hopefully that was in simple enough English for you. Can you understand it?

  241. 241
    ET says:

    Sven Mil:

    how could two proteins, vastly different in sequence (according to your BLASTing) carry out the same function?

    Easily: how can two sentences with vastly different letter sequences carry the same message? Better yet, what is the evidence that blind and mindless processes produced either of the proteins? How can such a concept be tested?

  242. 242
    pw says:

    This discussion seems interesting, but flies high above my head.
    What are the main differences between prokaryotic and eukaryotic cells?

    I tried to search for it but got a gazillion results and don’t know where to start.

    Here are some abbreviations used in this discussion:

    BRE TFB/TFIIB recognition element
    CLR/HTH cyclin-like repeat/helix-turn-helix domain
    DPBB double psi beta barrel
    DDRP DNA-dependent RNA polymerase
    GTF general transcription factor
    LECA last eukaryotic common ancestor
    LUCA last universal common ancestor
    Ms Methanocaldococcus sp. FS406-22
    PIF primordial initiation factor
    RDRP RNA-dependent RNA polymerase
    RNAP RNA polymerase
    Sc Saccharomyces cerevisiae
    TFB transcription factor B
    TFIIB transcription factor for RNAP II, factor B
    Tt Thermus thermophilus

  243. 243
    bill cole says:

    – how could two proteins, vastly different in sequence (according to your BLASTing) carry out the same function?

    Different binding partners in the function. The different binding partners can change the rate of transcription. You may be comparing a light switch to a light dimmer and not know it. Gpuccio’s method measures protein sequence divergence over time, showing resistance to change based on purifying selection. This allows you to demonstrate substitutability and therefore genetic information. You first need to understand his method before trying to make an argument. So far you are talking over him. When you compare a eukaryotic cell to a prokaryotic cell you are using apples and oranges for your comparison, and your argument fails.

  244. 244
    gpuccio says:

    Sven Mil:

    As you have tried to make your non arguments more detailed, you certainly deserve a more detailed answer. As at present I can only answer from my phone, I will be brief for the moment (I am very bad at typing on the phone). Tomorrow I should be able to answer in greater length.

    Your biggest errors (but not the only ones) are:

    a) Thinking that I am denying that the two proteins are homologues, or evolutionary related. That is completely false. I have simply blasted the two proteins, and found no detectable homology. That is a simple fact. You can blast them too, and you will have the same result. That means that there is no obvious sequence homology using the default blast algorithm. Again, that is a very simple fact. I have also said that the authors of the paper I linked had used a different method, using structural considerations and different alignment algorithms, because they were interested in detecting a weak relationship to find a possible evolutionary relationship. That’s perfectly fine, but I have no interest in affirming or denying a possible evolutionary relationship. If the two proteins are evolutionarily related, that’s no problem for me. As you know, I believe in Common Descent by design.

    b) Thinking that two similar functions are identical. I have already discussed that. Just to add a point, of course all proteins that bind DNA, and that includes all TFs, have a DBD. I don’t think that makes their functions identical, virtually or not.

    c) Thinking that I have problems with the idea that two proteins with highly different sequence can have a similar function. I have no problems with that. But the simple fact remains that in most cases proteins that retain a highly similar, maybe almost identical function through billions of years, like the alpha and beta chains of ATP synthase, show high sequence conservation. Look also at histones and ubiquitin, and thousands and thousands of other examples. Nobody who really believes in the basics of modern biology can deny that sequence conservation through long evolutionary periods is a measure of functional constraint.

    d) Thinking that I can detect only high sequence homologies. That is completely false. I use the default blast algorithm to have always the same tool in measuring sequence homology. And the default blast algorithm detects very well most sequence homologies, both low and high, and gives a definite measure of the relevance of those homologies in statistical terms, the E value. So, when I say that I could find no detectable homology, I mean a very precise fact: that blasting those two sequences, that I have clearly indicated, with the default blast algorithm, no homology is detected that reaches a significant E value. Again, you can blast the two sequences yourself. This is the method commonly used to detect homology between sequences.

    e) So, my procedure detects sequence homologies, both weak and strong. I am interested in jumps not because I can only detect jumps, as you foolishly seem to suggest, but because jumps are clear indicators of design. I find a lot of jumps, some of them really big, and I find a lot of non-jumps. As my graphics clearly show. For example, as I have argued in this same thread, TFs usually do not show big jumps, for example at the vertebrate level, for two interesting reasons:

    1) Their DBDs are highly conserved and very old, usually older than the appearance of vertebrates, and usually already well detectable in single-celled eukaryotes.

    2) Their other domains or sequences are usually poorly conserved during the evolutionary history of metazoa. However, there are strong indications that such a sequence diversification is functional, and not simply a case of neutral variation in non functional sequences. I have made this argument here for RelA, at post #29.

    Well, that is enough for the moment.

  245. 245
    OLV says:

    Displacement of the transcription factor B reader domain during transcription initiation

    Stefan Dexl, Robert Reichelt, Katharina Kraatz, Sarah Schulz, Dina Grohmann, Michael Bartlett, Michael Thomm

    Nucleic Acids Research, Volume 46, Issue 19, Pages 10066–10081
    DOI: 10.1093/nar/gky699

    Transcription initiation by archaeal RNA polymerase (RNAP) and eukaryotic RNAP II requires the general transcription factor (TF) B/ IIB. Structural analyses of eukaryotic transcription initiation complexes locate the B-reader domain of TFIIB in close proximity to the active site of RNAP II. Here, we present the first crosslinking mapping data that describe the dynamic transitions of an archaeal TFB to provide evidence for structural rearrangements within the transcription complex during transition from initiation to early elongation phase of transcription. Using a highly specific UV-inducible crosslinking system based on the unnatural amino acid para-benzoyl-phenylalanine allowed us to analyze contacts of the Pyrococcus furiosus TFB B-reader domain with site-specific radiolabeled DNA templates in preinitiation and initially transcribing complexes. Crosslink reactions at different initiation steps demonstrate interactions of TFB with DNA at registers +6 to +14, and reduced contacts at +15, with structural transitions of the B-reader domain detected at register +10. Our data suggest that the B-reader domain of TFB interacts with nascent RNA at register +6 and +8 and it is displaced from the transcribed-strand during the transition from +9 to +10, followed by the collapse of the transcription bubble and release of TFB from register +15 onwards.

  246. 246
    OLV says:

    Design Principles Of Mammalian Transcriptional Regulation

    Transcriptional regulation occurs via changes to different biochemical steps of transcription, but it remains unclear which steps are subject to change upon biological perturbation. Single cell studies have revealed that transcription occurs in discontinuous bursts, suggesting that features of such bursts like burst fraction (what fraction of time a gene spends transcribing RNA) and burst intensity could be points of transcriptional regulation. Both how such features might be regulated and the prevalence of such modes of regulation are unclear. I first used a synthetic transcription factor to increase enhancer-promoter contact at the β-globin locus. Increasing promoter-enhancer contact specifically modulated the burst fraction of β-globin in both immortalized mouse and primary human erythroid cells. This finding raised the question of how generally important the phenomenon of burst fraction regulation might be, compared to other modes of regulation. For example, biochemical studies have suggested that stimuli predominantly affect the rate of RNA polymerase II (Pol II) binding and the rate of Pol II release from promoter-proximal pausing, but the prevalence of these modes of regulation compared to changes in bursting had not been examined. I combined Pol II ChIP-seq and single cell transcriptional measurements to reveal that an independently regulated burst initiation step is required before polymerase binding can occur, and that the change in burst fraction produced by increased enhancer-promoter contact was caused by an increased burst initiation rate. Using a number of global and targeted transcriptional regulatory perturbations, I showed that biological perturbations regulated both burst initiation and polymerase pause release rates, but seemed not to regulate polymerase binding rate. Our results suggest that transcriptional regulation primarily acts by changing the rates of burst initiation and polymerase pause release.

    The cells of a eukaryotic organism all share the same genome; however, they differentiate from a single zygote into many different cell types that carry out different functions mediated by the expression of cell-type-specific suites of proteins. A major focus of biological science has been to understand how cells with the same genome can induce and maintain such divergent functional states. Relatedly, eukaryotic cells must be able to respond quickly to certain stimuli by changing protein expression: canonical examples of such stimuli include heat shock or inflammatory signals. Both cell-type identity and functional responses to signaling are chiefly governed at the level of DNA transcription into RNA, though other processes like protein posttranslational modification and degradation also play important roles.

    Many unanswered questions related to the regulation of transcriptional bursting persist.

     

     

  247. 247
    OLV says:

    Dynamic interplay between enhancer–promoter topology and gene activity
     

    A long-standing question in gene regulation is how remote enhancers communicate with their target promoters, and specifically how chromatin topology dynamically relates to gene activation. Here, we combine genome editing and multi-color live imaging to simultaneously visualize physical enhancer–promoter interaction and transcription at the single-cell level in Drosophila embryos. By examining transcriptional activation of a reporter by the endogenous even-skipped enhancers, which are located 150 kb away, we identify three distinct topological conformation states and measure their transition kinetics. We show that sustained proximity of the enhancer to its target is required for activation. Transcription in turn affects the three-dimensional topology as it enhances the temporal stability of the proximal conformation and is associated with further spatial compaction. Furthermore, the facilitated long-range activation results in transcriptional competition at the locus, causing corresponding developmental defects. Our approach offers quantitative insight into the spatial and temporal determinants of long-range gene regulation and their implications for cellular fates.

     

  248. 248
    OLV says:

    A genome disconnect

    Chromatin loops and domains are major organizational hallmarks of chromosomes. New work suggests, however, that these topological features of the genome are poor global predictors of gene activity, raising questions about their function.

     
    Highly rearranged chromosomes reveal uncoupling between genome topology and gene expression
     

    Chromatin topology is intricately linked to gene expression, yet its functional requirement remains unclear. Here, we comprehensively assessed the interplay between genome topology and gene expression using highly rearranged chromosomes (balancers) spanning ~75% of the Drosophila genome. Using trans-heterozygote (balancer/wild-type) embryos, we measured allele-specific changes in topology and gene expression in cis, while minimizing trans effects. Through genome sequencing, we resolved eight large nested inversions, smaller inversions, duplications and thousands of deletions. These extensive rearrangements caused many changes to chromatin topology, disrupting long-range loops, topologically associating domains (TADs) and promoter interactions, yet these are not predictive of changes in expression. Gene expression is generally not altered around inversion breakpoints, indicating that mis-appropriate enhancer–promoter activation is a rare event. Similarly, shuffling or fusing TADs, changing intra-TAD connections and disrupting long-range inter-TAD loops does not alter expression for the majority of genes. Our results suggest that properties other than chromatin topology ensure productive enhancer–promoter interactions.

    The plot thickens:
     
    Does rearranging chromosomes affect their function?

     

  249. 249
    OLV says:

    GP,

    The plot thickens…

    “changes in chromatin domains were not predictive of changes in gene expression. This means that besides domains, there must be other mechanisms in place that control the specificity of interactions between enhancers and their target genes.”

    More control mechanisms?

    Don’t we have enough control mechanisms to keep track of already?
    🙂

  250. 250
    OLV says:

    Biology research seems like a never-ending story:
The more we know, the more there is for us to learn.
    Really fascinating, isn’t it?

    Shaking the dogma

    “These results question the generality of a current dogma in the field, that chromatin domains (TADs) are essential to constrain and restrict enhancer function,” says Eileen Furlong, the EMBL group leader who led the study.

    “We were able to show that major changes in the 3D organisation of the genome had surprisingly little effect on the expression of most genes, at least in this biological context. The results indicate that while some genes are affected, many appear resistant to rearrangements in their chromatin domain, and that only a small fraction of genes are sensitive to such changes in their topology.”

    Enhancers are not that promiscuous

    This raises many interesting questions in the field of chromatin topology, for example: what are these other mechanisms that control the interactions between enhancers and their target genes? Many enhancers do not appear to be promiscuous: they do not link to just any target gene, but rather have preferred partners. The team will continue to dissect this by using genetics, optogenetics (a technique to control protein activity with laser light) and single-cell approaches. This will allow them to study the impact of many more perturbations to chromatin topology in both cis and trans.

  251. 251
    gpuccio says:

    OLV:

    Thank you for the very interesting links.

    Yes, we are certainly not even near to a real understanding of how transcription is regulated.

    More on these fascinating topics as soon as I can use again a true keyboard! 🙂

  252. 252
    OLV says:

    GPuccio,

    It’s my pleasure to post links to interesting papers that sometimes I find in different journals. In some cases they may shed more light on the discussed topics.

  253. 253
    gpuccio says:

    Sven Mil, OLV and all:

    Some more facts:

1) The archaeal TFB shows definite and highly significant sequence homology with human TFIIB. These are the results of the usual Blast alignment, always using the default algorithm and nothing else:

    Proteins: Human general TFIIB (Q00403) vs archaeal TFB (A0A2D6Q6B7):

    Identities: 93;
    Positives: 154;
    Bitscore: 172 bits;
    E value: 2e-56

    2) No significant sequence homology can be detected instead, using the same identical methodology, between bacterial sigma factor 70 and the archeal TFB:

    Proteins: Sigma factor 70 E. coli (P00579) vs archaeal TFB (A0A2D6Q6B7):

    Identities: 25;
    Positives: 44;
    Bitscore: 16.2 bits;
    E value: 2.0

    3) And, of course, as already said, no significant sequence homology can be detected, using the same identical methodology, between bacterial sigma factor 70 and human TFIIB:

    Proteins: Sigma factor 70 E. coli (P00579) vs Human general TFIIB (Q00403):

    Identities: 32;
    Positives: 49;
    Bitscore: 16.9 bits;
    E value: 1.4

    (plus three more non significant short alignments, with E values of 2.5, 2.7, 3.6)
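For readers who would like to reproduce these pairwise comparisons, here is a minimal sketch that calls the standard NCBI BLAST+ command-line tool from Python. It assumes blastp is installed and that the UniProt sequences quoted above have been downloaded as FASTA files beforehand; the file names are only illustrative:

import subprocess

def pairwise_blastp(query_fasta, subject_fasta):
    # Tabular output: number of identities, positives, bit score and E-value
    # for each local alignment found with the default blastp parameters.
    fmt = "6 qseqid sseqid nident positive bitscore evalue"
    result = subprocess.run(
        ["blastp", "-query", query_fasta, "-subject", subject_fasta, "-outfmt", fmt],
        capture_output=True, text=True, check=True)
    return result.stdout

# Human TFIIB (Q00403) vs archaeal TFB (A0A2D6Q6B7); file names are illustrative.
print(pairwise_blastp("Q00403.fasta", "A0A2D6Q6B7.fasta"))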

These are simple facts that can be verified by all. At the sequence level, there is a definite homology (though only partial) between the archaeal protein and the human protein. That corresponds to the well-known concept that transcription initiation in archaea is much more similar to transcription initiation in eukaryotes, while in bacteria it is very different. Indeed, no significant sequence homology can be detected, always using that same methodology, between the human and the bacterial protein, or between the bacterial and the archaeal protein. These simple facts are undeniable.

    Check what I have written in my comment #202, to John_a_designer:

“Now, in eukaryotes there are six general TFs. Archaea have 3. In bacteria sigma factors have the role of general TFs. Sigma factors, archaeal general TFs and eukaryotic general TFs seem to share some homology. I think that the archaeal system, however, is much more similar to the eukaryotic system, and that includes RNA polymerases.

While archaea are more similar to eukaryotes in the system of general TFs, the regulation of transcription by one or two suppressors or activators seems to be similar to what is described for bacteria.

Finally, there is another important aspect where archaea are more similar to eukarya. Their chromatin structure is based on histones and nucleosomes, like in eukaryotes, but the system is rather different from the corresponding eukaryotic system.

Instead, bacteria have their own form of DNA compaction, but it is not based on histones and nucleosomes.”

  254. 254
    OLV says:

    GPuccio @253:

    Excellent explanation. Thanks!

  255. 255
    jawa says:

This discussion is the third most visited in the last 30 days!
    Definitely a fascinating topic.
    Congratulations to GP!

  256. 256
    gpuccio says:

    To all:

    It seems perfectly natural that a polymorphic semiotic system like NF-kB is strictly regulated by another universal semiotic system, the ubiquitin system. And the regulation is not simple at all, but deeply and semiotically complex:

    The Met1-Linked Ubiquitin Machinery: Emerging Themes of (De)regulation

    https://www.cell.com/molecular-cell/fulltext/S1097-2765(17)30649-4?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1097276517306494%3Fshowall%3Dtrue

The linear ubiquitin chain assembly complex, LUBAC, is the only known mammalian ubiquitin ligase that makes methionine 1 (Met1)-linked polyubiquitin (also referred to as linear ubiquitin). A decade after LUBAC was discovered as a cellular activity of unknown function, there are now many lines of evidence connecting Met1-linked polyubiquitin to NF-kB signaling, cell death, inflammation, immunity, and cancer. We now know that Met1-linked polyubiquitin has potent signaling functions and that its deregulation is connected to disease. Indeed, mutations and deficiencies in several factors involved in conjugation and deconjugation of Met1-linked polyubiquitin have been implicated in immune-related disorders. Here, we discuss current knowledge and recent insights into the role and regulation of Met1-linked polyubiquitin, with an emphasis on the mechanisms controlling the function of LUBAC.

    Main Text:

    Transcription factors in the nuclear factor-kB (NF-kB) family orchestrate inflammatory responses and their activation by immune receptors, such as pattern recognition receptors (PRRs), cytokine receptors, and antigen receptors, is important for innate and adaptive immune function. A unifying feature of the signaling processes triggered by these receptors is that they rely on formation of ubiquitin (Ub) chains to transmit the signal from the activated receptor to the nucleus for stimulation of NF-kB-mediated transcription (Figure 1).

    The discovery that Ub chains are required for NF-kB activation was reported more than 20 years ago with the finding that Inhibitor of NF-kB alpha (IkBalpha, also termed NFKBIA) is modified with Ub chains (linked via Lys48; Lys48-Ub) in response to receptor activation, leading to its rapid degradation via the proteasome (Chen et al., 1995, Palombella et al., 1994, Traenckner et al., 1994). Subsequently, a series of studies by Zhijian “James” Chen and colleagues showed that Ub chains linked via Lys63 (Lys63-Ub) play a non-degradative role in kinase signaling and NF-kB activation by facilitating the activation of transforming growth factor (TGF)-beta-activated kinase 1 (TAK1) (Deng et al., 2000, Wang et al., 2001). In 2006, Kazuhiro Iwai and colleagues identified a Ub E3 ligase complex that only assembles Ub chains through the N-terminal methionine (Met1-Ub); they called this the linear Ub chain assembly complex (LUBAC) and subsequently discovered that LUBAC stimulates NF-kB activity by conjugating Met1-Ub (Tokunaga et al., 2009, Kirisako et al., 2006). Now, after 10 years of research into LUBAC and Met1-Ub biology, it is clear that Met1-Ub harbors potent signaling properties and, together with Lys63-Ub and Lys48-Ub, plays a central role in NF-kB activation and immune function (Figure 2). Met1-Ub is also implicated in signaling by viral nucleotide-sensing receptors, leading to interferon response factor (IRF)-mediated transcription (Figure 1) and other signaling processes (reviewed in Sasaki and Iwai, 2015). In this review, we primarily discuss its role in NF-kB signaling.

  257. 257
    gpuccio says:

    To all:

    Oh, this is really new. Did you know that TFs seem to have a key role not only in nuclear transcription regulation, but also in the regulation of those other strange genome-bearing organelles, the mitochondria?

    Of course, NF-kB is one of the TF systems involved there, too:

    Nuclear Transcription Factors in the Mitochondria: A New Paradigm in Fine-Tuning Mitochondrial Metabolism.

    https://www.ncbi.nlm.nih.gov/pubmed/27417432

    Abstract:

Noncanonical functions of several nuclear transcription factors in the mitochondria have been gaining exceptional traction over the years. These transcription factors include nuclear hormone receptors like estrogen, glucocorticoid, and thyroid hormone receptors; p53, IRF3, STAT3, STAT5, CREB, NF-kB, and MEF-2D. Mitochondria-localized nuclear transcription factors regulate mitochondrial processes like apoptosis, respiration and mitochondrial transcription albeit being nuclear in origin and having nuclear functions. Hence, the cell permits these multi-stationed transcription factors to orchestrate and fine-tune cellular metabolism at various levels of operation. Despite their ubiquitous distribution in different subcompartments of mitochondria, their targeting mechanism is poorly understood. Here, we review the current status of mitochondria-localized transcription factors and discuss the possible targeting mechanism besides the functional interplay between these factors.

    Emphasis mine.

    A new paradigm in fine tuning? We are becoming accustomed to that kind of thing, I suppose! 🙂

  258. 258
    gpuccio says:

    To all:

    Another rather exotic level of regulation of the NF-kB system: immunophilins.

    Regulation of NF-kB signalling cascade by immunophilins

    http://www.eurekaselect.com/131456/article

    Abstract:

The fine regulation of signalling cascades is a key event required to maintain the appropriate functional properties of a cell when a given stimulus triggers specific biological responses. In this sense, cumulative experimental evidence during the last years has shown that high molecular weight immunophilins possess a fundamental importance in the regulation of many of these processes. It was first discovered that TPR-domain immunophilins such as FKBP51 and FKBP52 play a cardinal role, usually in an antagonistic fashion, in the regulation of several members of the steroid receptor family via its interaction with the heat-shock protein of 90-kDa, Hsp90. These Hsp90-associated cochaperones form a functional unit with the molecular chaperone influencing ligand binding capacity, receptor trafficking, and hormone-dependent transcriptional activity. Recently, it was demonstrated that the same immunophilins are also able to regulate the NF-kB signalling cascade in an Hsp90 independent manner. In this article we analyze these properties and discuss the relevance of this novel regulatory pathway in the context of the pleiotropic actions managed by NF-kB in several cell types and tissues.

    Emphasis mine.

    You may rightfully ask: what are immunophilins?

    Let’s take a simple answer from Wikipedia:

“immunophilins are endogenous cytosolic peptidyl-prolyl isomerases (PPI) that catalyze the interconversion between the cis and trans isomers of peptide bonds containing the amino acid proline (Pro). They are chaperone molecules that generally assist in the proper folding of diverse “client” proteins”.

    Here is a recent review about them:

    Biological Actions of the Hsp90-binding Immunophilins FKBP51 and FKBP52

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6406450/

    Abstract:

    Immunophilins are a family of proteins whose signature domain is the peptidylprolyl-isomerase domain. High molecular weight immunophilins are characterized by the additional presence of tetratricopeptide-repeats (TPR) through which they bind to the 90-kDa heat-shock protein (Hsp90), and via this chaperone, immunophilins contribute to the regulation of the biological functions of several client-proteins. Among these Hsp90-binding immunophilins, there are two highly homologous members named FKBP51 and FKBP52 (FK506-binding protein of 51-kDa and 52-kDa, respectively) that were first characterized as components of the Hsp90-based heterocomplex associated to steroid receptors. Afterwards, they emerged as likely contributors to a variety of other hormone-dependent diseases, stress-related pathologies, psychiatric disorders, cancer, and other syndromes characterized by misfolded proteins. The differential biological actions of these immunophilins have been assigned to the structurally similar, but functionally divergent enzymatic domain. Nonetheless, they also require the complementary input of the TPR domain, most likely due to their dependence with the association to Hsp90 as a functional unit. FKBP51 and FKBP52 regulate a variety of biological processes such as steroid receptor action, transcriptional activity, protein conformation, protein trafficking, cell differentiation, apoptosis, cancer progression, telomerase activity, cytoskeleton architecture, etc. In this article we discuss the biology of these events and some mechanistic aspects.

    In particular, section 6: “Immunophilins Regulate NF-kB Activity”

  259. 259
    OLV says:

    GPuccio,

    Definitely you’re on a roll!
    You’ve referenced several very interesting papers in a row.

  260. 260
    OLV says:

    GP @257:

    “orchestrate and fine-tune cellular metabolism at various levels of operation.”

    “A new paradigm in fine tuning? We are becoming accustomed to that kind of thing,”

    Agree.

  261. 261
    OLV says:

    GP @256:

    “It seems perfectly natural that a polymorphic semiotic system like NF-kB is strictly regulated by another universal semiotic system, the ubiquitin system. And the regulation is not simple at all, but deeply and semiotically complex:”

Is it also natural that those semiotic systems resulted from natural selection operating on random variations over gazillions of years?

    I’m looking for the literature where this is explained.

    For example, what did those systems evolve from? What were their ancestors?

  262. 262
    jawa says:

    OLV @261:

Maybe Sven Mil can help answer your questions, after he responds to GP’s comments addressed to him following his last comment @240?

    🙂

  263. 263
    PeterA says:

    jawa,

    That discussion is over. GP took care of it appropriately and wisely continued to provide very interesting information on the current topic.

    This thread has already exceeded my expectations.

  264. 264
    OLV says:

    GP,

There’s so much literature on transcription regulation that it’s difficult to look at it all. Here’s just a small sample:

    (Note that you have cited some of these papers)

    Transcription-driven chromatin repression of Intragenic transcription start sites
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6373976/

    Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6585034/

    Computational Biology Solutions to Identify Enhancers-target Gene Pairs
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6611831/

    Detection of condition-specific marker genes from RNA-seq data with MGFR
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6542349/

    Enhancer RNAs: Insights Into Their Biological Role
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6505235/

    Epigenetic control of early dendritic cell lineage specification by the transcription factor IRF8 in mice
    http://www.bloodjournal.org/co.....ecked=true

    Competitive endogenous RNA is an intrinsic component of EMT regulatory circuits and modulates EMT
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6456586/

    Delta Like-1 Gene Mutation: A Novel Cause of Congenital Vertebral Malformation
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6593294/

  265. 265
    OLV says:

    GP,

The increasing number of research papers on this OP’s topic definitely points to complex functional information-processing systems with multiple control levels that can only result from conscious design.

    Please, I would like to read your comments on any of the papers linked @264 that you haven’t cited before. Thanks.

  266. 266
    OLV says:

    Widespread roles of enhancer-like transposable elements in cell identity and long-range genomic interactions
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6314169/

  267. 267
    Sven Mil says:

    The fact is Gpucc, you are unable to detect homology between two proteins that perform the same function and that have been shown to be homologs by other methods.
    This means your method is simply not sensitive enough (as you have already admitted) to trace the evolution of proteins back in the way that you are attempting to.
    You can detect high conservation (i.e. when a protein’s functional niche has become well-defined and locked into place evolutionarily speaking) but you are completely unable to detect the actual evolution of a protein
    And that’s why you will always find your “jumps” if you go back far enough.
    You seem smart enough that I’d bet you knew that from the start… Guess I shouldn’t really be surprised

  268. 268
    ET says:

    sven mil:

    The fact is Gpucc, you are unable to detect homology between two proteins that perform the same function and that have been shown to be homologs by other methods.

How are you using the word “homology”? Convergence explains two different proteins having the same function, as does a common design.

    You can detect high conservation (i.e. when a protein’s functional niche has become well-defined and locked into place evolutionarily speaking) but you are completely unable to detect the actual evolution of a protein

    Is there any evidence that blind and mindless processes can produce proteins? I would think that gpuccio is open to the concept of proteins evolving by means of intelligent design.

  269. 269
    PeterA says:

    GP, Sven Mil, ET, et al,
    I’m ignorant of basic biology.
    I’ve tried to understand what you’re discussing but can’t figure it out.
    Please, explain this to me in easy to understand terms:
    1. Are you comparing two proteins P1 and P2 which work for prokaryotes (P1) and eukaryotes (P2) respectively?
    2. Could P1 work for eukaryotes too?
    2.1. If YES then why wasn’t it kept in eukaryotes rather than being replaced by P2?
    3. Any idea how P1 and P2 could have appeared?
    I may have more questions, but these are fine to start.
    Note that I would like to read the answers from all of you and from other readers of this discussion.
    Thanks.

  270. 270
    gpuccio says:

    Sven Mil at #267:

    You really don’t understand, do you?

    My method (blast alignment by the default algorithm) is simply the method used routinely by almost all researchers to detect sequence homology. So, I am not doing anything particular, as you seem to believe.

    Those who are interested in detecting weak and distant homologies, of course, can use other methods, like more sensitive algorithms of alignment and structural similarities, if they like. That will give higher sensitivity and lower specificity in detecting if two proteins are homologues. IOWs, more false positives.

    As I have said many times, I am not trying to detect if two proteins are distant homologues, because that has nothing to do with my reasoning.

    The researchers you quote say that sigma factor and human TFIIB are homologues? Maybe. Maybe not. Anyway, I have no problems with that statement. If they are, they are. That makes no difference in my reasoning.

    More in next post.

  271. 271
    OLV says:

    Antisense Transcription in Loci Associated to Hereditary Neurodegenerative Diseases
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6614138/

    Natural antisense transcripts are common features of mammalian genes providing additional regulatory layers of gene expression. A comprehensive description of antisense transcription in loci associated to familial neurodegenerative diseases may identify key players in gene regulation and provide tools for manipulating gene expression.

    This work provides evidence for the existence of additional regulatory mechanisms of the expression of neurodegenerative disease-causing genes by previously not-annotated and/or not-validated antisense long noncoding RNAs.

  272. 272
    gpuccio says:

    Sven Mil (and all):

    What you really don’t understand (or simply pretend not to understand) is that I am in no way trying to “trace the evolution of proteins back”, as you seem to believe. I am trying to detect and locate in space and time the appearance of new complex functional information during the evolution of proteins, whatever their distant origin may be. That’s why I look for information jumps, in the form of new specific sequences that appear at some evolutionary time and are then conserved for hundreds of million years. I have explained the rationale for that many times.

You say that I “will always find those jumps if I go back far enough”. That’s simply not true.

Take, for example, the case of the alpha and beta chains of ATP synthase, which I often use as an example. There is no jump there. Except, probably, when those proteins first appeared; but we simply don’t know when that was, because those proteins are present in bacteria and in all living eukaryotes. So, no jumps here: only thousands of bits of information conserved for billions of years. You just have to explain how that functional information came into existence.

    Instead, I have described a lot of functional jumps in the transitions to vertebrates: functional proteins that usually already existed, or sometimes appear de novo, and whose sequence specificity is then conserved for the next 400 million years.

    So, I detect those jumps for the simple reason that they are there. Those proteins, even if they already existed in previous deuterostomia and chordates, have been highly re-engineered in vertebrates.

    Do I “always find jumps”?

    Absolutely not. For a lot of proteins, there is no jump at the transition to vertebrates. They remain almost the same, or you can observe those weak and gradual differences that are compatible with neutral evolution. The simple reason for that is that those proteins have not been re-engineered in vertebrates, they have just kept their old function. The alpha and beta chains of ATP synthase are good examples.

    But a lot of other proteins do show big jumps at the transition to vertebrates. Those are the jumps that I have discussed in my OPs.

IOWs, I detect jumps if they are there, and I do not detect them if they are not there. As it should be.

    IOWs, Blast as I use it is a very good tool to detect and measure sequence homology between proteins.

    As serious scientists all over the world know very well.
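As a purely illustrative aside, the quantitative idea behind a “jump” can be sketched in a few lines of Python, assuming that conserved similarity is measured, as above, by a BLAST bit score. The numbers below are placeholders, not real data:

def information_jump(bits_pre_vertebrate, bits_vertebrate, protein_length_aa):
    # Difference in conserved information between the best pre-vertebrate hit
    # and the best vertebrate hit, in bits and in bits per amino acid.
    jump = bits_vertebrate - bits_pre_vertebrate
    return jump, jump / protein_length_aa

# Placeholder example: a 1000-AA protein whose best hit in pre-vertebrate
# deuterostomes scores 400 bits, but 1600 bits in cartilaginous fish.
print(information_jump(400.0, 1600.0, 1000))   # -> (1200.0, 1.2)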

  273. 273
    OLV says:

    GP @270 & 272:

    Clear concise explanations. Thanks.

    Let’s hope Sven Mil gets it this time.

    Sometimes the penny doesn’t drop right away.

    🙂

  274. 274
    OLV says:

    PeterA @269:

    I’m biology-challenged too, but regarding your third question, I think most proteins are produced through gene expression: transcription, post-transcriptional modifications, translation, post-translational modifications.

  275. 275
    ET says:

    Thank you, gpuccio. “Sequence homology” is not functional similarity. Sven Mil seems to have the two confused.

  276. 276
    OLV says:

    Antisense Transcription in Loci Associated to Hereditary Neurodegenerative Diseases
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6614138/

    Natural antisense (AS) transcripts are RNA molecules that are transcribed from the opposite DNA strand to sense (S) transcripts, partially or fully overlapping to form S/AS pairs. It is now well documented that AS transcription is a common feature of genomes from bacteria to mammals

    Thousands of lncRNA genes have been identified in mammalian genomes, with their number increasing steadily

    It is now clear that lncRNAs can regulate several biological processes, including those that underlie human diseases and yet their detailed functional characterization remains limited.

    Altogether, our results highlight the enormous complexity of gene regulation by antisense lncRNAs at any given locus

  277. 277
    OLV says:

    Regarding the confused criticism presented by Sven Mil, I feel sorry for the guy. It must feel bizarre to try so hard and get nothing out of it. Wasted effort, unless he finally understands GP’s idea. Let’s hope so. 🙂

  278. 278
    OLV says:

    This OP reminds us of this fact:

    “biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.”

    “It is, from all points of view, amazing.”

    “Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.”

    “And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer it must be for a system like NF-kB? Or, for that, like any cellular complex system?”

    “Do you still have any doubts?”

  279. 279
    OLV says:

    So far only a confused commenter (Sven Mil) has attempted unsuccessfully to present a counter argument.

    The last comment by Sven Mil (@267) was clearly responded @270 and @272.

    Is there another reader that would like to present contrarian arguments?

    [crickets]

  280. 280
    OLV says:

    Here’re some interesting papers cited in this OP:

    The Human Transcription Factors

    Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

    TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

Selectivity of the NF-kB Response

30 years of NF-kB: a blossoming of relevance to human pathobiology

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

NF-kB oscillations translate into functionally related patterns of gene expression

NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

  281. 281
    OLV says:

    Some papers cited in the comments:

    @3:

    Two of the papers I quote in the OP:

    Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

    and:

    NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

    are really part of a research topic:

    Understanding Immunobiology Through The Specificity of NF-kB

    including 8 very interesting and very recent papers about NF-kB, at Frontiers in Immunology.

    Here are the titles:

    Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB

An NF-kB Activity Calculator to Delineate Signaling Crosstalk: Type I and II Interferons Enhance NF-kB via Distinct Mechanisms

    Cellular Specificity of NF-kB Function in the Nervous System

Immune Differentiation Regulator p100 Tunes NF-kB Responses to TNF

Techniques for Studying Decoding of Single Cell Dynamics

    NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration (quoted in the OP)

    Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle (quoted in the OP)

    Signal Distortion: How Intracellular Pathogens Alter Host Cell Fate by Modulating NF-kB Dynamics

    +++++++

    @13:

Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB

    @15:

    Transcription factor oscillations in neural stem cells: Implications for accurate control of gene expression

    @17:

    Introduction to the Thematic Minireview Series: Chromatin and transcription

    @20:

Cellular Specificity of NF-kB Function in the Nervous System

    @21:

    Transcriptional Control of Synaptic Plasticity by Transcription Factor NF-kB

    @29:

    Single-molecule dynamics and genome-wide transcriptomics reveal that NF-kB (p65)-DNA binding times can be decoupled from transcriptional activation

    @52:

    Lnc-ing inflammation to disease

    @67

    Long non-coding RNA: a versatile regulator of the nuclear factor-kB signalling circuit

    @96

    The Impact of Transposable Elements in Genome Evolution and Genetic Instability and Their Implications in Various Diseases

    @131

    Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears

    @133

    Genetic diversity of CHC22 clathrin impacts its function in glucose metabolism

    Environmental contaminants modulate the transcriptional activity of polar bear (Ursus maritimus) and human peroxisome proliferator-activated receptor alpha (PPARA)

    Evolutionary history and palaeoecology of brown bear in North-East Siberia re-examined using ancient DNA and stable isotopes from skeletal remains

    @139

    Polar bear evolution is marked by rapid changes in gene copy number in response to dietary shift

    @192

    Crosstalk between NF-kB and Nucleoli in the Regulation of Cellular Homeostasis

    The Crosstalk of Endoplasmic Reticulum (ER) Stress Pathways with NF-kB: Complex Mechanisms Relevant for Cancer, Inflammation and Infection.

    @193

    Transcription factor NF-kB in a basal metazoan, the sponge, has conserved and unique sequences, activities, and regulation

    @194

    On chaotic dynamics in transcription factors and the associated effects in differential gene regulation

  282. 282
    OLV says:

    Off topic:

When I read a paper that mentions proteins, GP’s quantitative method automatically comes to mind. 🙂

In the following text several proteins are mentioned.

The article claims that most of them are highly conserved across numerous biological systems.

I wonder what the genetic regulatory mechanisms associated with these proteins are.

    Here’s the text:

    In essence, SAC is a cellular signaling pathway. Multiple mitotic kinases and their substrates are involved in this signaling. Therefore, the correct position of specific kinases to its substrates is of great importance for the functional integrity of the SAC. We envision the kinetochore localization of SAC factors may serve several functions. First, the kinetochore localization of Mps1 kinase (and Bub1, Plk1 kinase and CDK1-Cyclin B) positions the kinase close to their substrates (i.e., Knl1). Second, the kinetochore localization of Bub1 serves as a scaffold to recruit its downstream factors such as BubR1, Mad1/Mad2 and RZZ. Last, the kinetochore localization of Mps1 and Bub1 may facilitate their own activation due to the higher local concentration at kinetochore.

    The hierarchical recruitment pathway of SAC is becoming elucidated gradually. In brief, Aurora B activity boosts the kinetochore recruitment and activation of Mps1. Then, Mps1 phosphorylates Knl1, and in turn, phosphorylated Knl1 recruits Bub1/Bub3. Bub1 works as a scaffold to recruit BubR1/Bub3, Mad1/Mad2, RZZ and Cdc20. Despite important progress, many outstanding questions remain. For example, an exact molecular delineation of how Aurora B activity and ARHGEF17 promote Mps1 kinetochore recruitment remains elusive. Future studies to address these questions will definitely deepen our understanding on SAC signaling. Advanced protein structural analyses, protein-protein interaction interface delineation and protein localization dynamics analyses using super-resolution imaging tool combination with optogenetic operation will pave our way in future.

    Recent Progress on the Localization of the Spindle Assembly Checkpoint Machinery to Kinetochores

    Cells. 2019 Mar; 8(3): 278.
    doi: 10.3390/cells8030278
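Purely as an illustration, the hierarchical recruitment described in the excerpt above can be summarized as a small dependency graph in Python; the ordering follows the quoted text only, and the factor names are as given there:

# Each key recruits (or activates) the listed downstream factors, per the excerpt above.
sac_recruitment = {
    "Aurora B": ["Mps1"],                    # Aurora B activity boosts Mps1 recruitment/activation
    "Mps1": ["phospho-Knl1"],                # Mps1 phosphorylates Knl1
    "phospho-Knl1": ["Bub1/Bub3"],           # phosphorylated Knl1 recruits Bub1/Bub3
    "Bub1": ["BubR1/Bub3", "Mad1/Mad2", "RZZ", "Cdc20"],  # Bub1 acts as a scaffold
}

def downstream(factor, graph=sac_recruitment, seen=None):
    # Collect every factor recruited downstream of the given one.
    seen = set() if seen is None else seen
    for target in graph.get(factor, []):
        if target not in seen:
            seen.add(target)
            downstream(target.split("/")[0], graph, seen)
    return seen

print(sorted(downstream("Aurora B")))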

  283. 283
    OLV says:

    In response to the comment @279 we hear only the sound of silence.
    🙂

  284. 284
    jawa says:

    OLV @279:

    You should be patient. Sven Mil is probably a very busy scientist, hence he can’t comment here on demand. You have to wait. It’s possible he’s related to professors Art Hunt or Larry Moran. 🙂

  285. 285
    OLV says:

    The Human Transcription Factors

    Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

    TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

Selectivity of the NF-kB Response

30 years of NF-kB: a blossoming of relevance to human pathobiology

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

NF-kB oscillations translate into functionally related patterns of gene expression

NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

30 years of NF-kB: a blossoming of relevance to human pathobiology

    +++++++

    @3:

    Two of the papers I quote in the OP:

    Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

    and:

    NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

    are really part of a research topic:

    Understanding Immunobiology Through The Specificity of NF-kB

    including 8 very interesting and very recent papers about NF-kB, at Frontiers in Immunology.

    Here are the titles:

    Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB

An NF-kB Activity Calculator to Delineate Signaling Crosstalk: Type I and II Interferons Enhance NF-kB via Distinct Mechanisms

    Cellular Specificity of NF-kB Function in the Nervous System

Immune Differentiation Regulator p100 Tunes NF-kB Responses to TNF

Techniques for Studying Decoding of Single Cell Dynamics

    NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration (quoted in the OP)

    Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle (quoted in the OP)

    Signal Distortion: How Intracellular Pathogens Alter Host Cell Fate by Modulating NF-kB Dynamics

    +++++++

    @13:

Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB

    @15:

    Transcription factor oscillations in neural stem cells: Implications for accurate control of gene expression

    @17:

    Introduction to the Thematic Minireview Series: Chromatin and transcription

    @20:

Cellular Specificity of NF-kB Function in the Nervous System

    @21:

    Transcriptional Control of Synaptic Plasticity by Transcription Factor NF-kB

    @29:

    Single-molecule dynamics and genome-wide transcriptomics reveal that NF-kB (p65)-DNA binding times can be decoupled from transcriptional activation

    @52:

    Lnc-ing inflammation to disease

    @67

    Long non-coding RNA: a versatile regulator of the nuclear factor-kB signalling circuit

    @96

    The Impact of Transposable Elements in Genome Evolution and Genetic Instability and Their Implications in Various Diseases

    @131

    Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears

    @133

    Genetic diversity of CHC22 clathrin impacts its function in glucose metabolism

    Environmental contaminants modulate the transcriptional activity of polar bear (Ursus maritimus) and human peroxisome proliferator-activated receptor alpha (PPARA)

    Evolutionary history and palaeoecology of brown bear in North-East Siberia re-examined using ancient DNA and stable isotopes from skeletal remains

    @139

    Polar bear evolution is marked by rapid changes in gene copy number in response to dietary shift

    @192

    Crosstalk between NF-kB and Nucleoli in the Regulation of Cellular Homeostasis

    The Crosstalk of Endoplasmic Reticulum (ER) Stress Pathways with NF-kB: Complex Mechanisms Relevant for Cancer, Inflammation and Infection.

    @193

    Transcription factor NF-kB in a basal metazoan, the sponge, has conserved and unique sequences, activities, and regulation

    @194

    On chaotic dynamics in transcription factors and the associated effects in differential gene regulation

    @209

    Conservation and divergence of p53 oscillation dynamics across species

    @220

    The functional analysis of MicroRNAs involved in NF-kB signaling.

    @222

gga-miR-146c Activates TLR6/MyD88/NF-kB Pathway through Targeting MMP16 to Prevent Mycoplasma Gallisepticum (HS Strain) Infection in Chickens

Temporal characteristics of NF-kB inhibition in blocking bile-induced oncogenic molecular events in hypopharyngeal cells

@223

The Regulation of NF-kB Subunits by Phosphorylation

The Ubiquitination of NF-kB Subunits in the Control of Transcription

A Role for NF-kB in Organ Specific Cancer and Cancer Stem Cells

    @226

    Veins and Arteries Build Hierarchical Branching Patterns Differently: Bottom-Up versus Top-Down

    @236

    Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function

    @237

    Computational Biology Solutions to Identify Enhancers-target Gene Pairs

    @245

    Displacement of the transcription factor B reader domain during transcription initiation

    @246

    Design Principles Of Mammalian Transcriptional Regulation

    @247

    Dynamic interplay between enhancer–promoter topology and gene activity

    @248

    A genome disconnect

    Highly rearranged chromosomes reveal uncoupling between genome topology and gene expression

    Does rearranging chromosomes affect their function?

    @256

The Met1-Linked Ubiquitin Machinery: Emerging Themes of (De)regulation

    @257

    Nuclear Transcription Factors in the Mitochondria: A New Paradigm in Fine-Tuning Mitochondrial Metabolism.

    @258

    Regulation of NF-kB signalling cascade by immunophilins

    Biological Actions of the Hsp90-binding Immunophilins FKBP51 and FKBP52

    @264

    Transcription-driven chromatin repression of Intragenic transcription start sites

    Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function

    Computational Biology Solutions to Identify Enhancers-target Gene Pairs

    Detection of condition-specific marker genes from RNA-seq data with MGFR

    Enhancer RNAs: Insights Into Their Biological Role

    Epigenetic control of early dendritic cell lineage specification by the transcription factor IRF8 in mice

    Competitive endogenous RNA is an intrinsic component of EMT regulatory circuits and modulates EMT

    Delta Like-1 Gene Mutation: A Novel Cause of Congenital Vertebral Malformation

    @266

    Widespread roles of enhancer-like transposable elements in cell identity and long-range genomic interactions

    @271

    Antisense Transcription in Loci Associated to Hereditary Neurodegenerative Diseases

    @282

    Recent Progress on the Localization of the Spindle Assembly Checkpoint Machinery to Kinetochores

  286. 286
    OLV says:

    GP,

    Off topic: the plot thickens…

More ubiquitin-related stuff? 🙂

    How Does SUMO Participate in Spindle Organization?
    https://www.mdpi.com/2073-4409/8/8/801

    The ubiquitin-like protein SUMO is a regulator involved in most cellular mechanisms. Recent studies have discovered new modes of function for this protein. Of particular interest is the ability of SUMO to organize proteins in larger assemblies, as well as the role of SUMO-dependent ubiquitylation in their disassembly. These mechanisms have been largely described in the context of DNA repair, transcriptional regulation, or signaling, while much less is known on how SUMO facilitates organization of microtubule-dependent processes during mitosis. Remarkably however, SUMO has been known for a long time to modify kinetochore proteins, while more recently, extensive proteomic screens have identified a large number of microtubule- and spindle-associated proteins that are SUMOylated. The aim of this review is to focus on the possible role of SUMOylation in organization of the spindle and kinetochore complexes. We summarize mitotic and microtubule/spindle-associated proteins that have been identified as SUMO conjugates and present examples regarding their regulation by SUMO. Moreover, we discuss the possible contribution of SUMOylation in organization of larger protein assemblies on the spindle, as well as the role of SUMO-targeted ubiquitylation in control of kinetochore assembly and function. Finally, we propose future directions regarding the study of SUMOylation in regulation of spindle organization and examine the potential of SUMO and SUMO-mediated degradation as target for antimitotic-based therapies.

  287. 287
    gpuccio says:

    OLV:

    Yes, SUMO is a very interesting “side actor” in the already extremely complex ubiquitin system! 🙂

    By the way, thank you for the very detailed summaries, my friend. 🙂

  288. 288
    bill cole says:

Also, as far as I understand, SUMO tags must be cleaved (removed) prior to ubiquitin-guided protein degradation.

  289. 289
    OLV says:

    GP @287:

    My pleasure.

  290. 290
    OLV says:

    Bill Cole @288:

Any idea how that cleaving mechanism is established and activated?

  291. 291
    OLV says:

    NF-kB

    Hepatoprotective Effects of Morchella esculenta against Alcohol-Induced Acute Liver Injury in the C57BL/6 Mouse Related to Nrf-2 and NF-kB Signaling

    A novel curcumin analog inhibits canonical and non-canonical functions of telomerase through STAT3 and NF-kB inactivation in colorectal cancer cells

    Validation of the prognostic value of NF-kB p65 in prostate cancer: A retrospective study using a large multi-institutional cohort of the Canadian Prostate Cancer Biomarker Network

  292. 292
    bill cole says:

OLV,
It is a cleaving enzyme, so it is transcribed at some interval. If it does not work properly, it can potentially be responsible for certain diseases. As far as I can tell, regulation is through transcription rates. Gpuccio, do you agree?

  293. 293
  294. 294
    OLV says:

    Bill Cole,
    Thanks.

  295. 295
    OLV says:

NF-kB signaling is quite simple. 🙂

  296. 296
  297. 297
    jawa says:

    Popular Posts (Last 30 Days)
    Now Steve Pinker is getting #MeToo’d, at Inside… (2,614)
    Controlling the waves of dynamic, far from… (2,330)
    Atheism’s problem of warrant (–>… (1,850)
    Chemist James Tour calls time out on implausible… (1,209)
    Are extinctions evidence of a divine purpose in life? (1,196)

  298. 298
    OLV says:

    NF-kB is all over the map. 🙂

    It’s funny that before this OP I didn’t notice this NF-kB, but now it seems to pop up in many papers.

  299. 299
    PeterA says:

    I like the poetic way this OP ends:

    Is this the working of a machine?

    Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.

    But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.

    It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.

    It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

    It is, from all points of view, amazing.

    Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.

    And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer it must be for a system like NF-kB? Or, for that, like any cellular complex system?

    Do you still have any doubts?

    I would add another question: any objection?

  300. 300
    jawa says:

    PeterA,

Perhaps Sven Mil will answer all those questions next time he comes back to respond to GP’s comments @270 and @272. 🙂

    Maybe Dr Art Hunt or Dr Larry Moran could assist Sven Mil to write a coherent objection to your comment @299. 🙂

    Just wait… be patient. 🙂

  301. 301
  302. 302
  303. 303
    PeterA says:

    GP @2:

    “It should be rather obvious that, if the true purpose of biological beings were to achieve the highest survival and fitness, as neo-darwinists believe, life should have easily stopped at prokaryotes.”

    That’s an interesting observation indeed.

    Far from stopping at the comfortable fitness level of prokaryotes, evolution produced a mind boggling information jump to eukaryotes!
    How come? How can one explain that?
    If my children and grandchildren ask me why and how that phenomenal jump occurred, what should I tell them?

  304. 304
    PavelU says:

    PeterA:

    “If my children and grandchildren ask me why and how that phenomenal jump occurred, what should I tell them?”

    Tell them that it’s widely accepted that it all resulted from long evolutionary processes, mainly RV+NS.

  305. 305
    ET says:

    PavelU:

    Tell them that it’s widely accepted that it all resulted from long evolutionary processes, mainly RV+NS.

    Why tell them a lie?

  306. 306
    PavelU says:

    ET,
    That’s what is written in the textbooks. Are you implying that the textbooks are incorrect? Really?
    There’s abundant literature supporting RV+NS.

  307. 307
    ET says:

    PavelU- There isn’t any literature supporting the claim that NS, which includes RV, can do anything beyond merely changing allele frequency over time, within a population. Speculation based on the assumption abounds in textbooks. But no one knows how to test the claim that NS, drift or any other blind and mindless process can actually do as advertised.

    That is why probability arguments exist. There isn’t any actual data, observations or experiments to support it.

    If the textbooks claim otherwise then they are promoting lies, falsehoods and misrepresentations.

  308. 308
    jawa says:

    ET,

    I like your comment. Good point. But I doubt PavelU will understand it, because the poor guy seems oblivious. He should wake up and smell the flowers in the garden. 🙂

  309. 309
    ET says:

Jawa: “Their” argument is and always has been “that X exists and we know (wink, wink) it wasn’t via intelligent design. It’s just a matter of time before we figure it all out.” It does make for a nice narrative, though. I was impressed when I went to the Smithsonian and saw the short movie on how life’s diversity arose. But it all seemed so Lamarckian, as it still does. They always talk about physical transformations without any discussion of the mechanisms capable of carrying them out. There is never any genetic link.

    And that is very telling

  310. 310
    PavelU says:

    ET,

    Here’s something for you and your friends to learn from before you write your next comment:

    A New Clue to How Life Originated
    A long-standing mystery about early cells has a solution—and it’s a rather magical one.

    https://www.theatlantic.com/science/archive/2019/08/interlocking-puzzle-allowed-life-emerge/595945/

  311. 311
    bornagain77 says:

    Last part of PavelU’s cited article:

    She’s now looking into what happens after the protocells assemble. Sure, there’s a compartment that contains the building blocks for making proteins and RNA. “But how do those individual building blocks bond to form the larger molecules?” she says. “It’s a very hard question.”

    To wit:

“We have no idea how to put this structure (a simple cell) together. ... So, not only do we not know how to make the basic components, we do not know how to build the structure even if we were given the basic components. So the gedanken (thought) experiment is this.
    Even if I gave you all the components. Even if I gave you all the amino acids. All the protein structures from those amino acids that you wanted. All the lipids in the purity that you wanted. The DNA. The RNA. Even in the sequence you wanted. I’ve even given you the code. And all the nucleic acids. So now I say, “Can you now assemble a cell, not in a prebiotic cesspool but in your nice laboratory?”. And the answer is a resounding NO! And if anybody claims otherwise they do not know this area (of research).”
    – James Tour: The Origin of Life Has Not Been Explained – 4:20 minute mark (The more we know, the worse the problem gets for materialists)
    https://youtu.be/r4sP1E1Jd_Y?t=255

    Origin of Life: An Inside Story – Professor James Tour – May 1, 2016
    Excerpt: “All right, now let’s assemble the Dream Team. We’ve got good professors here, so let’s assemble the Dream Team. Let’s further assume that the world’s top 100 synthetic chemists, top 100 biochemists and top 100 evolutionary biologists combined forces into a limitlessly funded Dream Team. The Dream Team has all the carbohydrates, lipids, amino acids and nucleic acids stored in freezers in their laboratories… All of them are in 100% enantiomer purity. [Let’s] even give the team all the reagents they wish, the most advanced laboratories, and the analytical facilities, and complete scientific literature, and synthetic and natural non-living coupling agents. Mobilize the Dream Team to assemble the building blocks into a living system – nothing complex, just a single cell. The members scratch their heads and walk away, frustrated…
    So let’s help the Dream Team out by providing the polymerized forms: polypeptides, all the enzymes they desire, the polysaccharides, DNA and RNA in any sequence they desire, cleanly assembled. The level of sophistication in even the simplest of possible living cells is so chemically complex that we are even more clueless now than with anything discussed regarding prebiotic chemistry or macroevolution. The Dream Team will not know where to start. Moving all this off Earth does not solve the problem, because our physical laws are universal.
    You see the problem for the chemists? Welcome to my world. This is what I’m confronted with, every day.“
    James Tour – leading Chemist
    http://www.uncommondescent.com.....nt-design/

    August 2019- Evidence from Quantum Biology for God being behind life
    https://uncommondescent.com/intelligent-design/if-you-can-reproduce-how-life-got-started-10-million-is-yours/#comment-681958

  312. 312
    ET says:

    Wow, PavelU- I had just finished reading that article about a half hour ago. You do realize that not just any membrane will do, right? You have to get nutrients in and waste out. You also have to be able to communicate with the different compartments. But most of all, without some internal replication mechanism, nothing will ever come from lipid bubbles with amino acids.

But yes, it is all interesting stuff and shows how desperate some people are.

  313. 313
    ET says:

But I digress. This at least seems to produce another catch-22: lipid bubbles can’t survive without amino acids, and the molecules of life cannot survive without some environmental barrier. And lipid bubbles present such a barrier.

So a cytoplasm filled with amino acids would, to some extent, create a stable barrier along with the raw materials needed to produce proteins. But not just any protein will do. And the method of producing them would be too slow to be effective, if it is even capable of it.

Barriers, pores, pumps and gates

A simple, completely exclusive barrier between the intracellular and extracellular environments is not by itself much use in homeostasis. The barrier with the outside world must allow in those things necessary for growth and development whilst excluding everything else. In short it must be selectively permeable. Pure lipid would be impermeable to most water-soluble substances, so a plasma membrane contains channels and pores built from protein molecules to enable selected substances to enter (or leave) the cell. A pore or channel is a protein with a hydrophobic (water hating, lipid loving) exterior which can sit happily in the membrane and a hydrophilic (water loving) centre through which water and small water soluble molecules can pass. If such a molecule is inserted into a plasma membrane so that one end sticks out of the cell and the other end sticks out into the cell interior, water soluble compounds can cross the membrane without ever coming into contact with the lipid. A plasma membrane almost always includes water, K+, Cl- and Ca2+ channels; many cell types also have Na+ channels. Ion channels are also (usually) gated i.e. they may be open or closed.

From a design standpoint this all makes sense: this foundational requirement, the ready-made selectively permeable membrane. It has a cytoplasm teeming with amino acids for structural support of that membrane. And they are also raw materials for making proteins, the proteins used in creating the pores and channels.

  314. 314
  315. 315
    OLV says:

    In the papers cited @314 note the following topics associated with the current OP:

    NF-kB down-regulation

    NF-kB up-regulation

    NF-kB activation

    NF-kB inactivation

    Inhibition of NF-kB Signaling Pathway

  316. 316
    OLV says:

    GP,
    Please, help me with this:

A lesson in homology?
    https://doi.org/10.7554/eLife.48335
    https://elifesciences.org/articles/48335#x339477c1

    The same genes and signaling pathways control the formation of limbs in vertebrates, arthropods and cuttlefish.

Does that mean that something else is involved in this complex process besides the genes and the signaling pathways?

    Thanks.

  317. 317
    OLV says:

    GP,

    Here’s a major hint to answer the question @316:

    According to the ‘co-option hypothesis’, a developmental program evolved in the common ancestor of the bilaterians – a group that includes most animals except for primitive forms like sponges – to shape an appendage that later disappeared during evolution

    I’m beginning to like that cool co-option idea.

    🙂

Have you seen any coherent explanation of how it all works?

    Could this be a potential topic for a future OP?

  318. 318
    gpuccio says:

    OLV at #316-317:

I had a look at the paper. I am not sure what the problem is.

It is not surprising, IMO, that some master TFs have an important role in the spatial definition of limbs and appendages in different types of animals. What’s the problem there? Establishing three-dimensional axes seems to be a very basic engineering tool; I am not surprised at all that it is basically implemented by the same TF families in different beings.

Of course, that does not explain at all the differences between limbs: those must be explained by other types of information, other genes or other epigenetic networks.

The establishment of axes and symmetries is one thing. The morphological definition of limbs is another thing.

It is rather clear that biological engineering takes place at different, well-ordered levels. Some functions remain similar and are conserved; others need completely new implementations to generate diversity of function.

    I am not sure of the supposed role of “cooption” in all this.

  319. 319
    gpuccio says:

    To all:

    At #118 I have mentioned the complexity of the CBM signalosome, whose role in T-cell receptor (TCR)-mediated T-cell activation is fundamental. I have also mentioned how CARD11 is a wonderful example of a very big and complex protein exhibiting a huge information jump in vertebrates (see also the additional figure at the end of the OP).

    Well, here is another very recent paper about CARD11:

    Coordinated regulation of scaffold opening and enzymatic activity during CARD11 signaling.

    https://www.ncbi.nlm.nih.gov/pubmed/31391255

    Abstract:

    The activation of key signaling pathways downstream of antigen receptor engagement is critically required for normal lymphocyte activation during the adaptive immune response. CARD11 is a multidomain signaling scaffold protein required for antigen receptor signaling to NF-κB, c-Jun N-terminal Kinase (JNK), and mTOR. Germline mutations in the CARD11 gene result in at least four types of primary immunodeficiency, and somatic CARD11 gain-of-function mutations drive constitutive NF-κB activity in Diffuse Large B Cell Lymphoma and other lymphoid cancers. In response to antigen receptor triggering, CARD11 transitions from a closed, inactive state to an open, active scaffold that recruits multiple signaling partners into a complex to relay downstream signaling. However, how this signal-induced CARD11 conversion occurs remains poorly understood. Here we investigate the role of IE1, a short regulatory element in the CARD11 Inhibitory Domain, in the CARD11 signaling cycle. We find that IE1 controls the signal-dependent Opening Step that makes CARD11 accessible to the binding of cofactors, including Bcl10, MALT1, and the HOIP catalytic subunit of the Linear Ubiquitin Chain Assembly Complex. Surprisingly, we find that IE1 is also required at an independent step for the maximal activation of HOIP and MALT1 enzymatic activity after cofactor recruitment to CARD11. This role of IE1 reveals that there is an Enzymatic Activation Step in the CARD11 signaling cycle that is distinct from the Cofactor Association Step. Our results indicate that CARD11 has evolved to actively coordinate scaffold opening and the induction of enzymatic activity among recruited cofactors during antigen receptor signaling.

    Interesting. Coordinated regulation. Actively coordinate scaffold opening and the induction of enzymatic activity. See also the very interesting Fig. 6.
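
    For readers who like to see the logic spelled out, here is a minimal state-machine sketch in Python of the cycle the abstract describes: Opening Step, Cofactor Association Step, Enzymatic Activation Step. Only the step names and the cofactors come from the abstract; every condition in the code is an invented placeholder, not the actual biochemistry.

        # Minimal state-machine sketch of the CARD11 signaling cycle as described
        # in the abstract. Purely illustrative; the gating conditions are invented.

        COFACTORS = {"Bcl10", "MALT1", "HOIP"}

        class CARD11:
            def __init__(self):
                self.state = "closed"     # closed, inactive scaffold
                self.bound = set()

            def opening_step(self, antigen_signal, ie1_functional):
                # Per the abstract: IE1 controls the signal-dependent Opening Step.
                if antigen_signal and ie1_functional:
                    self.state = "open"

            def cofactor_association_step(self, cofactor):
                if self.state == "open" and cofactor in COFACTORS:
                    self.bound.add(cofactor)

            def enzymatic_activation_step(self, ie1_functional):
                # Per the abstract: IE1 is also required, at an independent step,
                # for maximal HOIP/MALT1 enzymatic activity after recruitment.
                if self.state == "open" and {"MALT1", "HOIP"} <= self.bound and ie1_functional:
                    return "downstream NF-kB / JNK / mTOR signaling"
                return "no (or submaximal) signaling"

        scaffold = CARD11()
        scaffold.opening_step(antigen_signal=True, ie1_functional=True)
        for c in ("Bcl10", "MALT1", "HOIP"):
            scaffold.cofactor_association_step(c)
        print(scaffold.enzymatic_activation_step(ie1_functional=True))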

    Oh, and please note the use of the words “has evolved to” in the end, just to mean “is able to”. 🙂

  320. 320
    OLV says:

    GP @319:

    please note the use of the words “has evolved to” in the end, just to mean “is able to”.

    🙂

  321. 321
    OLV says:

    GP @318:
    Excellent explanation, as usual. Thanks.

  322. 322
    OLV says:

    GP @319:

    Coordinated regulation. Actively coordinate scaffold opening and the induction of enzymatic activity.

    All that resulted from RV+NS+T?

    🙂

  323. 323
    OLV says:

    That’s a very interesting paper that GP posted @319.

    Here’s the link again for those who don’t want to scroll up to GP’s original comment:

    http://m.jbc.org/content/early.....d=31391255

  324. 324
    OLV says:

    Structures of autoinhibited and polymerized forms of CARD9 reveal mechanisms of CARD9 and CARD11 activation
    Nat Commun. 2019; 10: 3070.
    doi: 10.1038/s41467-019-10953-z
    https://www.nature.com/articles/s41467-019-10953-z

    While significant questions remain, in particular in understanding the regulatory role of the larger coiled-coil domain, our study provides a number of critical steps towards a full structural description of regulation within the protein family.

  325. 325
    OLV says:

    Communication codes in developmental signaling pathways
    Pulin Li, Michael B. Elowitz
    Development 2019 146: dev170977
    doi: 10.1242/dev.170977

    A handful of core intercellular signaling pathways play pivotal roles in a broad variety of developmental processes. It has remained puzzling how so few pathways can provide the precision and specificity of cell-cell communication required for multicellular development. Solving this requires us to quantitatively understand how developmentally relevant signaling information is actively sensed, transformed and spatially distributed by signaling pathways. Recently, single cell analysis and cell-based reconstitution, among other approaches, have begun to reveal the ‘communication codes’ through which information is represented in the identities, concentrations, combinations and dynamics of extracellular ligands. They have also revealed how signaling pathways decipher these features and control the spatial distribution of signaling in multicellular contexts. Here, we review recent work reporting the discovery and analysis of communication codes and discuss their implications for diverse developmental processes.

    “communication codes”?
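
    A rough way to picture such a “communication code”: the word sent through a pathway is not just the ligand’s identity, but also its concentration, combination and dynamics. The toy Python sketch below is only an illustration of that idea; the ligand name, thresholds and responses are all made up.

        # Toy illustration: the same ligand can mean different things to a target
        # cell depending on amplitude and dynamics. All names and thresholds are
        # invented for illustration.

        from dataclasses import dataclass

        @dataclass
        class Signal:
            ligand: str            # identity
            concentration: float   # amplitude
            dynamics: str          # e.g. "sustained" or "pulsed"

        def target_cell_response(sig: Signal) -> str:
            # Two invented readings of one hypothetical ligand.
            if sig.ligand == "ligand_L":
                if sig.dynamics == "sustained" and sig.concentration > 0.5:
                    return "differentiate"
                if sig.dynamics == "pulsed":
                    return "proliferate"
            return "no response"

        print(target_cell_response(Signal("ligand_L", 0.8, "sustained")))  # differentiate
        print(target_cell_response(Signal("ligand_L", 0.8, "pulsed")))     # proliferate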

  326. 326
    OLV says:

    Deletion of NFKB1 enhances canonical NF-kB signaling and increases macrophage and myofibroblast content during tendon healing

    https://www.nature.com/articles/s41598-019-47461-5

    better understanding the downstream processes mediated by NF-kB signaling could reveal candidate pathways that could be viably targeted by therapeutics.

  327. 327
    jawa says:

    This is interesting:

    Popular Posts (Last 30 Days)

    1. Now Steve Pinker is getting #MeToo’d, at Inside… (2,534)
    Visited 2,679 times, 32 visits today
    Posted on July 17
    1 comment

    2. Controlling the waves of dynamic, far from… (1,292)
    Visited 2,543 times, 79 visits today
    Posted on July 10
    326 comments

  328. 328
    jawa says:

    Hey!
    Has anybody seen Sven Mill lately?
    Will he ever come back to respond to GP’s comments @270 and @272?
    Did he run out of objections?
    Maybe Dr Art Hunt or Dr Larry Moran could assist him with writing a coherent counterargument?
    🙂

  329. 329
    jawa says:

    This is interesting, isn’t it?

    Popular Posts (Last 30 Days)

    Controlling the waves of dynamic, far from… (1,304)
    (Visited 2,758 times, 250 visits today) Jul 10, 329 replies

    Are extinctions evidence of a divine purpose in life? (1,272)
    (Visited 1,272 times, 37 visits today). Aug 4, 11 replies

    Chemist James Tour calls time out on implausible… (1,140)
    (Visited 1,238 times, 9 visits today) Aug 19, 16 replies

    Apes and humans: How did science get so detached… (959)

    “Descartes’ mind-body problem” makes nonsense of materialism (947)

  330. 330
    gpuccio says:

    Jawa at #329:

    Thank you for the statistics!

    It’s good to see that the thread is still going rather well, even if I have been rather busy with other things. 🙂

  331. 331
    gpuccio says:

    OLV at #325:

    Interesting paper.

    Indeed, communication between cells, often very distant and different cells in multicellular organisms, requires at least three different levels of coding:

    1) The message: a specific requirement to be sent to the target cells from the cells that originate the signals. This is a symbolic coding, because of course the “messenger” molecules, be they hormones, cytokines, or anything else, have really nothing to do with the response they are destined to evoke in the end. They are symbolic messengers, and nothing else. Moreover, the coding implies not only the type of messenger molecules, but also their concentration, distribution in the organism, and possibly modifications or interactions with other structures, as we have seen for example in the OP dedicated to the extracellular fluid.

    2) First decoding step and transmission to the nucleus. This is usually a very complex step, where multiple levels of decoding interact in an extremely articulated way, often implying a lot of control of random noise and chaotic components, as seen in this thread. Moreover, many layers are superimposed here, starting from membrane receptors, their modulations, their immediate translation systems, and then the more complex pathways that translate the partially decoded message to the nuclear environment. Please note that at this level the message has already been partially decoded, but is still transmitted in rather symbolic form, usually as chains of molecular interactions that can assume multiple configurations and forms. The NF-kB system described in the OP is a very good example, with its many semiotic polymorphisms.

    3) Finally, the ultimate decoding takes place in the nucleus, where the complex codes and subcodes of TFs, with their manifold interactions and tweakings, must in some way transform the initial message into an effective modulation, in space and time and intensity, of the transcription of multiple specific genes (often hundreds, or even thousands of them). The final result in cell behaviour modifications will be the controlled, and usually very efficient, consequence of the original message started by the activity of many distant cells in the organism.
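
    If it helps to see the three levels as a single pipeline, here is a deliberately naive Python sketch. Every function stands in for enormously complex cellular machinery, and all names, thresholds and gene lists are placeholders, not real biology.

        # Naive sketch of the three coding levels described above. Every function
        # here stands in for enormously complex machinery; names are placeholders.

        def encode_message(requirement):
            # Level 1: originating cells emit symbolic messengers; identity,
            # concentration and distribution all carry information.
            return {"messenger": "cytokine_X", "concentration": 0.8,
                    "meaning": requirement}

        def transduce_to_nucleus(signal):
            # Level 2: receptors and pathways (for example the NF-kB system)
            # partially decode the message and relay it, still in symbolic form,
            # while filtering random noise.
            if signal["concentration"] > 0.5:      # crude stand-in for noise control
                return {"pathway": "NF-kB", "payload": signal["meaning"]}
            return None

        def transcriptional_response(nuclear_signal):
            # Level 3: TFs and their subcodes turn the message into a modulated
            # transcription program over many genes.
            if nuclear_signal is None:
                return {}
            return {gene: "up" for gene in ("geneA", "geneB", "geneC")}

        message = encode_message("activate inflammatory response")
        print(transcriptional_response(transduce_to_nucleus(message)))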

    All that is certainly beautiful and fascinating. But also amazing. Very much indeed.

  332. 332
    gpuccio says:

    To all:

    Our good friend, the CBM signalosome, discussed in some detail in the OP and in the thread, has recently been the object of a very interesting “Research Topic” in Frontiers in Immunology.

    Here is the link to the 15 articles:

    Research Topic: CARMA Proteins: Playing a Hand of Four CARDs

    https://www.frontiersin.org/research-topics/6853/carma-proteins-playing-a-hand-of-four-cards#articles

    And here is the Editorial:

    Editorial: CARMA Proteins: Playing a Hand of Four CARDs

    https://www.frontiersin.org/articles/10.3389/fimmu.2019.01217/full

    A few thoughts:

    Over the past 20 years, enhanced analyses of tumor-specific genomic alterations coupled with elegant biochemical approaches have helped to map essential signaling pathways in healthy and malignant cells. For example, the B cell lymphoma/leukemia 10 (BCL10) gene was identified in 1999 from a recurrent chromosomal translocation noted in non-Hodgkin lymphomas that arise in mucosa-associated lymphoid tissue (MALT). These studies demonstrated that BCL10 could oligomerize via its caspase recruitment domain (CARD) and induce robust activation of nuclear factor of kappaB (NF-κB), a critical family of transcription factors first implicated in lymphocyte differentiation. In <3 years, multiple groups discovered that BCL10 must partner with one of four CARD-containing scaffold proteins (CARD9, CARD10, CARD11, and CARD14) and the MALT1 paracaspase (also discovered from a MALT lymphoma-derived chromosomal translocation) to drive NF-κB signaling in response to various receptor-mediated inputs.

    Although we now appreciate that tissue-specific expression of CARMA/CARD proteins dictates function and underlying pathology of associated diseases, this collection also underscores the ubiquity and significance of the CBM signalosome as a central governor of receptor-mediated signaling to NF-κB and additional outputs important for cell proliferation, survival, differentiation, and function.

    Very interesting.

  333. 333
    OLV says:

    GP @332:

    Our good friend, the CBM signalosome, discussed in some detail in the OP and in the thread, has recently been the object of a very interesting “Research Topic” in Frontiers in Immunology.

    This is very interesting indeed.

    Thanks!
