Intelligent Design

Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation.


I have recently commented on another thread:

about a paper that (very correctly) describes cells as dynamic, far from equilibrium systems, rather than as “traditional” machines.

That is true. But, of course, the cell implements the same functions as complex machines do, and much more. My simple point is that, to do that, you need much greater functional complexity than you need to realize a conventional machine.

IOWs, dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. I have commented on those ideas in the mentioned thread, at posts #5 and #8, and I have quoted at posts #11 and #12 a couple of interesting and pertinent papers, introducing the important concept of robustness: the ability to achieve reliable functional results in spite of random noise and disturbing variation.

In this OP, I would like to present in some detail a very interesting system that shows very well what we can understand, at present, of that kind of amazing systems.

The system I will discuss here is an old friend: it is the NF-kB system of transcription factors (nuclear factor kappa-light-chain-enhancer of activated B cells). We are speaking, therefore, of transcription regulation, a very complex topic that I have already discussed in some depth here:

I will briefly recall here that transcription regulation is the very complex process that allows cells to be completely different while using the same genomic information: IOWs, each type of cell “reads” the genes in the common genome differently, and that allows both the differentiation of cell types and the different responses within the same cell type.

Transcription regulation relies on many different levels of control, which are summarized in the above-quoted OP, but a key role is certainly played by Transcription Factors (TFs), proteins that bind DNA and act as activators or inhibitors of transcription at specific sites.

TFs are a fascinating class of proteins. There are a lot of them (1600–2000 in humans, almost 10% of all proteins), and they are usually medium-sized proteins, about 500 AAs long, containing at least one highly conserved domain, the DNA binding domain (DBD), and other, often less understood, functional components.

I quote again here a recent review about human TFs:

The Human Transcription Factors

The NF-kB system is a system of TFs. I have discussed it in some detail in the discussion following the Ubiquitin thread, but I will describe it in a more systematic way here.

In general, I will refer a lot to this very recent paper about it:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The NF-kB system relies essentially on 5 different TFs (see Fig. 1 A in the paper):

  1. RelA (551 AAs)
  2. RelB (579 AAs)
  3. c-Rel (619 AAs)
  4. p105/p50 (968 AAs)
  5. p100/p52 (900 AAs)

Those 5 TFs work by forming dimers, homodimers or heterodimers, for a total of 15 possible combinations, all of which have been found to work in the cell, even if some of them are much more common than others.

Then there are at least 4 inhibitor proteins, collectively called IkBs.

The mechanism is apparently simple enough. The dimers are inhibited by IkBs and therefore they remain in the cytoplasm in inactive form.

When an appropriate signal arrives at the cell and is received by a membrane receptor, the inhibitor (the IkB molecule) is phosphorylated and then ubiquitinated and detached from the complex. The phosphorylation is done by a protein complex called IKK. The free dimer can then migrate to the nucleus and localize there, where it can act as a TF, binding DNA.

This is the canonical activation pathway, summarized in Fig. 1. There is also a non canonical activation pathway, that we will not discuss for the moment.


Mechanism of NF-κB action. In this figure, the NF-κB heterodimer consisting of Rel and p50 proteins is used as an example. While in an inactivated state, NF-κB is located in the cytosol complexed with the inhibitory protein IκBα. Through the intermediacy of integral membrane receptors, a variety of extracellular signals can activate the enzyme IκB kinase (IKK). IKK, in turn, phosphorylates the IκBα protein, which results in ubiquitination, dissociation of IκBα from NF-κB, and eventual degradation of IκBα by the proteasome. The activated NF-κB is then translocated into the nucleus where it binds to specific sequences of DNA called response elements (RE). The DNA/NF-κB complex then recruits other proteins such as coactivators and RNA polymerase, which transcribe downstream DNA into mRNA. In turn, mRNA is translated into protein, resulting in a change of cell function.

Attribution: Boghog2 at English Wikipedia [Public domain]

Now, the purpose of this OP is to show, in greater detail, how this mechanism, apparently moderately simple, is indeed extremely complex and dynamic. Let’s see.

The stimuli.

First of all, we must understand what stimuli, arriving at the cell membrane, are capable of activating the NF-kB system. IOWs, what are the signals that work as inputs.

The main concept is: the NF-kB system is a central pathway activated by many stimuli:

  1. Inflammation
  2. Stress
  3. Free radicals
  4. Infections
  5. Radiation
  6. Immune stimulation

IOWs, a wide variety of aggressive stimuli can activate the system.

The extracellular signal usually arrives at the cell through specific cytokines, for example TNF or IL1, or through pathogen-associated molecules, like bacterial lipopolysaccharides (LPS). Of course there are different and specific membrane receptors, in particular IL-1R (for IL1), TNF-R (for TNF), and many TLRs (Toll-like receptors, for pathogen-associated structures). A special kind of activation is implemented, in B and T lymphocytes, by the immune activation of the specific receptors for antigen epitopes (the B cell receptor, BCR, and the T cell receptor, TCR).

The process through which the activated receptor can activate the NF-kB dimer is rather complex: it involves, in the canonical pathway, a macromolecular complex called IKK (IkB kinase) complex, comprising two catalytic kinase subunits (IKKa and IKKb) and a regulatory protein (IKKg/NEMO), and involving in multiple and complex ways the ubiquitin system. The non canonical pathway is a variation of that. Finally, a specific protein complex (CBM complex or CBM signalosome) mediates the transmission from the immune BCR or TCR to the canonical pathway. See Fig. 2:

From: NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Increased-activity-of-the-CARMA1-BCL10-MALT1-signalosome-drives-constitutive-NF-kB_fig2_324089636 [accessed 10 Jul, 2019]
Figure 3 – NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy
available via license: Creative Commons Attribution 4.0 International

I will not go into further details about this part, but those interested can have a look at this very good paper:

TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

In particular, see Figs. 1, 2, and 3.

In the end, as a result of the activation process, the IkB inhibitor is degraded by the ubiquitin system, and the NF-kB dimer is free to migrate to the nucleus.

An important concept is that this is a “rapid-acting” response system, because the dimers are already present, in inactive form, in the cytoplasm, and must not be synthesized de novo: so the system is ready to respond to the activating signal.

The response.

But what is the cellular response?

Again, there are multiple and complex possible responses.

Essentially, this system is a major regulator of innate and adaptive immune responses. As such, it has a central role in the regulation of inflammation, in immunity, in autoimmune processes, and in cancer.

Moreover, the NF-kB system is rather ubiquitous, and is present and active in many different cell types. And, as we have seen, it can be activated by different stimuli, in different ways.

So, the important point is that the response to activation must be (at least):

  1. Lineage-specific
  2. Stimulus-specific

IOWs, different cells must be able to respond differently, and each cell type must respond differently to different stimuli. That gives a wide range of possible gene expression patterns at the transcription level.

The following paper is a good review of the topic:

Selectivity of the NF-κB Response

For example, IL2 is induced by NF-kB activation in T cells, but not in B cells (lineage-specific response). Moreover, specific cell types can undergo specific, and often different, cell destinies after NF-kB activation: for example, NF-kB is strongly involved in the control and regulation of T and B cell development.

From:

30 years of NF-κB: a blossoming of relevance to human pathobiology

“B and T lymphocytes induce NF-κB in adaptive immune responses through the CARD11:Bcl10:MALT1 (CBM) complex (Hayden and Ghosh, 2008). Newly expressed genes promote lymphocyte proliferation and specific immune functions including antibody production by B cells and the generation of cytokines and other anti-pathogen responses by T cells.”

And, in the same cell type, certain promoters regulated by NF-kB require additional signaling (for example, in human dendritic cells the promoters for Il6, Il12b, and MCP-1 require additional p38-dependent histone phosphorylation to be activated), while others can be activated directly (stimulus-specific response).

So, to sum up:

  1. A variety of stimuli can activate the system in different ways
  2. The system itself has its complexity (different dimers)
  3. The response can be widely different, according to the cell type where it happens, and to the type of stimuli that have activated the system, and probably according to other complex variables.
  4. The possible responses include a wide range of regulations of inflammation, of the immune system, of cell specifications or modifications, and so on.

How does it work?

So, what do we know about the working of such a system?

I will ignore, for the moment, the many complexities of the activation pathways, both canonical and non canonical, the role of cytokines and receptors and IKK complexes, and the many facets of NEMO and of the involvement of the ubiquitin system.

For simplicity, we will start with the activated system: the IkB inhibitor has been released from the inactive complex in the cytoplasm, and some form of NF-kB dimer is ready to migrate to the nucleus.

Let’s remember that the purpose of this OP is to show that the system works as a dynamic, far from equilibrium system, rather than as a “traditional” machine. And that such a way to work is an even more amazing example of design and functional complexity.

To do that, I will rely mainly on the recent paper quoted at the beginning:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The paper is essentially about the NF-kB Target Selection Puzzle. IOWs, it tries to analyze what we know about the specificity of the response. How are specific patterns of transcription achieved after the activation of the system? What mechanisms allow the selection of the right genes to be transcribed (the targets) to implement the specific patterns according to cell type, context, and type of stimuli?

A “traditional” view of the system as a machine would try to establish rather fixed connections. For example, some type of dimer is connected to specific stimuli and evokes specific gene patterns. Or some other components modulate the effect of NF-kB, generating diversification and specificity of the response.

Well, those ideas are not completely wrong. In a sense, the system does work also that way. Dimer specificity has a role. Other components have a role. In a sense, but only in a sense, the system works as though it were a traditional machine, and uses some of the mechanisms that we find in the concept of a traditional biological machine.

But that is only a tiny part of the real thing.

The real thing is that the system really works as a dynamic, far from equilibrium system, harnessing huge random/stochastic components to achieve robustness and complexity and flexibility of behavior in spite of all those non finalistic parts.

Let’s see how that happens, at least for the limited understanding we have of it. It is important to consider that this is a system that has been studied a lot, for decades, because of its central role in so many physiological and pathological contexts, and so we know many things. But still, our understanding is very limited, as you will see.

So, let’s go back to the paper. I will try to summarize as simply as possible the main concepts. Anyone who is really interested can refer to the paper itself.

Essentially, the paper analyzes three important and different aspects that contribute to the selection of targets at the genomic level by our TFs (IOWs, our NF-kB dimers, ready to migrate to the nucleus). As the title itself summarizes, they are:

  1. Abundance
  2. Affinity
  3. Binding site availability

1. Abundance

Abundance here refers to two different variables: the abundance of NF-kB binding sites in the genome, and the abundance of nucleus-localized NF-kB dimers. Let’s consider them separately.

1a) Abundance of NF-kB Binding Sites in the genome:

It is well known that TFs bind specific sites in the genome. For NF-kB TFs, the following consensus kB site pattern has been found:

 5′-GGGRNWYYCC-3′

where R, W, Y, and N denote, respectively, purine, adenine or thymine, pyrimidine, and any nucleotide.

That simply means that any sequence corresponding to that pattern in the genome can, in principle, bind NF-kB dimers.
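To make the degeneracy of the consensus concrete, here is a minimal Python sketch (my own illustration, not from the paper) that expands the IUPAC codes of 5′-GGGRNWYYCC-3′ into a regular expression and scans an invented DNA fragment for matches:

```python
import re

# IUPAC degeneracy codes appearing in the kB consensus 5'-GGGRNWYYCC-3':
# R = A/G (purine), W = A/T, Y = C/T (pyrimidine), N = any base
IUPAC = {"R": "[AG]", "W": "[AT]", "Y": "[CT]", "N": "[ACGT]"}

def consensus_to_regex(consensus):
    """Translate a degenerate IUPAC consensus into a regex pattern."""
    return "".join(IUPAC.get(base, base) for base in consensus)

KB_SITE = re.compile(consensus_to_regex("GGGRNWYYCC"))

# A made-up DNA fragment containing one site that fits the pattern
fragment = "TTACGGGAATTTCCGATC"
matches = [m.group(0) for m in KB_SITE.finditer(fragment)]
print(matches)  # ['GGGAATTTCC']
```

Any 10-mer fitting this pattern is, in principle, a potential binding site, which is why the genome-wide count discussed below is so large.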

So the problem is: how many such sequences do exist in the human genome?

Well, a study based on RelA has estimated about 10^4 consensus sequences in the whole genome; but, as NF-kB dimers seem to bind even incomplete consensus sites, the total number of potential binding sites could be nearer to 10^6.

1b) Abundance of Nucleus-Localized NF-kB Dimers:

An estimate of the abundance of dimers in the nucleus after activation of the system is about 1.5 × 10^5 molecules, but again that is derived from studies of RelA only. Moreover, the number of molecules and the type of dimer can probably vary greatly according to cell type.

So the crucial variable, the ratio between binding sites and available dimers, which could help us understand the degree of site saturation in the nucleus, remains rather undetermined, and it seems very likely that it can vary a lot in different circumstances.
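As a back-of-envelope illustration of why this ratio matters, we can simply divide the rough figures quoted above (the estimates are the paper's RelA-based numbers; the arithmetic is mine):

```python
# Rough ratio of nucleus-localized dimers to potential kB sites,
# using the RelA-based estimates quoted in the text.
nuclear_dimers = 1.5e5   # estimated dimers in the nucleus after activation
sites_low = 1e4          # strict consensus sites in the genome
sites_high = 1e6         # including incomplete/degenerate sites

print(f"dimers per site (strict consensus): {nuclear_dimers / sites_low:.0f}")   # 15
print(f"dimers per site (all potential):    {nuclear_dimers / sites_high:.2f}")  # 0.15
```

Depending on which estimate of potential sites is closer to the truth, nuclear dimers are either in roughly 15-fold excess or sufficient for only about 15% of sites, which is exactly why the degree of saturation remains an open question.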

But there is another very interesting aspect about the concentration of dimers in the nucleus. According to some studies, NF-kB seems to generate oscillations of its nuclear content in some cell types, and those oscillations can be a way to generate specific transcription patterns:

NF-kB oscillations translate into functionally related patterns of gene expression

For example, this very recent paper:

NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

shows at Fig. 3 the occupancy curve of binding sites at nuclear level after NF-kB activation in two different cell types.

In fibroblasts, the curve is a periodic oscillation, with a frequency that varies according to various factors, and translates into different transcription scenarios accordingly:

Gene expression dynamics scale with the period (g1) and amplitude (g2) of these oscillations, which are influenced by variables such as signal strength, duration, and receptor identity.


In macrophages, instead, the curve is rather:

a single, strong nuclear translocation event which persists for as long as the stimulus remains and tends to remain above baseline for an extended period of time.

In this case, the type of transcription will probably be regulated by the area under the curve, rather than by the period and amplitude of the oscillations, as happens in fibroblasts.
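The two dynamic regimes can be caricatured with invented traces: a damped oscillation (fibroblast-like) versus a single sustained translocation (macrophage-like), with the area under the curve as the proposed readout for the latter. The curve shapes and parameters below are purely illustrative, not fitted to any real data:

```python
import math

# Toy traces for the two nuclear NF-kB dynamics described above.
dt = 0.01
t = [i * dt for i in range(int(10 / dt))]  # 10 arbitrary time units

# Fibroblast-like: damped oscillation around a baseline
fibro = [0.5 + 0.5 * math.exp(-0.2 * x) * math.cos(2 * math.pi * x / 1.5) for x in t]

# Macrophage-like: a single translocation that rises fast and decays slowly
macro = [(1 - math.exp(-3 * x)) * math.exp(-0.05 * x) for x in t]

def auc(trace, step):
    """Area under the curve: the readout suggested for macrophage-like dynamics."""
    return sum(trace) * step

print(f"AUC, fibroblast-like: {auc(fibro, dt):.2f}")
print(f"AUC, macrophage-like: {auc(macro, dt):.2f}")
```

In this caricature the oscillating trace carries its information in period and amplitude, while the sustained trace accumulates a larger integrated signal, mirroring the two readouts described in the paper.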

Interestingly, while in previous studies it seemed that the concentration of nuclear dimers could be sufficient to saturate most or all binding sites, that has been found not to be the case in more recent studies. Again from the paper about abundance:

in fact, this lack of saturation of the system is necessary to generate stimulus- and cell-type specific gene expression profiles

Moreover, the binding itself seems to be rather short-lived:

Interestingly, it is now thought that most functional NF-kB interactions with chromatin—interactions that lead to a change in transcription—are fleeting… a subsequent study using FRAP in live cells expressing RelA-GFP showed that most RelA-DNA interactions are actually quite dynamic, with half-lives of a few seconds… Indeed, a recent study used single-molecule tracking of individual Halo-tagged RelA molecules in live cells to show that the majority (∼96%) of RelA undergoes short-lived interactions lasting on average ∼0.5 s, while just ∼4% of RelA molecules form more stable complexes with a lifetime of ∼4 s.
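A toy simulation (mine, using only the percentages and mean lifetimes quoted above) shows what these two populations imply for the average dwell time of a RelA molecule on chromatin:

```python
import random

random.seed(0)

# Two-population dwell-time model from the quoted figures:
# ~96% of interactions last ~0.5 s on average, ~4% last ~4 s.
def sample_dwell():
    if random.random() < 0.96:
        return random.expovariate(1 / 0.5)  # short-lived population
    return random.expovariate(1 / 4.0)      # stable complexes

dwells = [sample_dwell() for _ in range(100_000)]
mean = sum(dwells) / len(dwells)
print(f"mean dwell time ~ {mean:.2f} s")  # expected ~0.96*0.5 + 0.04*4 = 0.64 s
```

Even with the stable 4-second population included, the expected mean dwell is only about 0.64 s, underlining how fleeting most functional interactions are.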

2. Affinity

Affinity of dimers for DNA sequences is not a clear cut matter. From the paper:

Biochemical DNA binding studies of a wide variety of 9–12 base-pair sequences have revealed that different NF-kB dimers bind far more sequences than previously thought, with different dimer species exhibiting specific but overlapping affinities for consensus and non-consensus kB site sequences.

IOWs, we have different dimers (15 different types) binding with varying affinity different DNA sequences (starting from the classical consensus sequence, but including also incomplete sequences). Remember that those sequences are rather short (the consensus sequence is 10 nucleotides long), and that there are thousands of such sequences in the genome.

Moreover, different bindings can affect transcription differently. Again, from the paper:

How might different consensus kB sites modulate the activity of the NF-kB dimers? Structure-function studies have shown that binding to different consensus kB sites can alter the conformation of the bound NF-kB dimers, thus dictating dimer function. When an NF-kB dimer interacts with a DNA sequence, side chains of the amino acids located in the DNA-binding domains of dimers contact the bases exposed in the groove of the DNA. For different consensus kB site sequences different bases are exposed in this groove, and NF-kB seems to alter its conformation to maximize interactions with the DNA and maintain high binding affinity. Changes in conformation may in turn impact NF-kB binding to co-regulators of transcription, whether these are activating or inhibitory, to specify the strength and dynamics of the transcriptional response. These findings again highlight how the huge array of kB binding site sequences must play a key role in modulating the transcription of target genes.

Quite a complex scenario, I would say!

But there is more:

Finally, as an additional layer of dimer and sequence-specific regulation, each of the subunits can be phosphorylated at multiple sites with, depending on the site, effects on nearly every step of NF-kB activation.

IOWs, the 15 dimers we have mentioned can be phosphorylated in many different ways, and that changes their binding affinities and their effects on transcription.

This section of the paper ends with a very interesting statement:

Overall, when considering the various ways in which NF-kB dimer abundances and their affinity for DNA can be modulated, it becomes clear that with these multiple cascading effects, small differences in consensus kB site sequences and small a priori differences in interaction affinities can ultimately have a large impact on the transcriptional response to NF-kB pathway activation.

Emphasis mine.

This is interesting, because in some way it seems to suggest that the whole system acts like a chaotic system, at least at some basic level. IOWs, small initial differences, maybe even random noise, can potentially have a deep effect on the general working of the whole system.

Unless, of course, there is some higher, powerful level of control.

3. Availability of high affinity kB binding sequences

We have seen that there is a great abundance and variety of binding sequences for NF-kB dimers in the human genome. But, of course, those sequences are not necessarily available. Different cell types will have a different scenario of binding sites availability.

Why?

Because, as we know, the genome and chromatin form a very dynamic system, one that can exist in many different states, continuously changing in different cell types and, in the same cell type, in different conditions.

We know rather well the many levels of control that affect DNA and chromatin state. In brief, they are essentially:

  1. DNA methylation
  2. Histone modifications (methylation, acetylation, etc)
  3. Chromatin modifications
  4. Higher levels of organization, including nuclear localization and TADs (Topologically Associating Domains)

For example, from the paper:

The promoter regions of early response genes have abundant histone acetylation or trimethylation prior to stimulation [e.g., H3K27ac, (67) and H4K20me3, (66)], a chromatin state “poised” for immediate activation…  In contrast, promoters of late genes often have hypo-acetylated histones, requiring conformational changes to the chromatin to become accessible. They are therefore unable to recruit NF-kB for up to several hours after stimulation (68), due to the slow process of chromatin remodeling.

We must remember that each wave of NF-kB activation translates into the modified transcription of a lot of different genes at the genome level. It is therefore extremely important to consider which genes are available (IOWs, whose promoters can be reached by the NF-kB signal) in each cell type and cell state.

The paper concludes:

Taken together, chromatin state and chromatin organization strongly influence the selection of DNA binding sites by NF-kB dimers and, most likely, the selection of the target genes that are regulated by these protein-DNA interaction events. Analyses that consider binding events in the context of three-dimensional nuclear organization and chromatin composition will be required to generate a more accurate view of the ways in which NF-kB DNA binding affects gene transcription.

This is the main scenario. But there are other components, which I have not considered in detail for the sake of brevity: for example, competition between NF-kB dimers, and the complex role and intervention of other co-regulators of transcription.

Does the system work?

But does the system work?

Of course it does. It is a central regulator, as we have said, of many extremely important biological processes, above all immunity. This is the system that decides how immune cells, T and B lymphocytes, have to behave, in terms of cell destiny and cell state. It is of huge relevance in all inflammatory responses, and in our defense against infections. It works, it works very well.

And what happens if it does not work properly?

Of course, like all very complex systems, errors can happen. Those interested can have a look at this recent paper:

30 years of NF-κB: a blossoming of relevance to human pathobiology

First of all, many serious genetic diseases have been linked to mutations in genes involved in the system. You can find a list in Table 1 of the above paper. Among them, for example, some forms of SCID, Severe combined immunodeficiency, one of the most severe genetic diseases of the immune system.

But, of course, a dysfunction of the NF-kB system has a very important role also in autoimmune diseases and in cancer.

Conclusions.

So, let’s try to sum up what we have seen here in the light of the original statement about biological systems that “are not machines”.

The NF-kB system is a perfect example. Even if we still understand very little of how it works, it is rather obvious that it is not a traditional machine.

A traditional machine would work differently. The signal would be transmitted from the membrane to the nucleus in the simplest possible way, without ambiguities and diversions. The Transcription Factor, once activated, would bind, at the level of the genome, very specific sites, each of them corresponding to a definite cascade of specific genes. The result would be clear cut, almost mechanical. Like a watch.

But that’s not the way things happen. There are myriads of variations, of ambiguities, of stochastic components.

The signal arrives at the membrane in multiple ways, very different from one another: IL1, IL17, TNF, bacterial LPS, and immune activation of the B cell receptor (BCR) or the T cell receptor (TCR) are all possible signals.

The signal is translated to the NF-kB proteins in very different ways: canonical or non canonical activation, involving complex protein structures such as:

The CBM signalosome, intermediate between immune activation of BCR or TCR and canonical activation of the NF-kB. This complex is made of at least three proteins, CARD11, Bcl10 and MALT1.

The IKK complex in canonical activation: this is made of three proteins, IKK alpha, IKK beta, and NEMO. Its purpose is to phosphorylate IkB, the inhibitor of the dimers, so that it can be ubiquitinated and released from the dimer. Then the dimer can relocate to the nucleus.

Non canonical pathway: it involves the following phosphorylation cascade: NIK -> IKK alpha dimer -> RelB–p100 dimer -> RelB–p52 dimer (the final TF, after processing of p100 to p52). It operates during the development of lymphoid organs and is responsible for the generation of B and T lymphocytes.

Different kinds of activated dimers relocate to the nucleus.

Different dimers, in varying abundance, interact with many different binding sites: complete or incomplete consensus sites, and probably others. The interaction is usually brief, and it can generate an oscillating pattern, or a more stable one.

Completely different sets of genes are transcribed in different cell types and in different contexts, because of the interaction of NF-kB TFs with their promoters.

Many other factors and systems contribute to the final result.

The chromatin state of the cell at the moment of the NF-kB activation is essential to determine the accessibility of different binding sites, and therefore the final transcription pattern.

All these events and interactions are quick, unstable, far from equilibrium. A lot of possible random noise is involved.

In spite of that amazing complexity and the potentially stochastic nature of the system, reliable transcription regulation and results are obtained in most cases. Those results are essential to immune cell differentiation, immune response, both innate and adaptive, inflammation, apoptosis, and many other crucial cellular processes.

So, let’s go back to our initial question.

Is this the working of a machine?

Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.

But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.

It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.

It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

It is, from all points of view, amazing.

Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.

And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer must it be for a system like NF-kB? Or, for that matter, for any complex cellular system?

Do you still have any doubts?

Added graphic: The evolutionary history, in terms of human-conserved information, of the three proteins in the CBM signalosome.
On the y axis, homologies with the human protein as bits per amino acid (bpa). On the x axis, approximate time of appearance in millions of years.
The graphic shows the big information jump in vertebrates for all three proteins, especially CARD11.

206 Replies to “Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation”

  1. Eugene says:

    My biggest concern is not even about the evolution vs. ID. It is about the technology used for the machinery of life being orders of magnitude more complex than what our brains seem to be capable of understanding or analyzing. In other words, we’re already way more complex than any machinery we can realistically hope to create. And we already exist (or being simulated, doesn’t matter). What purpose do we serve then to whoever is in possession of the technology we’re made with?

  2. gpuccio says:

    Eugene:

    Thank you for the comment.

    the technology used for the machinery of life being orders of magnitude more complex than what our brains seem to be capable of understanding or analyzing.

    Yes, that’s exactly the point I was trying to make.

    What purpose do we serve then to whoever is in possession of the technology we’re made with?

Well, that’s certainly a much bigger question. And, in many respects, a philosophical one.

    However, we can certainly try to get some clues from the design as we see it. For example, I have said very often that the main driving purpose of biological design, far from being mere survival and fitness, as neo-darwinists believe, seems to be the desire to express ever growingly complex life and, through life, ever growingly complex functions.

It should be rather obvious that, if the true purpose of biological beings were to achieve the highest survival and fitness, as neo-darwinists believe, life could easily have stopped at prokaryotes.

  3. gpuccio says:

    To all:

    Two of the papers I quote in the OP:

    Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle
    https://www.frontiersin.org/articles/10.3389/fimmu.2019.00705/full

    and:

    NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration
    https://www.frontiersin.org/articles/10.3389/fimmu.2019.00705/full

    are really part of a research topic:

    Understanding Immunobiology Through The Specificity of NF-kB
    https://www.frontiersin.org/research-topics/7955/understanding-immunobiology-through-the-specificity-of-nf-b#articles

    including 8 very interesting and very recent papers about NF-kB, at Frontiers in Immunology.

    Here are the titles:

    Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB

An NF-kB Activity Calculator to Delineate Signaling Crosstalk: Type I and II Interferons Enhance NF-κB via Distinct Mechanisms

    Cellular Specificity of NF-kB Function in the Nervous System

    Immune Differentiation Regulator p100 Tunes NF-kB Responses to TNF

    Techniques for Studying Decoding of Single Cell Dynamics

    NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration (quoted in the OP)

    Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle (quoted in the OP)

    Signal Distortion: How Intracellular Pathogens Alter Host Cell Fate by Modulating NF-kB Dynamics

    You can access all of them from the linked page.

    Those papers, as a whole, certainly add a lot to the ideas I have expressed in the OP.

    I will have a look at all of them, and discuss here the most interesting things.

  4. 4
    PeterA says:

    Right from the start, GP graciously warns us (curious readers) to fasten our seat belts and get ready for a thrilling ride that should be filled with very insightful but provocative explanations (perhaps a little too technical for some folks):

    the cell implements the same functions as complex machines do, and much more.

    to do that, you need much greater functional complexity than you need to realize a conventional machine.

    dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. 

    Please, note that almost a year ago GP wrote this excellent article:
    Transcription Regulation: A Miracle Of Engineering
    (visited 3,545 times and commented 334 times)

    following another very interesting discussion started by PaV a month earlier:
    Chromatin Topology: The New (And Latest) Functional Complexity
    (visited 3,338 times and commented 241 times)

    Before this discussion goes further, I want to share my delight in seeing this excellent article here today and express my deep gratitude to GP for taking time to write it and for leading the discussion that I expect this fascinating (often mind boggling) topic should provoke.

  5. 5
    kairosfocus says:

    Another GP thought-treat! Yay!!!! KF

  6. 6
    jawa says:

    I second KF @5.
    It’s a pleasure to see a new OP by GP.
    However, as usual, it’s so dense that it requires some chewing before it can be digested, at least partially. 🙂
    Perhaps this time some of the loud anti-ID folks, like the professors from Toronto and Kentucky, will dare to present some valid arguments? However, I won’t hold my breath. 🙂

  7. 7
    OLV says:

    I agree with PeterA @ 4 and join Jawa @6 to second KF@5.

    However, before embarking on a careful reading of what GP has written, let me publicly confess here that I still don’t understand certain basic things associated with transcription:
    1. are there many DNA segments that can get transcribed by the RNA polymerase to a pre-mRNA that later can be spliced to form the mRNA that goes to translation?
    2. what mechanisms determine which of those multiple potential segments is transcribed at a given moment? Don’t they all have starting and ending points? Then why will the RNA-polymerase transcribe one segment and not another? Are the starting marks different for every DNA segment?
    3. is this an epigenetic issue or something else?
    Perhaps these (most probably dumb) questions have been answered many times in the literature I have read, but I still don’t quite get it. I would fail to answer those questions if I had to pass a test on this subject right now.
    Any help with this?
    Thanks.
    PS. the papers GP has linked in this OP are very interesting.

  8. 8
    gpuccio says:

    PeterA:

    Thank you. 🙂

    Indeed, the topic is fascinating. We really need to go beyond our conventional ideas about biology, armed by the powerful weapons of design inference and functional complexity.

  9. 9
    gpuccio says:

    KF:

    Thank you! 🙂

    Appreciate your enthusiasm! 🙂

  10. 10
    gpuccio says:

    Jawa:

    Thank you! 🙂

    I really hope there will be some interesting discussion.

  11. 11
    gpuccio says:

    OLV:

    Thank you! 🙂

    As you ask questions, here are my answers:

    1. Essentially, all protein coding genes, about 20000 in the human genome.

    2. It requires the binding of general TFs at the promoter and the formation of the pre-initiation complex (which is the same for all genes), plus the binding of specific TFs at one or more enhancer sites, with specific modifications of the chromatin structures. At least.

    3. Yes. It is an epigenetic process.

  12. 12
    bornagain77 says:

    Did you see this recent paper, GP? Particularly this: “Even between closely related species there’s a non-negligible portion of TFs that are likely to bind new sequences.”

    Dozens Of Genes Once Thought Widespread Are Unique To Humans – May 27, 2019
    Excerpt: Researchers at the Donnelly Centre in Toronto have found that dozens of genes, previously thought to have similar roles across different organisms, are in fact unique to humans and could help explain how our species came to exist. These genes code for a class of proteins known as transcription factors, or TFs, which control gene activity. TFs recognize specific snippets of the DNA code called motifs, and use them as landing sites to bind the DNA and turn genes on or off.,,,
    The findings reveal that some sub-classes of TFs are much more functionally diverse than previously thought.
    “Even between closely related species there’s a non-negligible portion of TFs that are likely to bind new sequences,” says Sam Lambert, former graduate student in Hughes’ lab who did most of the work on the paper and has since moved to the University of Cambridge for a postdoctoral stint.
    “This means they are likely to have novel functions by regulating different genes, which may be important for species differences,” he says.
    https://uncommondescent.com/human-evolution/dozens-of-genes-once-thought-widespread-are-unique-to-humans/

    paper

    Excerpt: Similarity regression inherently quantifies TF motif evolution, and shows that previous claims of near-complete conservation of motifs between human and Drosophila are inflated, with nearly half of the motifs in each species absent from the other, largely due to extensive divergence in C2H2 zinc finger proteins.
    https://www.nature.com/articles/s41588-019-0411-1

  13. 13
    gpuccio says:

    To all:

    Well, the first paper in the “research topic” I mentioned at #3 is:
    Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB

    It immediately brings us back to an old and recurring concept:

    crosstalk

    Now, if there is one concept that screams design, that is certainly “crosstalk”.

    Because, to have crosstalk, you need at least two intelligent systems, each of them with its own “language”, interacting in intelligent ways. Or, of course, at least two intelligent people! 🙂

    This paper is about one specific aspect of the NF-kB system: transcription regulation in response to non-specific stimuli from infecting agents, the so-called innate immune response.

    You may remember from the OP that the specific receptors for bacterial or viral components (for example bacterial lipopolysaccharide, LPS) are called Toll-like receptors (TLRs), and that their activation converges, through its own complex pathways, into the canonical pathway of activation of the NF-kB system.

    This is a generic way to respond to infections, and is called the “innate immune response”, to distinguish it from the adaptive immune response, where T and B lymphocytes recognize specific patterns (epitopes) in specific antigens and react to them through a complex memory and amplification process. As we know, the NF-kB system has a very central role in adaptive immunity too, but that role is completely different.

    But let’s go back to innate immunity. The response, in this case, is an inflammatory response. This response, of course, is more generic than the refined adaptive immune response, involving antibodies, killer cells and so on. However, even if simpler, the quality and quantity of the inflammatory response must be strictly fine-tuned, because otherwise it becomes really dangerous for the tissues.

    This paragraph sums up the main concepts in the paper:

    To ensure effective host defense against pathogens and to maintain tissue integrity, immune cells must integrate multiple signals to produce appropriate responses (14). Cells of the innate immune system are equipped with pattern recognition-receptors (PRRs) that detect pathogen-derived molecules, such as lipopolysaccharides and dsRNA (3). Once activated, PRRs initiate series of intracellular biochemical events that converge on transcription factors that regulate powerful inflammatory gene expression programs (15). To tune inflammatory responses, pathways that do not trigger inflammatory responses themselves may modulate signal transduction from PRRs to transcription factors through crosstalk mechanisms (Figure 1). Crosstalk allows cells to shape the inflammatory response to the context of their microenvironment and history (16). Crosstalk between two signaling pathways may emerge due to shared signaling components, direct interactions between pathway-specific components, and regulation of the expression level of a pathway-specific component by the other pathway (1, 17). Since toll-like receptors (TLRs) are the best characterized PRRs, they provide the most salient examples of crosstalk at the receptor module. Key determinants of tissue microenvironments are type I and II interferons (IFNs), which do not activate NF-kB, but regulate NF-kB-dependent gene expression (18–21). As such, this review focuses on the cross-regulation of the TLR-NF-kB signaling axis by type I and II IFNs.

    So, a few interesting points:

    a) TLRs, already a rather complex class of receptors, are part of a wider class of receptors, the pattern recognition-receptors (PRRs). Complexity never stops!

    b) The interferon system is another, different system involved in innate immunity, especially in viral infections. We all know its importance. Interferons are a complex set of cytokines with their own complex set of receptors and responses.

    c) However, the interferon system does not directly activate the NF-kB system. In a sense, they are two “parallel” signaling systems, both involved in innate immune responses.

    d) But, as the paper well outlines, there is a lot of “crosstalk” between the two systems. One interferes with the other at multiple levels. And that crosstalk is very important for a strict fine-tuning of the innate immune response and of inflammatory processes.

    Interesting, isn’t it?

    I quote here the conclusions:

    Concluding Remarks
    Maintaining a delicate balance between effective host defense and deleterious inflammatory responses requires precise control of NF-kB signaling (111). Multiple regulatory circuits have evolved to fine-tune NF-kB-mediated inflammation through context-specific crosstalk (112). In this work, we have highlighted specific components of the NF-kB signaling pathway for which crosstalk regulation is well-established. Despite decades of research, our current understanding of NF-kB signaling remains insufficient to yield effective pharmacological targets (111, 113). Effective and specific pharmacological modulation of NF-kB activity requires detailed, quantitative understanding of NF-kB signaling dynamics (57). Furthermore, achieving cell-type and context-specific modulation of NF-kB would be a panacea for many autoimmune and infectious diseases, as well as malignancies (112–114).

    To dissect the dynamic regulation of NF-kB signaling, quantitative approaches with single-cell resolution are required (115). By measuring the full distribution of signaling dynamics and gene expression in single cells, rather than simple averages, one can decipher cell-intrinsic properties from tissue-intrinsic properties (116–118). Such single-cell analyses may reveal strategies for targeting pathological cell populations with high specificity, which can mitigate adverse effects of pharmacological therapy (57, 113). Furthermore, with the aid of mathematical and computational modeling, one can conduct experiments in silico that may be prohibitive in vitro or ex vivo (57, 119, 120).

    Finally, cross-regulatory pathways may fine-tune NF-kB activity in a gene-specific manner. Many studies have identified the molecular components of gene-regulatory networks (GRNs) that control NF-kB-dependent gene expression (15, 121). The regulatory mechanisms that define the topology of these GRNs include chromatin remodeling, transcription initiation and elongation, and post-transcriptional processing (15). They allow for combinatorial control by multiple factors and pathways, as well as cross-regulation (15). Further work will be required to delineate them in various physiological contexts.

    As usual, emphasis is mine.

    Please note the “have evolved” at the beginning, practically used by default instead of a simple “do exist” or “can be observed”. 🙂

  14. 14
    gpuccio says:

    Bornagain77:

    Yes, I have looked at that paper. Interesting.

    Indeed, divergence in TF sequences and motifs is certainly one of the main tools of specific transcription control in different organisms.

    A lot of superficial ideas about TFs are probably due to the rather strong conservation of known DNA binding domains (DBDs). However, DBDs are only part of the story. The most interesting part of TF sequences is certainly to be found in the less conserved sequences and domains, even in intrinsically disordered sequences.

  15. 15
    gpuccio says:

    To all:

    This is a more general paper about oscillations in TF nuclear occupancy as a way to regulate transcription:

    Transcription factor oscillations in neural stem cells: Implications for accurate control of gene expression

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5345753/

    The abstract:

    Naturally occurring oscillations in glucocorticoids induce a cyclic activation of the glucocorticoid receptor (GR), a well-characterized ligand-activated transcription factor. These cycles of GR activation/deactivation result in rapid GR exchange at genomic response elements and GR recycling through the chaperone machinery, ultimately generating pulses of GR-mediated transcriptional activity of target genes. In a recent article we have discussed the implications of circadian and high-frequency (ultradian) glucocorticoid oscillations for the dynamic control of gene expression in hippocampal neural stem/progenitor cells (NSPCs) (Fitzsimons et al., Front. Neuroendocrinol., 2016). Interestingly, this oscillatory transcriptional activity is common to other transcription factors, many of which regulate key biological functions in NSPCs, such as NF-kB, p53, Wnt and Notch. Here, we discuss the oscillatory behavior of these transcription factors, their role in a biologically accurate target regulation and the potential importance for a dynamic control of transcription activity and gene expression in NSPCs.

    And here is the part about NF-kB:

    The NF-kB pathway is composed of a group of transcription factors that bind to form homo- or hetero-dimers. Once formed, these protein complexes control several cellular functions such as the response to stress and the regulation of growth, cell cycle, survival, apoptosis and differentiation in NSPCs.14-16 Oscillations in NF-kB were first observed in embryonic fibroblasts, this observation suggested that temporal control of NF-kB activation is coordinated by the sequential degradation and synthesis of inhibitor kappa B (IkB) proteins.3

    More recently, oscillations in the relative nuclear/cytosolic concentration of NF-kB transcription factors have been observed in single cells in vivo, indicating this may be an additional regulatory mechanism to control NF-kB-dependent transcriptional activity. Importantly, the frequency and amplitude of these oscillations changed in a cell-type dependent fashion and differentially affected the dynamics of gene expression,5 indicating that NF-kB transcription factors may use changes in the frequency and amplitude of their oscillatory dynamics to regulate the transcription of target genes.1,17 Thus, the NF-kB pathway provides a well-characterized example of how oscillatory transcription factor activity may encode additional, biologically relevant, information for an accurate control of gene expression.

    So, these “waves” of nuclear occupancy by TFs, regulating transcription according to their frequency/period and amplitude, seem to be a pattern that is not isolated at all. Maybe more important and common than we can at present imagine.
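    The IkB negative-feedback loop behind those “waves” can be made concrete with a toy simulation. This is purely an illustrative sketch: a three-variable Euler integration with invented parameters (none of the rate constants below come from the quoted papers), showing how NF-kB-driven synthesis of its own inhibitor produces pulses of nuclear occupancy:

```python
# Toy model of the NF-kB / IkB negative feedback loop (illustrative only).
# N: nuclear NF-kB activity; M: IkB mRNA; I: cytoplasmic IkB protein.
# All rate constants are invented for illustration, not taken from the literature.

def simulate(steps=30000, dt=0.001):
    N, M, I = 0.1, 0.0, 0.0
    trace = []
    for _ in range(steps):
        dN = 1.0 / (1.0 + (I / 0.1) ** 2) - 1.0 * N  # IkB blocks nuclear NF-kB
        dM = 0.5 * N - 0.5 * M                       # NF-kB transcribes IkB mRNA
        dI = 0.5 * M - 0.3 * I - 0.5 * N * I         # IkB translated, then degraded
        N += dN * dt
        M += dM * dt
        I += dI * dt
        trace.append(N)
    return trace

trace = simulate()
peak = max(trace)                 # nuclear NF-kB rises first...
trough = min(trace[4000:12000])   # ...then IkB accumulates and shuts it down
```

The key design feature is the delay: NF-kB must first transcribe and translate its own inhibitor before being pushed out again, and it is this delayed negative feedback that turns a simple inhibition into the damped or sustained oscillations reported in the papers above.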

  16. 16
    Eugen says:

    We have classical, celestial and quantum mechanics but this article describes the process of what we should call chemical mechanics. Why not? 🙂

  17. 17
    OLV says:

    GP @11:

    (Regarding my questions @7)

    “It requires the binding of general TFs at the promoter and the formation of the pre-initiation complex (which is the same for all genes), plus the binding of specific TFs at one or more enhancer sites, with specific modifications of the chromatin structures. At least.”

    thanks for the explanation.

    Why “at least”? Could there be more?

    With the information you provided, I found this:

    Introduction to the Thematic Minireview Series: Chromatin and transcription

  18. 18
    gpuccio says:

    Eugen at #16:

    We have classical, celestial and quantum mechanics but this article describes the process of what we should call chemical mechanics. Why not?

    Yes, why not?

    Chemical mechanics? That is a brilliant way to put it! 🙂

  19. 19
    gpuccio says:

    OLV at #17:

    “Why “at least”? Could there be more?”

    Yes. There can always be more, in biology. Indeed, strangely, there always is more. 🙂

    By the way, nice mini-review about chromatin and transcription you found! I will certainly read it with great attention.

  20. 20
    gpuccio says:

    To all:

    We have said that NF-kB is a ubiquitously expressed transcription factor. It really is!

    So, while its better understood functions are mainly related to the immune system and inflammation, it implements completely different functions in other types of cells.

    This very interesting paper, which is part of the research topic quoted at #3, is about the increasing evidence of the important role of the NF-kB system in the Central Nervous System:

    Cellular Specificity of NF-kB Function in the Nervous System

    https://www.frontiersin.org/articles/10.3389/fimmu.2019.01043/full

    And, again, it focuses on the cellular specificity of the NF-kB response.

    Here is the introduction:

    Nuclear Factor Kappa B (NF-kB) is a ubiquitously expressed transcription factor with key functions in a wide array of biological systems. While the role of NF-kB in processes, such as host immunity and oncogenesis has been more clearly defined, an understanding of the basic functions of NF-kB in the nervous system has lagged behind. The vast cell-type heterogeneity within the central nervous system (CNS) and the interplay between cell-type specific roles of NF-kB contributes to the complexity of understanding NF-kB functions in the brain. In this review, we will focus on the emerging understanding of cell-autonomous regulation of NF-kB signaling as well as the non-cell-autonomous functional impacts of NF-kB activation in the mammalian nervous system. We will focus on recent work which is unlocking the pleiotropic roles of NF-kB in neurons and glial cells (including astrocytes and microglia). Normal physiology as well as disorders of the CNS in which NF-kB signaling has been implicated will be discussed with reference to the lens of cell-type specific responses.

    Table 1 in the paper lists the following functions for NF-kB in neurons:

    -Synaptic plasticity
    -Learning and memory
    -Synapse to nuclear communication
    -Developmental growth and survival in response to trophic cues

    And, for glia:

    -Immune response
    -Injury response
    -Glutamate clearance
    -Central control of metabolism

    As can be seen, while the roles in glial cells are more similar to what we would expect from the more common roles in the immune system, the roles in neurons are much more specific and refined.

    The Table also mentions the following:

    “The pleiotropic functions of the NF-kB signaling pathway coupled with the cellular diversity of the nervous system mean that this table reflects generalizations, while more specific details are in the text of this review.”

    So, while I certainly invite all interested to look at the “more specific details”, I am really left with the strange feeling that, for the same reasons mentioned there (pleiotropic functions, cellular diversity, and probably many other things), everything we know about the NF-kB system, and probably all similar biological systems, really “reflects generalizations”.

    And that should really give us a deep sense of awe.

  21. 21
    gpuccio says:

    To all:

    This paper deals in more detail with the role of NF-kB system in synaptic plasticity, memory and learning:

    Transcriptional Control of Synaptic Plasticity by Transcription Factor NF-kB

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4736603/

    Abstract
    Activation of nuclear factor kappa B (NF-kB) transcription factors is required for the induction of synaptic plasticity and memory formation. All components of this signaling pathway are localized at synapses, and transcriptionally active NF-kB dimers move to the nucleus to translate synaptic signals into altered gene expression. Neuron-specific inhibition results in altered connectivity of excitatory and inhibitory synapses and functionally in selective learning deficits. Recent research on transgenic mice with impaired or hyperactivated NF-kB gave important insights into plasticity-related target gene expression that is regulated by NF-kB. In this minireview, we update the available data on the role of this transcription factor for learning and memory formation and comment on cross-sectional activation of NF-kB in the aged and diseased brain that may directly or indirectly affect kB-dependent transcription of synaptic genes.

    1. Introduction
    Acquisition and consolidation of new information by neuronal networks often referred to as learning and memory formation depend on the instant alterations of electrophysiological parameters of synaptic connections (long-term potentiation, long-term depression), on the generation of new neurons (neuroneogenesis), on the outgrowth of axons and dendrites (neuritogenesis), and on the formation/remodulation of dendritic spines (synaptogenesis). The transmission of active synapses becomes potentiated by additional opening of calcium channels and incorporation of preexisting channel proteins, that is, during the induction of long-term potentiation. In contrast, long-term structural reorganization of the neuronal network depends on the induction of specific gene expression programs [1]. The transcription factor NF-kB has been shown to be involved in all of the aforementioned processes of learning-associated neuronal plasticity, that is, long-term potentiation, neuroneogenesis, neuritogenesis, and synaptogenesis (for review, see [2]).

    A few concepts:

    a) All NF-kB Pathway Proteins Are Present at the Synapse.

    b) NF-kB Becomes Activated at Active Synapses

    c) NF-kB Induces Expression of Target Genes for Synaptic Plasticity

    d) Activation of NF-kB Is Required for Learning and Memory Formation

  22. 22
    jawa says:

    Can’t understand why the anti-ID folks allow GP to discredit neo-Darwinism so boldly in his OPs and commentaries. Are there objectors left out there? Have they missed GP’s arguments?
    Where are professors Larry Moran, Art Hunter, and other distinguished academic personalities that openly oppose ID?
    Did they give up? Do they lack solid arguments to debate GP?
    Are they afraid of experiencing public embarrassment?

  23. 23
    jawa says:

    Sorry, someone called my attention to my misspelling of UKY Professor Art Hunt’s name in my previous post. Mea culpa. 🙁

    I was referring to this distinguished professor who has posted interesting comments here before:

    https://pss.ca.uky.edu/person/arthur-hunt
    http://www.uky.edu/~aghunt00/agh.html

    It would be interesting to have him back here debating GP.

  24. 24
    Silver Asiatic says:

    jawa

    Can’t understand why the anti-ID folks allow GP to discredit neo-Darwinism so boldly in his OPs and commentaries.

    Discrediting Neo-Darwinism is one phase that we go through. Probably there is enough dissent within evolutionary science that they will back off from the more extreme proclamations of the greatness of Darwin. Mainstream science mags are openly saying things like “it overturns Darwinian ideas”. They don’t mind the idea of revolution. They’re building a defense for the next phase. It won’t be Neo-Darwinism but a collection of ad hoc observations and speculations. They explain that things happen. Self-organizing chemical determination caused it. They don’t need mutations or selection. Any mindless actions will do. It’s not about Darwin, and it’s not even about evolution. It’s not even about science. It’s all just a program to explain the world according to a pre-existing belief system. Even materialism is expendable when it is shown to be ridiculous. They will sell-out and jettison all previous claims and everything they use and just grab another (that’s how science works, we hear) – it’s all about protecting their inner belief. That’s the one thing that drives all of it. We know what that inner belief is, and ID is an attempt to chip away at it from the edges – indirectly and carefully, using their own terminology and doctrines. We’ve done well.
    But defeating Darwin is only a small part. Behe has been doing it for years and they’ll eventually accept his findings. The evolution story line will just adjust itself.
    Proving that there is actually Intelligent Design is much more difficult and without a knock-down argument, our best efforts remain ignored.

  25. 25
    gpuccio says:

    Jawa at #22:

    Frankly, I don’t think they are interested in my arguments. They are probably too bad!

  26. 26
    gpuccio says:

    Jawa and others:

    Or maybe they don’t believe that there is anything in my arguments that really favours design. Some have made that objection in the past, I believe: good arguments, but what have they to do with design?

    Well. I believe that they have a lot to do with design.

    What do you think? Do my arguments in this OP, about harnessing stochastic change to get strict function, favour the design hypothesis? Or are they perfectly compatible with a neo-darwinian view of reality?

    Just to know…

  27. 27
    gpuccio says:

    Jawa at #23:

    Of course Arthur Hunt would be very welcome here. Indeed, any competent defender of the neo-darwinian paradigm would be very welcome here.

  28. 28
    gpuccio says:

    Silver Asiatic at #24:

    I think that the amazing complexity of network functional configurations in these complex regulation systems is direct evidence of intelligence and purpose. It is, of course, also an obvious falsification of the neo-darwinist paradigm, which cannot even begin to explain these kinds of facts.

    You are right that post-post-neo-darwinists are trying as well as they can to build new and more fashionable religions, such as self-organization, emerging properties, magical stochastic systems, and any other intangible, imaginary principle that is supposed to help.

    But believe me, that will not do. That simply does not work.

    When really pressured, they always go back to the old good fairy tale: RV + NS. In the end, it’s the only lie that retains some superficial credibility. The only game in town.

    Except, of course, design. 🙂

  29. 29
    gpuccio says:

    To all:

    This is interesting:

    Single-molecule dynamics and genome-wide transcriptomics reveal that NF-kB (p65)-DNA binding times can be decoupled from transcriptional activation

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6353211/

    Abstract
    Transcription factors (TFs) regulate gene expression in both prokaryotes and eukaryotes by recognizing and binding to specific DNA promoter sequences. In higher eukaryotes, it remains unclear how the duration of TF binding to DNA relates to downstream transcriptional output. Here, we address this question for the transcriptional activator NF-kB (p65), by live-cell single molecule imaging of TF-DNA binding kinetics and genome-wide quantification of p65-mediated transcription. We used mutants of p65, perturbing either the DNA binding domain (DBD) or the protein-protein transactivation domain (TAD). We found that p65-DNA binding time was predominantly determined by its DBD and directly correlated with its transcriptional output as long as the TAD is intact. Surprisingly, mutation or deletion of the TAD did not modify p65-DNA binding stability, suggesting that the p65 TAD generally contributes neither to the assembly of an “enhanceosome,” nor to the active removal of p65 from putative specific binding sites. However, TAD removal did reduce p65-mediated transcriptional activation, indicating that protein-protein interactions act to translate the long-lived p65-DNA binding into productive transcription.

    Now, let’s try to understand what this means.

    First of all, just to avoid confusion, p65 is just another name for RelA, the most common among the 5 proteins that contribute to NF-kB dimers. The paper here studied the behaviour of the p65(RelA)-p50 dimer, with special focus on the RelA interaction with DNA.

    Now, we know that RelA, like all TFs, has a DNA binding domain (DBD) which binds specific DNA sites. We also know that the DBD is usually strongly conserved, and is supposed to be the most functional part in the TF.

    The paper here shows, in brief, that the DBD is really responsible for the DNA binding and for its stability (the duration of the binding), and the duration is connected to transcription. However, it is not the DBD itself that works on transcription, but rather the two protein-protein transactivation domains (TADs). While DNA binding is necessary to activate transcription, mere DNA binding does not work: mutations in the TADs will reduce transcription, even if the DNA binding remains stable. IOWs, it’s the TADs that really affect transcription, even if the DBD is necessary.

    OK, why is that interesting?

    Let’s see. The DBD is located, in the RelA molecule, in the first 300 AAs (the human protein is 551 AAs long). The two TADs are located, instead, in the last part of the molecule, more or less the last 100 – 200 AAs.

    So, I have blasted the human protein against our old friends, cartilaginous fishes.

    Is the protein conserved across our usual 400+ million years?

    The answer is the same as for most TFs: moderately so. In Rhincodon typus, we have about 404 bits of homology, less than 1 bit per aminoacid (bpa). Enough, but not too much.

    But is it true that the DBD is highly conserved?

    It certainly is. The 404 bits of homology, indeed, are completely contained in the first 300 AAs or so. IOWs, the homology is practically completely due to the DBD.

    So yes, the DBD is highly conserved.

    The rest of the sequence, not at all.

    In particular, the last 100 – 200 AAs at the C terminal, where the TAD domains are localized, show almost no homology between humans and cartilaginous fishes.

    But… we know that those TAD domains are essential for the function. It’s them that really activate the transcription cascade. We can have no doubt about that!

    And so?

    So, this is a clear example of a concept that I have tried to defend many times here.

    There is function which remains the same through natural history. Therefore, the corresponding sequences are highly conserved.

    And there is function which changes. Which must change from species to species. Which is more specific to the individual species.

    That second type of function is not highly conserved at sequence level. Not because it is less essential, but because it is different in different species, and therefore has to change to remain functional.

    So, in RelA we can distinguish (at least) two different functions:

    a) The DNA binding: this function is implemented by the DBD (first 300 AAs). It happens very much in the same way in humans and cartilaginous fishes, and therefore the corresponding sequences remain highly homologous after 400+ million years of evolutionary separation.

    b) The protein-protein interaction which really activates the specific transcription: this function is implemented by the TADs (last 200 AAs). It is completely different in cartilaginous fishes and humans, because probably different genes are activated by the same signal, and therefore the corresponding sequence is not conserved.

    But it is highly functional just the same. In different ways, in the two different species.

    IOWs, my measure of functional information based on conserved homologies through long evolutionary times does measure functional information, but usually underestimates it. For example, in this case the value of 404 bits would measure only the conserved function in the DBD, but it would miss completely the undeniable functional information in the TAD domains, because that information, while certainly present, is not conserved among species.

    This is, IMO, a very important point.

  30. 30
    Silver Asiatic says:

    GP
    Agreed. You’ve done a great job to expose the reality of those systems. The functional relationships are indication of purpose and design, yes. I think what happens also is that evolutionists find some safety in the complexity that you reveal. They assume that nobody will actually go that far “down into the weeds” so they can always claim there’s something going on that is far too sophisticated for the average IDist to understand. So, they hide in the details.
    You’ve called their bluff and show what is really going on, and it is inexplicable from their mechanisms. They look for an escape but there is none. I agree also that it’s not merely a defeat of RM + NS that is indicated, but evidence of design in the actual operation of complex systems.
    Another tactic we see is that an extremely minor point is attacked and they attempt to show that it could have resulted from a mutation or HGT or drift. If they can make it half-way plausible then their entire claim will stand unrefuted, supposedly.
    It’s a game of hide-and-seek, whack-a-mole. We have to deal with 50 years of story-telling that just continued to build one assumption upon another, without any evidence, and having gained unquestioning support from academia simply on the idea that “evolution is right and every educated and intelligent person believes in it”. But even in papers citing evolution they never (or rarely) give the probabilistic outlooks on how it could have happened.

  31. 31
    Silver Asiatic says:

    GP

    What do you think? Do my arguments in this OP, about harnessing stochastic change to get strict function, favour the design hypothesis? Or are they perfectly compatible with a neo-darwinian view of reality?

    I think you did a great job, but just a thought …

    You responded to the notion that supported our view – the researcher says that the cell is not merely engineering but is more dynamic. So, we support that and you showed that the cell is far more than a machine.

    However, in supporting that researcher’s view, has the discussion changed?

    In this case, the researcher is actually saying that deterministic processes cannot explain these cellular functions. He says it’s all about self-organization, etc.

    Now, what you have done is amplified his statement very wonderfully. However …
    What remains open are a few things:
    1. Why didn’t the researcher, stating what you (and we) would and did – just conclude Design?
    2. The researcher is attacking Darwinism (subtly) accepting some of it:

    This familiar understanding grounds the conviction that a cell’s organization can be explained reductionistically, as well as the idea that its molecular pathways can be construed as deterministic circuits. The machine conception of the cell owes a great deal of its success to the methods traditionally used in molecular biology. However, the recent introduction of novel experimental techniques capable of tracking individual molecules within cells in real time is leading to the rapid accumulation of data that are inconsistent with an engineering view of the cell.

    … so, hasn’t he already conceded the game to us on that point?

    Could we now show how self-organization is not a strong enough answer for this type of system?

    I believe we could simply use Nicholson’s paper to discredit Darwinism (as he does himself), and our amplification of his work does “favor a design view”. But we don’t have enough data on how he (and others) believe self-organization really works as a substitute for Darwinian mechanisms, and that weakens support for Design in some ways.

  32. 32
    gpuccio says:

    Bornagain77 at #12:

    I believe that my comment at #29 is strictly connected to your observations. It also expands, with a real example, the simple ideas I had already expressed at #14.

    So, you might like to have a look at it! 🙂

  33. 33
    gpuccio says:

    Silver Asiatic at #30:

    I absolutely agree with what you say here! 🙂

  34. 34
    gpuccio says:

    Silver Asiatic at #31:

    Very good points.

    Yes, my argument is exactly that as the cell is more than a machine, and yet it implements the same type of functions as traditional machines do, only with much higher flexibility and complexity, it does require a lot more intelligent design and engineering to be able to work.

    So, it is absolutely true that the researcher in that paper has made a greater point for Intelligent Design.

    But, of course, he (or they) will never admit such a thing! And we know very well why.

    So, the call to “self-organization”, or to “stochastic systems”.

    Of course, that’s simply mystification. And not even a good one.

    I will comment on the famous concept of “self-organization” in my next post.

  35. 35
    bornagain77 says:

    Per Gp 32, it is not enough, per falsification, to find examples that support your theory. In other words, I can find plenty of counterexamples.

  36. 36
    gpuccio says:

    Bornagain77 at #35:

    I am not sure that I understand what you mean.

    My theory? Falsification? Counterexamples?

    At #12 you quote a paper that says:

    “Similarity regression inherently quantifies TF motif evolution, and shows that previous claims of near-complete conservation of motifs between human and Drosophila are inflated, with nearly half of the motifs in each species absent from the other, largely due to extensive divergence in C2H2 zinc finger proteins.”

    OK?

    At #14 I agree with the paper, and add a comment:

    “Indeed, divergence in TF sequences and motifs is certainly one of the main tools of specific transcription control in different organisms.

    A lot of superficial ideas about TFs are probably due to the rather strong conservation of known DNA binding domains (DBDs). However, DBDs are only part of the story. The most interesting part of TF sequences is certainly to be found in the less conserved sequences and domains, even in intrinsically disordered sequences.”

    OK?

    At #29 I reference a paper about RelA, one of the TFs discussed in this OP, that shows a clear example of what I said at #14: homology of the DBD and divergence of the functional TADs between humans and cartilaginous fishes. Which is exactly what was stated in the paper you quoted.

    What is the problem? What am I missing?

  37. 37
    bornagain77 says:

    “What is the problem? What am I missing?”

    Could be me missing something. I thought you might, with your emphasis on conservation, be pushing for CD again.

  38. 38
    gpuccio says:

    Silver Asiatic and all:

    OK, a few words about the myth of “self organization”.

    You say:

    “But we don’t have enough data on how he (and others) believe self-organization really works as a substitute for Darwinian mechanisms, and that weakens support for Design in some ways.”

    It is perfectly true that we “don’t have enough data” about that. We don’t have them because there are none: “self organization” simply does not work as a substitute for Darwinian mechanisms. IOWs, it explains absolutely nothing about functional complexity (not that Darwinian mechanisms do, but at least they try).

    Let’s see. I would say that there is a correct concept of self-organization, and a completely mythological expansion of it to realities that have nothing to do with it.

    The correct concept of self-organization comes from physics and chemistry, essentially. It is the science behind systems that present some unexpected “order” deriving from the interaction of random components and physical laws.

    Examples:

    a) Physics: Heat applied evenly to the bottom of a tray filled with a thin sheet of viscous oil transforms the smooth surface of the oil into an array of hexagonal cells of moving fluid called Bénard convection cells.

    b) Chemistry: A Belousov–Zhabotinsky reaction, or BZ reaction, is a nonlinear chemical oscillator, including bromine and an acid. These reactions are far from equilibrium and remain so for a significant length of time and evolve chaotically, being characterized by a noise-induced order.

    And so on.

    Now, the concept of self-organization has been artificially expanded to almost everything, including biology. But the phenomenon is essentially derived from this type of physical model.

    In general, in these examples, some stochastic system tends to achieve some more or less ordered stabilization towards what is called an attractor.

    Now, to make things simple, I will just mention a few important points that show how the application of those principles to biology is completely wrong.

    1) In all those well known physical systems, the system obeys the laws of physics, and the pattern that “emerges” can very well be explained as an interaction between those laws and some random component. Snowflakes are another example.

    2) The property we observe in these systems is some form of order. That is very important. It is the most important reason why self-organization has nothing to do with functional complexity.

    3) Functional complexity is the number of specific bits that are necessary to implement a function. It has nothing to do with a generic “order”. Take a protein that has an enzymatic activity, for example, and compare it to a snowflake. The snowflake has order, but no complex function. Its order can be explained by simple laws, and the differences between snowflakes can be explained by random differences in the conditions of the system. Instead, the function of a protein strictly depends on the sequence of AAs. It has nothing to do with random components, and it follows a very specific “recipe” coming from outside the system: the specific sequence in the protein, which in turn depends on the specific sequence of nucleotides in the protein coding gene. There is no way that such a specific sequence can be the result of “self-organization”. To believe that it is the result of Natural Selection is foolish, but at least it has some superficial rationale. But to believe that it can be the result of self-organization, of physical and chemical laws acting on random components, is total folly.

    4) The simple truth is that the sequence of AAs generates function according to chemical rules, but to find what sequence among all possible sequences will have the function requires deep understanding of the rules of chemistry, and extreme computational power. We are still not able to build functional proteins by a top down process. Bottom up processes are more efficient, but still require a lot of knowledge, computation power, and usually strictly guided artificial selection. Even so, we are completely unable to engineer anything like ATP synthase, as I have discussed in detail many times. Nor could RV + NS ever do that.

    But, certainly, no amount of “self-organization” in the whole reality could even begin to do such a thing.

    5) Complex networks like the one I have discussed here certainly elude our understanding in many ways. But one thing is certain: they do require tons of functional information at the level of the sequences in proteins and other parts of the genome to work correctly. As we have seen in the OP, mutations in different parts of the system are connected to extremely serious diseases. Of course, no self-organization of any kind can ever correct those small errors in digital functional information.

    6) The function of a protein is not an “emerging” quality of the protein any more than the function of a watch is an emerging quality of the gears. The function of a protein depends on a very precise correspondence between the digital sequence of AAs and the laws of biochemistry, which determines the folding and the final structure and status (or statuses) of the protein. This is information. The same information that makes the code for Excel a functional reality. Do we see codes for software emerging from self-organization? We should maybe inform video game programmers of that, they could spare a lot of work and time.

    In the end, all these debates about self-organization, emerging properties and snowflakes have nothing to do with functional information. The only objects that exhibit functional information beyond 500 bits are, still, human artifacts and biological objects. Nothing else. Not snowflakes, not viscous oil, not the game of life. Only human artifacts and biological objects.

    Those are the only objects in the whole known universe that exhibit thousands, millions, maybe billions of bits strictly aimed at implementing complex and obvious functions. The only existing instances of complex functional information.
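    The notion of functional information used above can be made concrete with a small calculation. This is only an illustrative sketch: the 1-in-10^35 functional fraction is a purely hypothetical assumption, not a measured value for any real protein.

```python
import math

def functional_information_bits(functional_fraction):
    """Functional information: -log2 of the fraction of all possible
    sequences that implement the function (the smaller the fraction,
    the more bits are needed to specify a functional sequence)."""
    return -math.log2(functional_fraction)

# Purely hypothetical assumption: 1 in 10^35 sequences of some domain
# is functional.
fi = functional_information_bits(1e-35)   # about 116 bits

# For comparison with the 500-bit threshold: how many fully specified
# amino acid positions carry 500 bits, at log2(20) bits per residue?
bits_per_fixed_aa = math.log2(20)         # about 4.32 bits per residue
aa_equivalent = 500 / bits_per_fixed_aa   # about 116 positions
```

    So, under these assumed numbers, the 500-bit threshold corresponds to roughly 116 fully constrained amino acid positions, a small fraction of a typical protein.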

  39. 39
    gpuccio says:

    Bornagain77:

    No. As you know, I absolutely believe in CD, but that is not the issue here. Homology is homology, and divergence is divergence, whatever the model we use to explain them.

    I just wanted to show an example of a protein (RelA), indeed a TF, where both homology (in the DBD) and divergence (in the TADs) are certainly linked to function.

    When I want to “push” for CD, I know how to do that.

  40. 40
    bornagain77 says:

    “I absolutely believe in CD”

    Interesting claim of absolute certainty from you given the discontinuous nature of the fossil record, the discontinuous nature of the genetic evidence, and the fact that no one has ever changed the basic body plan of an organism into another body plan.

    Perhaps, given your poverty of empirical warrant, a bit more modest measure of certainty would be wise on your part?

    For instance, it seems you are holding somewhat to a reductive materialistic framework in your ‘absolute’ certainty about CD, and yet, the failure of reductive materialism to be able to explain the basic form and/or body plan of any particular organism occurs at a very low level. Much lower than DNA itself.
    In the following article entitled ‘Quantum physics problem proved unsolvable: Gödel and Turing enter quantum physics’, which studied the derivation of macroscopic properties from a complete microscopic description, the researchers remark that even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,, The researchers further commented that their findings challenge the reductionists’ point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description.”

    Quantum physics problem proved unsolvable: Gödel and Turing enter quantum physics – December 9, 2015
    Excerpt: A mathematical problem underlying fundamental questions in particle and quantum physics is provably unsolvable,,,
    It is the first major problem in physics for which such a fundamental limitation could be proven. The findings are important because they show that even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,,
    “We knew about the possibility of problems that are undecidable in principle since the works of Turing and Gödel in the 1930s,” added Co-author Professor Michael Wolf from Technical University of Munich. “So far, however, this only concerned the very abstract corners of theoretical computer science and mathematical logic. No one had seriously contemplated this as a possibility right in the heart of theoretical physics before. But our results change this picture. From a more philosophical perspective, they also challenge the reductionists’ point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description.”
    http://phys.org/news/2015-12-q.....godel.html

    In other words, even with a complete microscopic description of an organism, it is impossible for you to have ‘absolute’ certainty about the macroscopic behavior of that organism much less to have ‘absolute’ certainty about CD.

  41. 41
    gpuccio says:

    Bornagain77:

    It’s amazing how much you misunderstand me, even if I have repeatedly tried to explain my views to you.

    1) “Interesting claim of absolute certainty from you given the discontinuous nature of the fossil record, the discontinuous nature of the genetic evidence, and the fact that no one has ever changed the basic body plan of an organism into another body plan.”

    Interesting claims, that have nothing to do with my belief in CD, and about which I can absolutely agree with you. I absolutely believe that the fossil record is discontinuous, that genetic evidence is discontinuous, and that no one has ever changed the basic body plan of an organism into another body plan. And so?

    2) “Perhaps, given your poverty of empirical warrant, a bit more modest measure of certainty would be wise on your part?”

    I don’t believe that scientific certainty is ever absolute. I use “absolutely” to express the strength of my certainty that there is empirical warrant for CD. And I have explained why, many times, even to you. As I have explained many times to you what I mean by CD. But I am not sure that you really listen to me. That’s OK, I believe in free will, as you probably know.

    3) “For instance, it seems you are holding somewhat to a reductive materialistic framework in your ‘absolute’ certainty about CD, and yet, the failure of reductive materialism to be able to explain the basic form and/or body plan of any particular organism occurs at a very low level. Much lower than DNA itself.”

    I am in no way a reductionist, least of all a materialist. My certainty about CD only derives from scientific facts, and from what I believe to be the most reasonable way to interpret them. As I have tried to explain many times.

    Essentially, the reasons why I believe in CD (again, the type of CD that I believe in, and that I have tried to explain to you many times) are essentially of the same type for which I believe in Intelligent Design. There is nothing reductionist or materialist in them. Only my respect for facts.

    For example, I do believe that we do not understand at all how body plans are implemented. You seem to know more. I am happy for you.

    4) “In other words, even with a complete microscopic description of an organism, it is impossible for you to have ‘absolute’ certainty about the macroscopic behavior of that organism much less to have ‘absolute’ certainty about CD.”

    I have just stated that IMO we don’t understand at all how body plans are implemented. Moreover, I don’t believe at all that we have any complete microscopic description of any living organism. We are absolutely (if you allow the word) distant from that. OK. But I still don’t understand what that has to do with CD.

    For the last time: CD, for me, just means that there is very strong evidence that the molecular information in DNA and proteins of already existing species is physically passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on. As proven by the differences in neutral sites, between species.

    I hope this is the last time I have to tell you that.

  42. 42
    bornagain77 says:

    “For the last time: CD, for me, just means that there is very strong evidence that the molecular information in DNA and proteins of already existing species is physically passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on. As proven by the differences in neutral sites, between species.
    I hope this is the last time I have to tell you that.”

    To this in particular,,, “passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on.”

    All new information is ‘designed in the process”???? Please elaborate on exactly what process you are talking about.

    As to examples that falsify the common descent model:

    Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.

    New Paper by Winston Ewert Demonstrates Superiority of Design Model – Cornelius Hunter – July 20, 2018
    Excerpt: Ewert’s three types of data are: (i) sample computer software, (ii) simulated species data generated from evolutionary/common descent computer algorithms, and (iii) actual, real species data.
    Ewert’s three models are: (i) a null model which entails no relationships between any species, (ii) an evolutionary/common descent model, and (iii) a dependency graph model.
    Ewert’s results are a Copernican Revolution moment. First, for the sample computer software data, not surprisingly the null model performed poorly. Computer software is highly organized, and there are relationships between different computer programs, and how they draw from foundational software libraries. But comparing the common descent and dependency graph models, the latter performs far better at modeling the software “species.” In other words, the design and development of computer software is far better described and modeled by a dependency graph than by a common descent tree.
    Second, for the simulated species data generated with a common descent algorithm, it is not surprising that the common descent model was far superior to the dependency graph. That would be true by definition, and serves to validate Ewert’s approach. Common descent is the best model for the data generated by a common descent process.
    Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.
    Where It Counts
    Let me repeat that in case the point did not sink in. Where it counted, common descent failed compared to the dependency graph model. The other data types served as useful checks, but for the data that mattered — the actual, real, biological species data — the results were unambiguous.
    Ewert amassed a total of nine massive genetic databases. In every single one, without exception, the dependency graph model surpassed common descent.
    Darwin could never have even dreamt of a test on such a massive scale. Darwin also could never have dreamt of the sheer magnitude of the failure of his theory. Because you see, Ewert’s results do not reveal two competitive models with one model edging out the other.
    We are not talking about a few decimal points difference. For one of the data sets (HomoloGene), the dependency graph model was superior to common descent by a factor of 10,064. The comparison of the two models yielded a preference for the dependency graph model of greater than ten thousand.
    Ten thousand is a big number. But it gets worse, much worse.
    Ewert used Bayesian model selection which compares the probability of the data set given the hypothetical models. In other words, given the model (dependency graph or common descent), what is the probability of this particular data set? Bayesian model selection compares the two models by dividing these two conditional probabilities. The so-called Bayes factor is the quotient yielded by this division.
    The problem is that the common descent model is so incredibly inferior to the dependency graph model that the Bayes factor cannot be typed out. In other words, the probability of the data set, given the dependency graph model, is so much greater than the probability of the data set given the common descent model, that we cannot type the quotient of their division.
    Instead, Ewert reports the logarithm of the number. Remember logarithms? Remember how 2 really means 100, 3 means 1,000, and so forth?
    Unbelievably, the 10,064 value is the logarithm (base value of 2) of the quotient! In other words, the probability of the data on the dependency graph model is so much greater than that given the common descent model, we need logarithms even to type it out. If you tried to type out the plain number, you would have to type a 1 followed by more than 3,000 zeros. That’s the ratio of how probable the data are on these two models!
    By using a base value of 2 in the logarithm we express the Bayes factor in bits. So the conditional probability for the dependency graph model has a 10,064 advantage over that of common descent.
    10,064 bits is far, far from the range in which one might actually consider the lesser model. See, for example, the Bayes factor Wikipedia page, which explains that a Bayes factor of 3.3 bits provides “substantial” evidence for a model, 5.0 bits provides “strong” evidence, and 6.6 bits provides “decisive” evidence.
    This is ridiculous. 6.6 bits is considered to provide “decisive” evidence, and when the dependency graph model case is compared to the common descent case, we get 10,064 bits.
    But It Gets Worse
    The problem with all of this is that the Bayes factor of 10,064 bits for the HomoloGene data set is the very best case for common descent. For the other eight data sets, the Bayes factors range from 40,967 to 515,450.
    In other words, while 6.6 bits would be considered to provide “decisive” evidence for the dependency graph model, the actual, real, biological data provide Bayes factors of 10,064 on up to 515,450.
    We have known for a long time that common descent has failed hard. In Ewert’s new paper, we now have detailed, quantitative results demonstrating this. And Ewert provides a new model, with a far superior fit to the data.
    https://evolutionnews.org/2018/07/new-paper-by-winston-ewert-demonstrates-superiority-of-design-model/
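    The logarithm arithmetic in the excerpt is easy to verify. A minimal sketch, using only the 10,064-bit figure quoted above:

```python
import math

# Bayes factor expressed in bits (base-2 logarithm), as quoted above.
log2_bayes_factor = 10064

# Number of decimal digits in 2**10064, computed without writing out
# the huge number itself: digits = floor(bits * log10(2)) + 1.
decimal_digits = math.floor(log2_bayes_factor * math.log10(2)) + 1
# This comes to 3,030 digits, i.e. a leading digit followed by more
# than 3,000 further digits, matching the excerpt's description.
```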

    Response to a Critic: But What About Undirected Graphs? – Andrew Jones – July 24, 2018
    Excerpt: The thing is, Ewert specifically chose Metazoan species because “horizontal gene transfer is held to be rare amongst this clade.” Likewise, in Metazoa, hybridization is generally restricted to the lower taxonomic groupings such as species and genera — the twigs and leaves of the tree of life. In a realistic evolutionary model for Metazoa, we can expect to get lots of “reticulation” at lower twigs and branches, but the main trunk and branches ought to have a pretty clear tree-like form. In other words, a realistic undirected graph of Metazoa should look mostly like a regular tree.
    https://evolutionnews.org/2018/07/response-to-a-critic-but-what-about-undirected-graphs/

    This Could Be One of the Most Important Scientific Papers of the Decade – July 23, 2018
    Excerpt: Now we come to Dr. Ewert’s main test. He looked at nine different databases that group genes into families and then indicate which animals in the database have which gene families. For example, one of the nine databases (Uni-Ref-50) contains more than 1.8 million gene families and 242 animal species that each possess some of those gene families. In each case, a dependency graph fit the data better than an evolutionary tree.
    This is a very significant result. Using simulated genetic datasets, a comparison between dependency graphs and evolutionary trees was able to distinguish between multiple evolutionary scenarios and a design scenario. When that comparison was done with nine different real genetic datasets, the result in each case indicated design, not evolution. Please understand that the decision as to which model fit each scenario wasn’t based on any kind of subjective judgement call. Dr. Ewert used Bayesian model selection, which is an unbiased, mathematical interpretation of the quality of a model’s fit to the data. In all cases Dr. Ewert analyzed, Bayesian model selection indicated that the fit was decisive. An evolutionary tree decisively fit the simulated evolutionary scenarios, and a dependency graph decisively fit the computer programs as well as the nine real biological datasets.
    http://blog.drwile.com/this-co.....he-decade/

    Why should mitochondria define species? – 2018
    Excerpt: The particular mitochondrial sequence that has become the most widely used, the 648 base pair (bp) segment of the gene encoding mitochondrial cytochrome c oxidase subunit I (COI),,,,
    The pattern of life seen in barcodes is a commensurable whole made from thousands of individual studies that together yield a generalization. The clustering of barcodes has two equally important features: 1) the variance within clusters is low, and 2) the sequence gap among clusters is empty, i.e., intermediates are not found.,,,
    Excerpt conclusion: , ,The simple hypothesis is that the same explanation offered for the sequence variation found among modern humans applies equally to the modern populations of essentially all other animal species. Namely that the extant population, no matter what its current size or similarity to fossils of any age, has expanded from mitochondrial uniformity within the past 200,000 years.,,,
    https://phe.rockefeller.edu/news/wp-content/uploads/2018/05/Stoeckle-Thaler-Final-reduced.pdf

    Sweeping gene survey reveals new facets of evolution – May 28, 2018
    Excerpt: Darwin perplexed,,,
    And yet—another unexpected finding from the study—species have very clear genetic boundaries, and there’s nothing much in between.
    “If individuals are stars, then species are galaxies,” said Thaler. “They are compact clusters in the vastness of empty sequence space.”
    The absence of “in-between” species is something that also perplexed Darwin, he said.
    https://phys.org/news/2018-05-gene-survey-reveals-facets-evolution.html

  43. 43
    gpuccio says:

    Bornagain77 at #42:

    “All new information is ‘designed in the process”???? Please elaborate on exactly what process you are talking about.”

    It should be clear. However, let’s try again.

    Let’s say that there are 3 main models for how functional information comes into existence in biological beings.

    a) Descent with modifications generated by RV + NS: this is the neo-darwinian model. I absolutely (if you allow the word) reject it. So do you, I suppose.

    b) Descent with designed modifications: this is my model. This is the process I refer to: a process of design, of engineering, which derives new species from what already exists.

    The important point, that justifies the term “descent”, is that, as I have said, the old information that is appropriate is physically passed on from the ancestor to the new species. All the rest, the new functional information, is engineered in the design process.

    So, to be more clear, let’s say that species B appears in natural history at time T. Before it, there exists another species, A, which has some strong similarities to species B.

    Let’s say that, according to my model, species B derives physically from the already existing species A. How does it happen?

    Let’s say that, just as an imaginary example, A and B share about 50% of protein coding genes. The proteins coded by these genes are very similar in the two species, almost identical, at least at the beginning. The reason for that is that the functions implemented by those proteins in the two species are extremely similar.

    But that is only part of the game. Of course, B has a lot of new proteins, or parts of proteins, or simply regulatory parts of the genome, that are not the same as A at all. Those sequences are absolutely functional, but they do things that are specific to B, and do not exist in A. In the same way, many specific functions of A are not needed in B, and so they are not implemented there.

    Now, losing some proteins or some functions is not so difficult. We know that losing information is a very easy task, and requires no special ability.

    But how does all that new functional information arise in B? It did not exist in A, or in any other living organism that existed before time T. It arises in B for the first time, and approximately at time T.

    The obvious answer, in my model, is: it is newly designed functional information. If I did not believe that, I would be in the other field, and not here in ID.

    But the old information, the sequence information that retains its function from A to B? Well, in my model, very simply, it is physically passed on from A to B. That is the meaning of descent in my model. That’s what makes A an ancestor of B, even if a completely new process of design and engineering is necessary to derive B from A.

    Now, you may ask: how does that happen? Of course, we don’t know the details, but we know three important facts:

    1) There are signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split, that demonstrate that they are physically passed on. This is the single most important argument in favour of descent, and I am aware of no possible explanation of this fact outside of physical descent of those sequences.

    2) The new functional information arises often in big jumps, and is almost always very complex. For the origin of vertebrates, I have computed about 1.7 million bits of new functional information, arising in at most 20 million years. RV + NS could never do that, because it totally lacks the necessary probabilistic resources.

    3) The fossil record and the existing genomes and proteomes show no trace of the many functional intermediates that would be necessary for RV + NS to even try something. Therefore, RV + NS did not do it, because there is no trace of what should absolutely be there.

    So, how did design do it, with physical descent?

    Let’s say that we can imagine us doing it. If we were able. What would we do?

    It’s very simple: we would take a few specimens of A, bring them to some lab of ours, and work on them to engineer the new species with our powerful means of genetic engineering. Adding the new functional information to what already exists, and can still be functional in the new project.

    Where? And in what time?

    These are good questions. They are good questions in any case, even if you stick to your model (model c, I think), soon to be described.

    Because species B does appear at time T. And that must happen somewhere. And that must happen in some time window.

    But the details are still to be understood. We know too little.

    But one thing is certain: both space and time are somehow restricted.

    Space is restricted, because of course the new species must appear somewhere. It does not appear at once all over the globe.

    But there is more. Model a, the neo-darwinian model, needs a process that takes place almost everywhere. Why? Because it badly needs as many probabilistic resources as possible. IOWs, it badly needs big numbers.

    Of course, we know very well that no reasonable big number will do. The probabilistic resources simply are not there. Even for bacteria crowding the whole planet for 5 billion years.

    But with small populations, any thought of RV and NS is blatantly doomed from the beginning.

    But design does not work that way. Design does not need big numbers, big populations. Especially if it is mainly top down engineering.

    So, we could very well engineer B working on a relatively small sample of A. In our lab.

    In what time? I really don’t know, but certainly not too much. As you well know, those information jumps are rather sudden in natural history. This is a fact.

    So? 1 minute? 1 year? 1 million years? Interesting questions, but in the end it does not matter much anyway.

    Not instantaneously, I would say. Not in model b, anyway. If it is an engineering process, it needs time, anyway.

    So, what is important about this model?

    Simply that it is the best model that explains facts.

    1) The signatures of neutral variation in conserved sequences are perfectly explained. As those sequences have been passed on as they are from A to B, they keep those signatures. IOWs, if A has existed for 100 million years from some previous split, in those 100 million years neutral variation happens in the sequence, and differentiates that sequence in A from some homologous sequence in A1 (the organism derived from that old split). So, B inherits those changes from A, and if we compare B and A1, we find those differences, as we find them if we compare A and A1. The differences in B are inherited from A as it was 100 million years after the split from A1.

    2) The big jumps in functional information are, of course, explained by the design process, the only type of process that can do those things.

    3) There is no need for functional intermediates in the fossil record or in the genomes. What happens in the lab does not leave traces. We do not need big intermediate populations to be expanded by positive NS, to gain new huge probabilistic resources (as in model a). We just need a few samples, a few intermediates, in a limited time and space. There is no reason to expect any relevant trace from that process.

    Of course, the new engineered species, when it is ready and working, is released into the general environment. IOWs, it is “published”. That’s what we observe in the fossil record, and in the genomes: the release of the new engineered species. Nothing else.

    So, model b, my model, explains all three types of observed facts.
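    The inheritance argument in point 1) can be illustrated with a toy simulation. This is only a sketch under invented parameters (sequence length, substitution counts, and the 100-million-year split are all arbitrary illustration values, not biological data): A1 and A drift independently after a split, B physically inherits A's sequence, and B vs A1 therefore shows essentially the same neutral differences as A vs A1.

```python
import random

random.seed(1)

ALPHABET = "ACGT"

def mutate(seq, n_subs):
    """Apply n_subs random substitutions at random sites (toy neutral drift)."""
    seq = list(seq)
    for _ in range(n_subs):
        i = random.randrange(len(seq))
        seq[i] = random.choice([b for b in ALPHABET if b != seq[i]])
    return "".join(seq)

def differences(x, y):
    """Count sites where the two aligned sequences differ."""
    return sum(a != b for a, b in zip(x, y))

# Ancestral sequence at the A/A1 split.
ancestor = "".join(random.choice(ALPHABET) for _ in range(1000))

# A1 and A drift independently for the same interval (e.g. 100 My).
a1 = mutate(ancestor, 50)
a = mutate(ancestor, 50)

# B physically inherits A's sequence (descent), then drifts a little on its own.
b = mutate(a, 5)

# B vs A1 shows roughly the A-vs-A1 divergence, plus at most B's own small
# extra drift, because B inherited A's accumulated neutral changes.
print(differences(a, a1), differences(b, a1))
```

Under a no-descent model there is no mechanism for B to carry A's specific accumulated neutral changes, which is the point of fact 1).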

    c) No descent at all. This is, I believe, your model.

    What does that mean?

    Well, it can mean sudden “creation” (if the new species appears out of thin air, from nothing), or, more reasonably, engineering from scratch.

    I will not discuss the “creation” aspect. I would not know what to say, from a scientific point of view.

    But I will discuss the “engineering from scratch” model.

    However it is conceived (quick or slow, sudden or gradual), it implies one simple thing: each time, everything is re-engineered from scratch. Even what had already been engineered in previously existing species.

    From what? It’s simple. If it is not creation ex nihilo, “scratch” here can mean only one thing: from inanimate matter.

    IOWs, it means re-doing OOL each time a new species originates.

    OK, I believe there are many arguments against that model, but I will just state here the simplest: it does not explain fact 1).

    Moreover, I would definitely say that all your arguments against descent, however good (IMO, some are good, some are not), are always arguments against model a). They have no relevance at all against model b), my model.

    Once and for all, I absolutely (if you allow the word) reject model a).

    That said, I am rather sure that you will stick to your model, model c). That’s fine for me. But I wanted to clarify as much as possible.

  44.
    bornagain77 says:

    What are the falsification criteria of your model? It seems you are lacking rigid criteria. Not to mention lacking experimental warrant that what you propose is even possible.

    “No descent at all. This is, I believe, your model.”

    I do not believe in UCD, but I do believe in diversification from an initially created “kind” by devolutionary processes. i.e. Behe “Darwin Devolves” and Sanford “Genetic Entropy”.

    I note, especially in the Cambrian, we are talking gargantuan jumps in the fossil record. Your model is not parsimonious to such gargantuan jumps.

    Moreover, your genetic evidence is not nearly as strong as you seem to think it is. And even if it were, it is not nearly enough to explain ‘biological form’. For that you need to incorporate recent finding from quantum biology:

    How Quantum Mechanics and Consciousness Correlate – video (how quantum information theory and molecular biology correlate – 23 minute mark)
    https://www.youtube.com/watch?v=4f0hL3Nrdas

    Darwinian Materialism vs. Quantum Biology – Part II – video
    https://www.youtube.com/watch?v=oSig2CsjKbg

  45.
    bornagain77 says:

    correct time mark is 27 minute mark

    How Quantum Mechanics and Consciousness Correlate – video (how quantum information theory and molecular biology correlate – 27 minute mark)
    https://youtu.be/4f0hL3Nrdas?t=1634

  46.
    gpuccio says:

    Bornagain77:

    I quote myself:

    “That said, I am rather sure that you will stick to your model, model c). That’s fine for me. But I wanted to clarify as much as possible.”

    The only thing in my model that explains biological form is design. Maybe it is not enough, but it is certainly necessary.

    I want to be clear: I agree with you about the importance of consciousness and of quantum mechanics. But what has that to do with my argument?

    Do you believe that functional information is designed? I do. Design comes from consciousness. Consciousness interacts with matter through some quantum interface. That’s exactly what I believe.

    My model is not parsimonious and requires gargantuan jumps? Is it worse than the initial creation of kinds?

    However, for me we can leave it at that. As explained, I was not even implying CD in my initial discussion here.

  47.
    bornagain77 says:

    as to:

    1) There are signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split, that demonstrate that they are physically passed on. This is the single most important argument in favour of descent, and I am aware of no possible explanation of this fact outside of physical descent of those sequences.,,,
    OK, I believe there are many arguments against that model, but I will just state here the simplest: it does not explain fact 1)

    Again, the argument is not nearly as strong as you seem to think it is. In particular: you could say that the heart of this “shared error” argument is the idea that “lightning doesn’t strike twice.” The identical, harmful mutations, in different species, could not have arisen independently. Instead they must have arisen only once, and then were inherited from a common ancestor.

    The problem, of course, there is no reason to make this assumption. The logic made sense for written documents, but the species are not ancient manuscripts or homework assignments. They are species, and species are different.

    In fact repeated designs found in otherwise distant species are ubiquitous in biology. Listening to evolutionists one would think the species fall into an evolutionary pattern with a few minor exceptions here and there. But that is overwhelmingly false. From the morphological to the molecular level, repeated designs are everywhere, and they take on many different forms.

    Shared Errors: An Open Letter to BioLogos on the Genetic Evidence, Cont.
    Cornelius Hunter – June 1, 2016
    In recent articles (here, here and here) I have reviewed BioLogos Fellow Dennis Venema’s articles (here, here and here) which claimed that (1) the genomes of different species are what we would expect if they evolved, and (2) in particular the human genome is compelling evidence for evolution.

    Venema makes several confident claims that the scientific evidence strongly supports evolution. But as I pointed out Venema did not reckon with an enormous body of contradictory evidence. It was difficult to see how Venema could make those claims. Fortunately, however, we were able to appeal to the science. Now, as we move on to Venema’s next article, that will all change.

    In this article, Venema introduces a new kind of genetic evidence for evolution. Again, Venema’s focus is on, but not limited to, human evolution. Venema’s argument is that harmful mutations shared amongst different species, such as the human and chimpanzee, are powerful and compelling evidence for evolution. These harmful mutations disable a useful gene and, importantly, the mutations are identical.

    Are not such harmful, shared mutations analogous to identical typos in the term papers handed in by different students, or in historical manuscripts? Such typos are telltale indicators of a common source, for it is unlikely that the same typo would have occurred independently, by chance, in the same place, in different documents. Instead, the documents share a common source.

    Now imagine not one, but several such typos, all identical, in the two manuscripts. Surely the evidence is now overwhelming that the documents are related and share a common source.

    And just as a shared, identical, typos are a telltale indicator of a common source, so too must shared harmful mutations be proofs of a common ancestor. It is powerful and compelling evidence for common descent. It is, explains Venema, “one of the strongest pieces of evidence in favor of common ancestry between humans and chimpanzees (and other organisms).”

    There is only one problem. As we have explained so many times, the argument is powerful because the argument is religious. This isn’t about science.

    The Evidence Does Not Support the Theory

    The first hint of a problem should be obvious: harmful mutations are what evolution is supposed to kill off. The whole idea behind evolution is that improved designs make their way into the population via natural selection, and by the same logic natural selection (or purifying selection in this case) filters out the harmful changes. Therefore finding genetic sequence data that must be interpreted as harmful mutations weighs against evolutionary theory.

    Also, there is the problem that any talk of how a gene proves evolutionary theory is avoiding the problem that evolution fails to explain how genes arose in the first place. Evolution claiming proof in the details of gene sequences seems to be putting the cart before the horse.

    No Independent Changes

    You could say that the heart of this “shared error” argument is the idea that “lightning doesn’t strike twice.” The identical, harmful mutations, in different species, could not have arisen independently. Instead they must have arisen only once, and then were inherited from a common ancestor.

    The problem, of course, there is no reason to make this assumption. The logic made sense for written documents, but the species are not ancient manuscripts or homework assignments. They are species, and species are different.

    In fact repeated designs found in otherwise distant species are ubiquitous in biology. Listening to evolutionists one would think the species fall into an evolutionary pattern with a few minor exceptions here and there. But that is overwhelmingly false. From the morphological to the molecular level, repeated designs are everywhere, and they take on many different forms.

    The problem is that these repeated designs appear in species so distant that, according to evolutionary theory, their common ancestor could not have had that design. The human and squid have similar vision systems, but their purported common ancestor, a much simpler and more ancient organism, would have had no such vision system. Evolutionists are forced to say that incredibly complex designs must have arisen, yes, repeatedly and independently.

    And this must have occurred over and over in biology. It would be a challenge simply to document all of the instances in which evolutionists agreed to an independent origins. For evolutionists then to insist that similar designs in allied species can only be explained by common descent amounts to having it both ways.

    Bad Designs

    This “shared error” argument also relies on the premise that the structures in question are bad designs. In this case, the mutations are “harmful,” and so the genes are “broken.” And while that may well be true, it is a premise with a very bad track record. The history of evolutionary thought is full of claims of bad, inefficient, useless designs which, upon further research were found to be, in fact, quite useful. Simply from a history of science perspective, this is a dangerous argument to be making.

    Epicureanism

    The “shared error” argument is bad science and bad history, but it remains a very strong argument. This is because its strength does not come from science or history, but rather from religion. As I have explained many times, evolution is a religious theory, and the “shared error” argument is no different. This is why the scientific and historical problems don’t matter. Venema explains:

    The fact that different mammalian species, including humans, have many pseudogenes with multiple identical abnormalities (mutations) shared between them is a problem for any sort of non-evolutionary, special independent creation model.

    This is a religious argument, evolution as a referendum on a “special independent creation model.” It is not that the species look like they arose by random chance, it is that they do not look like they were created. Venema and the evolutionists are certain that God wouldn’t have directly created this world. There must be something between the Creator and creation — a Plastik Nature if you will. And if Venema and the evolutionists are correct in their belief then, yes, evolution must be true. Somehow, some way, the species must have arisen naturalistically.

    This argument is very old. In antiquity it drove the Epicureans to conclude the world must have arisen on its own by random motion. Today evolutionists say the same thing, using random mutations as their mechanism.

    Needed: An Audit

    Darwin’s book was loaded with religious arguments. They were the strength of his otherwise weak thesis, and they have always been the strength behind evolutionary thought. No longer can we appeal to the science, for it is religion that is doing the heavy lifting.

    Yet evolutionists claim the high ground of objective, empirical reasoning. Venema admits that some other geneticists do not agree with this “shared error” argument but, he warns, they do so “for religious reasons.”

    We have also seen this many times. Evolutionists make religious claims and literally in the next moment lay the blame on the other guy. This is the world according to the Warfare Thesis. We need an audit of our thinking.
    https://evolutionnews.org/2016/06/shared_errors_a/

    and

    In Arguments for Common Ancestry, Scientific Errors Compound Theoretical Problems
    Evolution News | @DiscoveryCSC
    May 16, 2016
    (6) Swamidass points to pseudogenes as evidence for common ancestry, even though many pseudogenes show evidence of function, including the vitellogenin pseudogene that Swamidass cites.

    Swamidass repeatedly cites Dennis Venema’s arguments for common ancestry based upon pseudogenes. However, as we’ve discussed here in the past, quite a few pseudogenes have turned out to be functional, and we’re discovering more all the time. It’s only recently that we’ve had the technology to study the functions of pseudogenes, so we are just at the beginning of doing so. While it’s true that there’s a lot about pseudogenes we still don’t know, an RNA Biology paper observes, “The study of functional pseudogenes is just at the beginning.” And it predicts that “more and more functional pseudogenes will be discovered as novel biological technologies are developed in the future.” The paper concludes that functional pseudogenes are “widespread.” Indeed, when we carefully study pseudogenes, we often do find function. One paper in Annual Review of Genetics tellingly observed: “Pseudogenes that have been suitably investigated often exhibit functional roles.”

    One of Swamidass’s central examples mirrors Dennis Venema’s argument that the vitellogenin pseudogene in humans demonstrates we’re related to egg-laying vertebrates like fish or reptiles. But a Darwin-doubting scientist was willing to dig deeper. Good genetic evidence now indicates that what Dennis Venema calls the “human vitellogenin pseudogene” is really part of a functional gene, as one technical paper by an ID-friendly creationist biologist has shown.
    https://evolutionnews.org/2016/05/in_arguments_fo/

  48.
    gpuccio says:

    Bornagain77:

    My argument is not about shared errors. It is about neutral mutations at neutral sites, grossly proportional to evolutionary split times. It is about the ka/ks ratio and the saturation of neutral sites after a few hundred million years. I have made the argument in great detail in the past, with examples, but I have no intention to repeat all the work now.
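    For readers unfamiliar with the Ka/Ks idea mentioned above, here is a crude counting sketch. It is only an illustration with an invented toy alignment and a minimal hand-picked codon table (it is not the Nei-Gojobori method, and does not normalize by the number of synonymous/nonsynonymous sites): synonymous differences leave the protein unchanged and can drift neutrally, while nonsynonymous differences change the protein and are typically constrained.

```python
# Minimal codon table covering only the codons used in the toy example below.
CODON_TABLE = {
    "TTT": "F", "TTC": "F",
    "CTT": "L", "CTC": "L",
    "GCT": "A", "GCC": "A",
    "AAA": "K", "AGA": "R",
}

def codon_diffs(seq1, seq2):
    """Count synonymous vs. nonsynonymous codon differences in a toy alignment."""
    syn = nonsyn = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 == c2:
            continue
        if CODON_TABLE[c1] == CODON_TABLE[c2]:
            syn += 1      # same amino acid: synonymous change
        else:
            nonsyn += 1   # amino acid changed: nonsynonymous change
    return syn, nonsyn

# Two toy aligned coding sequences differing at three codons.
s1 = "TTTCTTGCTAAA"
s2 = "TTCCTCGCTAGA"

print(codon_diffs(s1, s2))  # -> (2, 1): two synonymous, one nonsynonymous
```

A low ratio of nonsynonymous to synonymous change over large real alignments is what signals functional constraint on the protein, while the synonymous (neutral) sites record elapsed time.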

    By the way, I would be cautious in accepting everything that Cornelius Hunter says, as you seem to do. I agree with him many times. But many other times I fully disagree.

  49.
    bornagain77 says:

    “I would be cautious in accepting everything that Cornelius Hunter says, as you seem to do. I agree with him many times. But many other times I fully disagree.”

    Like when he contradicts you? 🙂

    Though you tried to downplay it, your argument from supposedly ‘neutral variations’ is VERY similar to the shared error argument. As such, for reasons listed above, it is not nearly as strong as you seem to presuppose.

    It is apparent that you believe the variations were randomly generated and therefore you are basically claiming that “lightning doesn’t strike twice”, which is exactly the argument that Dr. Hunter critiqued.

    Moreover, if anything, we now have far more evidence of mutations being ‘directed’ than we do of them being truly random.

    You said you could think of no other possible explanation. I hold that directed mutations are another possible explanation, one that is far more parsimonious to the overall body of evidence than your explanation of a Designer, i.e. God, creating a brand new species without bothering to correct supposed neutral variations and/or supposed shared errors.

  50.
    gpuccio says:

    Bornagain77:

    I disagree with Cornelius Hunter when I think he is wrong. In that sense, I treat him like anyone else. You seem to believe that he is always right. I don’t. Many times I have found that he is wrong in what he says.

    And no, my argument about neutral variation has nothing to do with the argument of shared errors, or with the idea that “lightning doesn’t strike twice”. My argument is about differences, not similarities. I think you don’t understand it. But that’s not a problem.

  51.
    bornagain77 says:

    No, I do not think Dr. Cornelius Hunter is ALWAYS right. But I certainly think he is right in his critique of Swamidass. Whereas I don’t think you are always wrong. I just think you are, in this instance, severely mistaken in one or more of your assumptions behind your belief in common descent.

    Your model is, from what I can tell, severely convoluted. If you presuppose randomness in your model at any instance prior to the design input from God to create a new family of species, that is one false assumption that would undermine your claim. I can provide references if need be.

  52.
    gpuccio says:

    To all:

    As usual, the levels of regulation and crosstalk of this NF-kB system grow each time I make a Pubmed search.

    We are all interested, of course, in long non coding RNAs. Well, this paper is about their role in NF-kB signaling:

    Lnc-ing inflammation to disease

    https://www.ncbi.nlm.nih.gov/pubmed/28687714

    Abstract
    Termed ‘master gene regulators’ long ncRNAs (lncRNAs) have emerged as the true vanguard of the ‘noncoding revolution’. Functioning at a molecular level, in most if not all cellular processes, lncRNAs exert their effects systemically. Thus, it is not surprising that lncRNAs have emerged as important players in human pathophysiology. As our body’s first line of defense upon infection or injury, inflammation has been implicated in the etiology of several human diseases. At the center of the acute inflammatory response, as well as several pathologies, is the pleiotropic transcription factor NF-κB. In this review, we attempt to capture a summary of lncRNAs directly involved in regulating innate immunity at various arms of the NF-κB pathway that have also been validated in human disease. We also highlight the fundamental concepts required as lncRNAs enter a new era of diagnostic and therapeutic significance.

    The paper, unfortunately, is not open access. It is interesting, however, that lncRNAs are now considered “master gene regulators”.

  53.
    gpuccio says:

    Bornagain77:

    OK, it’s too easy to be right in criticizing Swamidass! 🙂 (Just joking, just joking… but not too much)

    Just to answer your observations about randomness: I think that most mutations are random, unless they are guided by design. I am not sure that I understand what your point is. Do you believe they are guided? I also believe that some mutations are guided, but that is a form of design.

    If they are not guided, how can you describe the system? If you cannot describe it in terms of necessity (and I don’t think you can), some probability distribution is the only remaining option. Again, I don’t understand what you really mean.

    But of course the mutations (if they are mutations) that generate new functional information are not random at all. They must be guided, or intelligently selected.

    As you know, I cannot debate God in this context. I can only do what ID theory allows us to do: recognize events where a design inference is absolutely (if you allow the word) warranted.

  54.
    gpuccio says:

    Bornagain77:

    Moreover, the mechanisms described by Behe in Darwin Devolves are the known mechanisms of NS. They can certainly create some diversification, but essentially they give limited advantages in very special contexts, and they are essentially very simple forms of variation. They certainly cannot explain the emergence of new species, least of all the emergence of new complex functional information, like new functional proteins.

    So, do you believe that all relevant functional information is generated when “kinds” are created? And when would that happen?

    Just to understand.

  55.
    bornagain77 says:

    Gp states

    I think that most mutations are random,

    And yet the vast majority of mutations are now known to be ‘directed’

    How life changes itself: the Read-Write (RW) genome. – 2013
    Excerpt: Research dating back to the 1930s has shown that genetic change is the result of cell-mediated processes, not simply accidents or damage to the DNA. This cell-active view of genome change applies to all scales of DNA sequence variation, from point mutations to large-scale genome rearrangements and whole genome duplications (WGDs). This conceptual change to active cell inscriptions controlling RW genome functions has profound implications for all areas of the life sciences.
    http://www.ncbi.nlm.nih.gov/pubmed/23876611

    WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? Fully Random Mutations – Kevin Kelly – 2014
    Excerpt: What is commonly called “random mutation” does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it.
    On the contrary, there’s much evidence that genetic mutation vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, and as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.
    http://edge.org/response-detail/25264

    Duality in the human genome – November 28, 2014
    Excerpt: According to the researchers, mutations of genes are not randomly distributed between the parental chromosomes. They found that 60 percent of mutations affect the same chromosome set and 40 percent both sets. Scientists refer to these as cis and trans mutations, respectively. Evidently, an organism must have more cis mutations, where the second gene form remains intact. “It’s amazing how precisely the 60:40 ratio is maintained. It occurs in the genome of every individual – almost like a magic formula,” says Hoehe.
    http://medicalxpress.com/news/.....enome.html

    i.e. Directed mutations are ‘another possible explanation’.

    As to, “do you believe that all relevant functional information is generated when “kinds” are created? And when would that happen?”

    I believe in ‘top down’ creation of ‘kinds’ with genetic entropy, as outlined by Sanford and Behe, following afterwards. As to exactly where that line should be, Behe has recently revised his estimate:

    “I now believe it, (the edge of evolution), is much deeper than the level of class. I think it actually goes down to the level of family”
    Michael Behe: Darwin Devolves – video – 2019
    https://www.youtube.com/watch?v=zTtLEJABbTw
    In this bonus footage from Science Uprising, biochemist Michael Behe discusses his views on the limits of Darwinian explanations and the evidence for intelligent design in biology.

    I note that my model is Theologically modest in that I hold to traditional concepts of the omniscience of God and God creating ‘kinds’ that reproduce after themselves, whereas, humorously, your model is all over the place Theologically speaking.

    Your model, Theologically speaking, humorously reminds me of this old Johnny Cash song:

    JOHNNY CASH – ONE PIECE AT A TIME – CADILLAC VIDEO
    https://www.youtube.com/watch?v=Hb9F2DT8iEQ

  56.
    gpuccio says:

    Bornagain77:

    Most mutations are random. There can be no doubt about that. Of course, that does not exclude that some are directed. A directed mutation is an act of design.

    I perfectly agree with Behe that the level of necessary design intervention is at least at the family level.

    The three quotes you give have nothing to do with directed mutations and design. In particular, the author of the second one is frankly confused. He writes:

    Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it.

    On the contrary, there’s much evidence that genetic mutation vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, and as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.

    This is simple ignorance. The existence of patterns does not mean that a system is not probabilistic. It just means that there are also necessity effects.

    He makes his error clear saying:

    “Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.”

    Now, “a higher chance” is of course a probabilistic statement. A random distribution is not necessarily one where all events have the same probability of occurring; that special case is called a uniform probability distribution. If some events (like mutations near a place where mutations have already occurred) have a higher probability of occurring, that is still a random distribution, just one where the probability of the events is not uniform.

    Things become even worse. He writes:

    “While we can’t say mutations are random, we can say there is a large chaotic component, just as there is in the throw of a loaded dice. But loaded dice should not be confused with randomness because over the long run—which is the time frame of evolution—the weighted bias will have noticeable consequences.”

    But of course a loaded die is a random system. Let’s say that the die is loaded so that 1 has a higher probability of occurring. So the probabilities of the six possible outcomes, instead of being all 1/6 (uniform distribution), are, for example, 0.2 for 1 and 0.16 for each of the other outcomes.

    So, the die is loaded. And so? Isn’t it still a random system?

    Of course it is. Each event is completely probabilistic: we cannot anticipate it with a necessity rule. But the outcome 1 is more probable than the others.
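    The loaded-die point can be checked directly in a few lines of code. This is a minimal Python sketch (the probabilities 0.2 and 0.16 are taken from the comment above): no rule predicts any single roll, yet the long-run frequencies converge on the non-uniform distribution, which is exactly what makes the system random but not uniform.

```python
import random

# Loaded die from the example above: outcome 1 has probability 0.2,
# the other five outcomes 0.16 each (0.2 + 5 * 0.16 = 1.0).
weights = [0.2, 0.16, 0.16, 0.16, 0.16, 0.16]

random.seed(0)  # fixed seed so the demonstration is reproducible
rolls = random.choices([1, 2, 3, 4, 5, 6], weights=weights, k=100_000)

# No necessity rule anticipates any individual roll, but the long-run
# frequencies reflect the (non-uniform) probability distribution.
freq = {face: rolls.count(face) / len(rolls) for face in range(1, 7)}
print(freq)  # face 1 appears in roughly 20% of rolls, the others in ~16%
```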

    That article is simply a pile of errors and confusion. Whoever understands something about probability can easily see that.

    Unfortunately you tend to quote a lot of things, but it seems that you do not always evaluate them critically.

    Again, I propose: let’s leave it at that. This discussion does not seem to be leading anywhere.

  57. 57
    bornagain77 says:

    I, of course, disagree with you.

    The third article,,, “According to the researchers, mutations of genes are not randomly distributed between the parental chromosomes. They found that 60 percent of mutations affect the same chromosome set and 40 percent both sets.,,, “It’s amazing how precisely the 60:40 ratio is maintained. It occurs in the genome of every individual – almost like a magic formula,” says Hoehe.”

    That is fairly straightforward. And again, Directed mutations are ‘another possible explanation’. Your ‘convoluted’ model is not nearly as robust as you have presupposed.

  58. 58
    hazel says:

    Good post at 56, gp.

    Also, it is my understanding that when someone says “mutations are random” they mean there is no causal connection between the mutation and whatever eventual effects and possible benefits it might have for the organism. “Mutations are random” doesn’t refer to the causes of the mutations, I don’t think.

  59. 59
    ET says:

    gpuccio:

    Most mutations are random. There can be no doubt about that.

    I doubt it. I would say most are directed and only some are happenstance occurrences. See Spetner, “Not By Chance”, 1997. Also Shapiro, “Evolution: a view from the 21st Century”. And:

    He [the Designer] indeed seems to have “carefully crafted” information in His species giving them the ability to respond to environmental stimuli to alter their own genome to adapt to new environments. He then evidently let them wander where they will with the ability to adapt.- Dr. Lee Spetner “the Evolution Revolution” p 108

    Just think about it- a Designer went through all of the trouble to produce various living organisms and place them on a changing planet in a changing universe. But the Designer is then going to leave it mostly to chance how those organisms cope with the changes?

    It just makes more sense that organisms were intelligently designed with the ability to adapt and evolve, albeit with genetic entropy creeping in.

  60. 60
    ET says:

    “Mutations are random” means they are accidents, errors and mistakes. They were not planned and just happened to happen due to the nature of the process. Yes, x-rays may have caused the damage that produced the errors but the changes were spontaneous and unpredictable as to which DNA sequences, if any, would have been affected.

  61. 61
    bornagain77 says:

    Excellent point at 59 ET. Isn’t Spetner’s model called the ‘Non-Random’ Evolutionary Hypothesis?

    Spetner goes through many examples of non-random evolutionary changes that cannot be explained in a Darwinian framework.
    https://evolutionnews.org/2014/10/the_evolution_r/

    Gloves Off — Responding to David Levin on the Nonrandom Evolutionary Hypothesis
    Lee M. Spetner
    September 26, 2016
    In the book, I present my nonrandom evolutionary hypothesis (NREH) that accounts for all the evolution that has been actually observed and which is not accounted for by modern evolutionary theory (the Modern Synthesis, or MS). Levin ridicules the NREH but does not refute it. There is too much evidence for it. A lot of evidence is cited in the book, and there is considerably more that I could add. He ridicules what he cannot refute.
    Levin calls the NREH Lamarckian. But it differs significantly from Lamarkism. Lamarck taught that an animal acquired a new capability — either an organ or a modification thereof — if it had a need for it. He offered, however, no mechanism for that capability. Because Lamarck’s theory lacked a mechanism, the scientific community did not accept it. The NREH, on the other hand, teaches that the organism has an endogenous mechanism that responds to environmental stress with the activation of a transposable genetic element and often leads to an adaptive response. How this mechanism arose is obscure at present, but its operation has been verified in many species.,,,
    https://evolutionnews.org/2016/09/gloves_off_-_r/

  62. 62
    ET says:

    Thank you, bornagain77. And yes- the non-random evolutionary hypothesis featuring built-in responses to environmental cues.

  63. 63
    gpuccio says:

    Hazel:

    In a strict sense, a random system is one whose events cannot be anticipated by a definite law, but can be reasonably described by a probability distribution.

    Of course, it is absolutely true that in that case “there is no causal connection between the mutation and whatever eventual effects and possible benefits it might have for the organism”. I would describe that aspect saying that the system, as a whole, is blind to those results.

    Randomness is a concept linked to our way of describing the system. Random systems, like the tossing of a coin, are in essence deterministic, but we have no way to describe them in a deterministic way.

    The only exception could be the intrinsic randomness of the wave function collapse in quantum mechanics, in the interpretations where it is really considered intrinsic.
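    The idea that random systems are deterministic underneath, and that randomness is a feature of our description, is exactly how pseudo-random generators work. The sketch below is illustrative (the multiplier and increment are the classic Numerical Recipes LCG constants; the seed is arbitrary): a fully deterministic recurrence whose output, absent knowledge of the seed, is best described by a uniform probability distribution.

```python
# A linear congruential generator: a deterministic rule that we
# nevertheless describe probabilistically.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random values in [0, 1) from a deterministic recurrence."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m  # the entire "randomness" is this fixed rule
        out.append(x / m)
    return out

values = lcg(seed=12345, n=5)
print(values)
# Knowing the seed, every value is exactly reproducible (deterministic);
# not knowing it, the sequence is best modeled as uniform on [0, 1).
assert lcg(12345, 5) == values  # same seed, same sequence: no intrinsic chance
```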

  64. 64
    gpuccio says:

    ET:

    “I doubt it. I would say most are directed and only some are happenstance occurrences”.

    I beg to differ. Most mutations that we observe, maybe all, are random.

    Of course, if the functional information we observe in organisms was generated by mutations, those mutations were probably guided. But we cannot observe that process directly, or at least I am not aware that it has been observed.

    Instead, we observe a lot of more or less spontaneous mutations that are really random. Many of them generate diseases, often in real time.

    Radiation and toxic substances dramatically increase the rate of random mutations, and the frequency of certain diseases or malformations. We know that very well. And yet, no law can anticipate when and how those mutations will happen. We just know that they are more common. The system is still probabilistic, even if we can detect the effect of specific causes.
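    That distinction, a detectable cause that shifts the rate while each individual event stays unpredictable, can be sketched numerically. The rates below (2 and 6 mutations per genome per generation) are purely hypothetical numbers chosen for illustration, not measured values; mutation counts are commonly modeled as Poisson events.

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) count via Knuth's multiplication algorithm."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)  # fixed seed for reproducibility
baseline = [poisson_sample(2.0, rng) for _ in range(10_000)]    # assumed rate 2
irradiated = [poisson_sample(6.0, rng) for _ in range(10_000)]  # assumed rate 6

# The cause (e.g. irradiation) shifts the average, but no law anticipates
# any individual count: the system has detectable causes and is still random.
print(sum(baseline) / len(baseline))      # close to 2
print(sum(irradiated) / len(irradiated))  # close to 6
```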

    I don’t know Spetner in detail, but it seems that he believes that most functional information derives from some intelligent adaptation of existing organisms.

    Again, I beg to differ. It is certainly true that “all the evolution that has been actually observed and which is not accounted for by modern evolutionary theory” needs some explanation, but the explanation is active design, not adaptation.

    I am not saying that adaptation does not exist, or does not have some important role. We can see good examples, for example in bacteria (the plasmid system, just to mention one instance).

    Of course a complex algorithm can generate some new information by computing new data that come from the environment, but the ability to adapt depends on the specific functional information that is already in the system, and therefore has very strict limitations.

    Adaptation can never generate a lot of new original functional information.

    Let’s take a simple example: ATP synthase, again.

    There is no adaptation system in bacteria that could have found the specific sequences of the many complex components of the system. It is completely out of the question.

    And yet, ATP synthase has existed in bacteria for billions of years, and is still largely similar in humans.

    This is of course the result of design, not adaptation. The same can be said for body plans, all complex protein networks, and I agree with Behe that families of organisms are already levels of complexity that scream design. Adaptation, even for an already complex organism, cannot in any way explain those things.

    It is true that the mutations we observe are practically always random. It is true that they are often deleterious, or neutral. More often neutral or quasi neutral. We know that. We see those mutations happen all the time.

    Achondroplasia, for example, which is the most common cause of dwarfism, is a genetic disease that (I quote from Wikipedia for simplicity):

    “is due to a mutation in the fibroblast growth factor receptor 3 (FGFR3) gene.[3] In about 80% of cases this occurs as a new mutation during early development.[3] In the other cases it is inherited from one’s parents in an autosomal dominant manner.”

    IOWs, in 80% of cases the disease is due to a new mutation, one that was not present in the parents.

    If you look at the Exac site:

    http://exac.broadinstitute.org/

    you will find the biggest database of variations in the human genome.

    Random mutations that generate neutral variation are facts. They can be observed, their rate can be measured with some precision. There is absolutely no scientific reason to deny that.

    So, to sum up:

    a) The mutations we observe every day are random, often neutral, sometimes deleterious.

    b) The few cases where those mutations generate some advantage, as well argued by Behe, are cases of loss of information in complex structures that, by chance, confers some advantage in specific environments (see antibiotic resistance). All those variations are simple. None of them generates any complex functional information.

    c) The few cases of adaptation by some active mechanism that are in some way documented are very simple too. Nylonase, for example, could be one of them. The ability of viruses to change at very high rates could be another one.

    d) None of those reasonings can help explain the appearance, throughout natural history, of new complex functional information, in the form of new functional proteins and protein networks, new body plans, new functions, new regulations. None of those reasonings can explain OOL, or eukaryogenesis, or the transition to vertebrates. None of them can even start to explain ATP synthase, or the immune system, or the nervous system in mammals. And so on, and so on.

    e) All these things can only be explained by active design.

    This is my position. This is what I firmly believe.

    That said, if you want, we can leave it at that.

  65. 65
    OLV says:

    GP @52:

    ” the levels of regulation and crosstalk of this NF-kB system grow each time I make a Pubmed search”

    Are you surprised? 🙂

    This crosstalk concept is very interesting indeed.

  66. 66
    gpuccio says:

    OLV:

    “Are you surprised?”

    No. 🙂

    But, of course, self-organization can easily explain all that! 🙂

  67. 67
    gpuccio says:

    OLV and all:

    This is another paper about lncRNAs and NF-kB:

    Long non-coding RNA: a versatile regulator of the nuclear factor-kB signalling circuit

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5343356/

    This is open access.

    SUMMARY

    The nuclear factor-kB (NF-kB) family of transcription factors play an essential role for the regulation of inflammatory responses, immune function and malignant transformation. Aberrant activity of this signalling pathway may lead to inflammation, autoimmune diseases and oncogenesis. Over the last two decades great progress has been made in the understanding of NF-kB activation and how the response is counteracted for maintaining tissue homeostasis. Therapeutic targeting of this pathway has largely remained ineffective due to the widespread role of this vital pathway and the lack of specificity of the therapies currently available. Besides regulatory proteins and microRNAs, long non-coding RNA (lncRNA) is emerging as another critical layer of the intricate modulatory architecture for the control of the NF-kB signalling circuit. In this paper we focus on recent progress concerning lncRNA-mediated modulation of the NF-kB pathway, and evaluate the potential therapeutic uses and challenges of using lncRNAs that regulate NF-kB activity.

  68. 68
    gpuccio says:

    OLV and all:

    Here is a database of known human lncRNAs:

    https://lncipedia.org/

    It includes, at present, data for 127,802 transcripts and 56,946 genes. A joy for the fans of junk DNA! 🙂

    Let’s look at one of these strange objects.

    MALAT-1 is one of the lncRNAs described in the paper at the previous post. Here is what the paper says:

    MALAT1
    Metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) is a highly conserved lncRNA whose abnormal expression is considered to correlate with the development, progression and metastasis of multiple cancer types. Recently we reported the role of MALAT1 in regulating the production of cytokines in macrophages. Using PMA-differentiated macrophages derived from the human THP1 monocyte cell line, we showed that following stimulation with LPS, a ligand for the innate pattern recognition receptor TLR4, MALAT1 expression is increased in an NF-kB-dependent manner. In the nucleus, MALAT1 interacts with both p65 and p50 to suppress their DNA binding activity and consequently attenuates the expression of two NF-kB-responsive genes, TNF-a and IL-6. This finding is in agreement with a report based on in silico analysis predicting that MALAT1 could influence NF-kB/RelA activity in the context of epithelial–mesenchymal transition. Therefore, in LPS-activated macrophages MALAT1 is engaged in the tight control of the inflammatory response through interacting with NF-kB, demonstrating for the first time its role in regulating innate immunity-mediated inflammation. As MALAT1 is capable of binding hundreds of active chromatin sites throughout the human genome, the function and mechanism of action so far uncovered for this evolutionarily conserved lncRNA may be just the tip of an iceberg.

    Emphasis mine, as usual.

    Now, if we look for MALAT-1 in the database above linked, we find 52 transcripts. The first one, MALAT1:1, has a size of 12819 nucleotides. Not bad! 🙂

    342 papers quoted about this one transcript.

  69. 69
    bornagain77 says:

    Gp adamantly states,

    I beg to differ. Most mutations that we observe, maybe all, are random.

    And yet Shapiro adamantly begs to differ,,,

    “It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant non-random patterns”
    James Shapiro – Evolution: A View From The 21st Century – (Page 82)

    Noble also begs to differ

    Physiology is rocking the foundations of evolutionary biology – Denis Noble – 17 MAY 2013
    Excerpt: The ‘Modern Synthesis’ (Neo-Darwinism) is a mid-20th century gene-centric view of evolution, based on random mutations accumulating to produce gradual change through natural selection.,,, We now know that genetic change is far from random and often not gradual.,,,
    http://onlinelibrary.wiley.com.....4/abstract
    – Denis Noble – President of the International Union of Physiological Sciences

    Richard Sternberg also begs to differ

    Discovering Signs in the Genome by Thinking Outside the BioLogos Box – Richard Sternberg – March 17, 2010
    Excerpt: The scale on the x-axis is the same as that of the previous graph–it is the same 110,000,000 genetic letters of rat chromosome 10. The scale on the y-axis is different, with the red line in this figure corresponding to the distribution of rat-specific SINEs in the rat genome (i.e., ID sequences). The green line in this figure, however, corresponds to the pattern of B1s, B2s, and B4s in the mouse genome….
    *The strongest correlation between mouse and rat genomes is SINE linear patterning.
    *Though these SINE families have no sequence similarities, their placements are conserved.
    *And they are concentrated in protein-coding genes.,,,
    ,,, instead of finding nothing but disorder along our chromosomes, we are finding instead a high degree of order.
    Is this an anomaly? No. As I’ll discuss later, we see a similar pattern when we compare the linear positioning of human Alus with mouse SINEs. Is there an explanation? Yes. But to discover it, you have to think outside the BioLogos box.
    http://www.evolutionnews.org/2.....32961.html

    Beginning to Decipher the SINE Signal – Richard Sternberg – March 18, 2010
    Excerpt: So for a pure neutralist model to account for the graphs we have seen, ~300,000 random mutation events in the mouse have to match, somehow, the ~300,000 random mutation events in the rat.
    What are the odds of that?
    http://www.evolutionnews.org/2.....32981.html

    Another paper along that line,

    Recent comprehensive sequence analysis of the maize genome now permits detailed discovery and description of all transposable elements (TEs) in this complex nuclear environment. . . .
    The majority, perhaps all, of the investigated retroelement families exhibited non-random dispersal across the maize genome, with LINEs, SINEs, and many low-copy-number LTR retrotransposons exhibiting a bias for accumulation in gene-rich regions.
    http://journals.plos.org/plosg.....en.1000732

    and another paper

    PLOS Paper Admits To Nonrandom Mutation In Evolution – May 31, 2019
    Abstract: “Mutations drive evolution and were assumed to occur by chance: constantly, gradually, roughly uniformly in genomes, and without regard to environmental inputs, but this view is being revised by discoveries of molecular mechanisms of mutation in bacteria, now translated across the tree of life. These mechanisms reveal a picture of highly regulated mutagenesis, up-regulated temporally by stress responses and activated when cells/organisms are maladapted to their environments—when stressed—potentially accelerating adaptation. Mutation is also nonrandom in genomic space, with multiple simultaneous mutations falling in local clusters, which may allow concerted evolution—the multiple changes needed to adapt protein functions and protein machines encoded by linked genes. Molecular mechanisms of stress-inducible mutation change ideas about evolution and suggest different ways to model and address cancer development, infectious disease, and evolution generally.” (open access) – Fitzgerald DM, Rosenberg SM (2019) What is mutation? A chapter in the series: How microbes “jeopardize” the modern synthesis. PLoS Genet 15(4): e1007995.
    https://uncommondescent.com/evolution/plos-paper-admits-to-nonrandom-mutation-in-evolution/

    And as Jonathan Wells noted, “I now know as an embryologist,,,Tissues and cells, as they differentiate, modify their DNA to suit their needs. It’s the organism controlling the DNA, not the DNA controlling the organism.”

    Ask an Embryologist: Genomic Mosaicism – Jonathan Wells – February 23, 2015
    Excerpt: humans have a “few thousand” different cell types. Here is my simple question: Does the DNA sequence in one cell type differ from the sequence in another cell type in the same person?,,,
    The simple answer is: We now know that there is considerable variation in DNA sequences among tissues, and even among cells in the same tissue. It’s called genomic mosaicism.
    In the early days of developmental genetics, some people thought that parts of the embryo became different from each other because they acquired different pieces of the DNA from the fertilized egg. That theory was abandoned,,,
    ,,,(then) “genomic equivalence” — the idea that all the cells of an organism (with a few exceptions, such as cells of the immune system) contain the same DNA — became the accepted view.
    I taught genomic equivalence for many years. A few years ago, however, everything changed. With the development of more sophisticated techniques and the sampling of more tissues and cells, it became clear that genetic mosaicism is common.
    I now know as an embryologist,,,Tissues and cells, as they differentiate, modify their DNA to suit their needs. It’s the organism controlling the DNA, not the DNA controlling the organism.
    http://www.evolutionnews.org/2.....93851.html

    And as ET pointed out, Gp’s presupposition also makes no sense theologically speaking

    Just think about it- a Designer went through all of the trouble to produce various living organisms and place them on a changing planet in a changing universe. But the Designer is then going to leave it mostly to chance how those organisms cope with the changes?
    It just makes more sense that organisms were intelligently designed with the ability to adapt and evolve, albeit with genetic entropy creeping in.

  70. 70
    ET says:

    Evolution by means of intelligent design is active design. Genetic changes don’t have to produce some perceived advantage in order to be directed. And if genetic entropy has interfered with the directed mutation function then that could also explain what you observe.

    And yes, ATP synthase was definitely intelligently designed. Why can’t it be that it was intelligently designed via some sort of real genetic algorithm?

  71. 71
    ET says:

    And those polar bears. The change in the structure of the fur didn’t happen by chance. So either the original population(s) of bears already had that variation, or the information required to produce it, with that information being teased out due to the environmental changes and built-in responses to environmental cues.

  72. 72

    Upright BiPed says:

    Another excellent post GP, thank you for writing it. Reading thru it now.

    Once again, where are your anti-ID critics?

  73. 73
    gpuccio says:

    Upright BiPed:

    Hi UB, nice to hear from you! 🙂

    “Once again, where are your anti-ID critics?”

    As usual, they seem to have other interests. 🙂

    Luckily, some friends are ready to be fiercely antagonistic! 🙂 Which is good, I suppose…

  74. 74
    gpuccio says:

    ET at #70:

    Evolution by means of intelligent design is active design.

    Yes, it is.

    Genetic changes don’t have to produce some perceived advantage in order to be directed.

    Of course. That’s exactly my point. See my post #43, this statement about my model (model b):

    “There is no need for functional intermediates in the fossil record or in the genomes. What happens in the lab does not leave traces. We do not need big intermediate populations to be expanded by positive NS, to gain new huge probabilistic resources (as in model a). We just need a few samples, a few intermediates, in a limited time and space. There is no reason to expect any relevant trace from that process.”

    Emphasis added.

    And if genetic entropy has interfered with the directed mutation function then that could also explain what you observe.

    In my model, it does. You see, for anything to explain the differences created in time by neutral variation (my point 1 at post #43, what I call “signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split”), you definitely need physical continuity between different organisms. Otherwise, nothing can be explained. IOWs, neutral signatures accumulate as differences as time goes on, provided there is physical continuity. Creation or design from scratch for each organism cannot explain that. This is the argument that BA seems not to understand.

    And yes, ATP synthase was definitely intelligently designed.

    Definitely.

    Why can’t it be that it was intelligently designed via some sort of real genetic algorithm?

    Because, of course, the algorithm would be by far more complex than the result. And where is that algorithm? There is absolutely no trace of it.

    It is no good to explain things with mere imagination. We need facts.

    Look, we are dealing with functional information here, not with some kind of pseudo-order that can be generated by some simple necessity laws coupled to random components. IOWs, this is not something that self-organization can even start to do.

    Of course, an algorithm could do it. If I had a super-computer already programmed with all possible knowledge about biochemistry, and the computing ability to anticipate top down how protein sequences will fold and what biochemical activity they will have, and with a definite plan to look for some outcome that can transform a proton gradient into ATP, possibly with at least a strong starting plan that it should be something like a water mill, then yes, maybe that super-computer could, in time, elaborate some relatively efficient project on that basis. Of course, that whole apparatus would be much more complex than what we want to obtain. After all, ATP synthase has only a few thousand bits of functional information. Here we are discussing probably many gigabytes for the algorithm.

    That’s the problem, in the end. Functional information can be generated in only two ways:

    a) Direct design by a conscious, intelligent, purposeful agent. Of course that agent may have to use previous data or knowledge, but the point is that its cognitive abilities and its ability to have purposes will create those shortcuts that no non-design system can generate.

    b) Indirect design through some designed system complex enough to include a good programming of how to obtain some results. As said, that can work, but it has severe limitations. The designed system is already very complex, and the further functional information that can be obtained is usually very limited and simple. Why? Because the system, not being open to a further intervention of consciousness and intelligence, can only do what it has been programmed to do. Nothing else. The purposes are only those purposes that have already been embedded at the beginning. Nothing else.

    The computations, all the apparently “intelligent” activities, are merely passive executions of intelligent programs already designed. They can do what they have been programmed to do, but nothing else.

    So, let’s say that I want to program a system that can find a good solution for ATP synthase. OK, I can do that (not me, of course, let’s say some very intelligent designer). But I must already be aware that I will need ATP synthase, or something like that. I must put that purpose in my system. And of course all the knowledge and power needed to do what I want it to do.

    Or, of course, I can just design ATP synthase and introduce that design in the system (which I have already designed myself some time ago) if and when it is needed.

    Which is more probably true?

    Again, facts and only facts must guide us.

    ATP synthase, in a form very similar to what we observe today, was already present billions of years ago, when reasonably only prokaryotes were living on our planet.

    Was a complex algorithm capable of that kind of knowledge and computation present on our planet before the appearance of ATP synthase? In what form? What facts do we have that support such an idea?

    The truth is very simple. For all that we can know and reasonably infer, at some time, very early after our planet became compatible with any form of life, ATP synthase appeared, very much similar to what it is today, in some bacteria-like form of life. There is nothing to suggest, or support, or even make credible or reasonable, the idea that any complex algorithm capable of computing the necessary information for it was present at that time. No such algorithm, or any trace of it, exists today. If we wanted to compute ATP synthase today, we would not have the faintest idea of how to do it.

    These are the simple facts. Then, anyone is free to believe as he likes. As for me, I stick to my model, and am very happy with it.

  75. 75
    gpuccio says:

    ET at #71:

    As far as I can understand, the divergence of polar bears is probably simple enough to be explained as adaptation under environmental constraints. This is not ATP synthase. Not at all.

    I don’t know the topic well, so mine is just an opinion. However, bears are part of the family Ursidae, so brown bears and polar bears are part of the same family. So, if we stick to Behe’s very reasonable idea that family is probably the level which still requires design, this is an inside-family divergence.

  76. 76
    bornagain77 says:

    Gp claims:

    neutral signatures accumulate as differences as time goes on, provided there is physical continuity. Creation or design from scratch for each organism cannot explain that. This is the argument that BA seems not to understand.

    To be clear, Gp is arguing for a very peculiar, even bizarre, form of UCD where God reuses stuff and does not create families de novo (which is where Behe now puts the edge of evolution). Hence my reference to Johnny Cash’s song “One Piece at a Time”.

    Earlier, Gp also claimed that he could think of no other possible explanation for the data. I pointed out that ‘directed’ mutations are another possible explanation. Gp then falsely claimed that there is no such thing as directed mutations. Specifically he claimed, “Most mutations that we observe, maybe all, are random.”

    Gp, whether he accepts it or not, is wrong in his claim that “maybe all mutations are random”. Thus, Gp’s “Johnny Cash” model is far weaker than he imagines it to be.

    JOHNNY CASH – ONE PIECE AT A TIME – CADILLAC VIDEO
    https://www.youtube.com/watch?v=Hb9F2DT8iEQ

  77. 77
    gpuccio says:

    Bornagain77:

    “I note that my model is Theologically modest in that I hold to traditional concepts of the omniscience of God and God creating ‘kinds’ that reproduce after themselves, whereas, humorously, your model is all over the place Theologically speaking.”

    “And as ET pointed out, Gp’s presupposition also makes no sense theologically speaking”

    I have ignored this kind of objection, but as you (and ET) insist, I will say just a few words.

    I believe that you are theologically committed in your discussions about science. This is not a big statement, I suppose, because it is rather obvious in all that you say. And it is not a criticism, believe me. It is your strong choice, and I appreciate people who make strong choices.

    But, of course, I don’t feel obliged to share those choices. You see, I too make my strong choices, and I like to remain loyal to them.

    One of my strong choices is that my philosophy of science (and my theology, too) tell me that my scientific reasonings must not (as far as it is humanly possible) be influenced by my theology. In any way. So, I really strive to achieve that (and it’s not easy).

    This is, for me, an important question of principle. So, I will not answer any argument that makes any reference to theology, or even simply to God, in a scientific discussion. Never.

    So, excuse me if I will go on ignoring that kind of remarks from you or others. It’s not out of discourtesy. It’s to remain loyal to my principles.

  78. 78
    gpuccio says:

    Bornagain77 at #76:

    For “God reusing stuff”, see my previous post.

    For the rest, mutations and similar, see my next post (I need a little time to write it).

  79. 79
    EugeneS says:

    Upright Biped,

    An off-topic. You have mail as of a long time ago 🙂 I apologise for my long silence. I have changed jobs twice and have been quite under stress. Because of this I was not checking my non-business emails regularly. Hoping to get back to normal.

  80. 80
    gpuccio says:

    EugeneS:

    Hi, Eugene,

    Welcome anyway to the discussion, even for an off-topic! 🙂

  81. 81
    bornagain77 says:

    Basically I believe one of Gp’s main flaws in his model is that he believes that the genome is basically static and most all the changes to the genome that do occur are the result of randomness (save for when God intervenes at the family level to introduce ”some’ new information whilst saving parts of the genome that have accumulated changes due to randomness).

    Yet the genome is now known to be dynamic and not to be basically static.

    Neurons constantly rewrite their DNA – Apr. 27, 2015
    Excerpt: They (neurons) use minor “DNA surgeries” to toggle their activity levels all day, every day.,,,
    “We used to think that once a cell reaches full maturation, its DNA is totally stable, including the molecular tags attached to it to control its genes and maintain the cell’s identity,” says Hongjun Song, Ph.D.,, “This research shows that some cells actually alter their DNA all the time, just to perform everyday functions.”,,,
    ,,, recent studies had turned up evidence that mammals’ brains exhibit highly dynamic DNA modification activity—more than in any other area of the body,,,
    http://medicalxpress.com/news/.....e-dna.html

    A Key Evidence for Evolution Involving Mobile Genetic Elements Continues to Crumble – Cornelius Hunter – July 13, 2014
    Excerpt: The biological roles of these place-jumping, repetitive elements are mysterious.
    They are largely viewed (by Darwinists) as “genomic parasites,” but in this study, researchers found the mobile DNA can provide genetic novelties recruited as certain population-unique, functional enrichments that are nonrandom and purposeful.
    “The first shocker was the sheer volume of genetic variation due to the dynamics of mobile elements, including coding and regulatory genomic regions, and the second was amount of population-specific insertions of transposable DNA elements,” Michalak said. “Roughly 50 percent of the insertions were population unique.”
    http://darwins-god.blogspot.co.....lving.html

    Contrary to expectations, genes are constantly rearranged by cells – July 7, 2017
    Excerpt: Contrary to expectations, this latest study reveals that each gene doesn’t have an ideal location in the cell nucleus. Instead, genes are always on the move. Published in the journal Nature, researchers examined the organisation of genes in stem cells from mice. They revealed that these cells continually remix their genes, changing their positions as they progress though different stages.
    https://uncommondescent.com/intelligent-design/researchers-contrary-to-expectations-genes-are-constantly-rearranged-by-cells/

    And again, DNA is now, contrary to what is termed to be ‘the central dogma’, far more passive than it was originally thought to be. As Denis Noble stated, “The genome is an ‘organ of the cell’, not its dictator”

    “The genome is an ‘organ of the cell’, not its dictator”
    – Denis Noble – President of the International Union of Physiological Sciences

    Another main flaw in Gp’s ‘Johnny Cash model’, and as has been pointed out already, is that he assumes ‘randomness’ to be a defining notion for changes to the genome. This is the same assumption that Darwinists make. In fact, Darwinists, on top of that, also falsely assume ‘random thermodynamic jostling’ to be a defining attribute of the actions within a cell.

    Yet, advances in quantum biology have now overturned that foundational assumption of Darwinists. The first part of the following video recalls an incident where ‘Harvard Biovisions’ tried to invoke ‘random thermodynamic jostling’ within the cell to undermine the design inference. (i.e., the actions of the cell, due to advances in quantum biology, are now known to be far more resistant to ‘random background noise’ than Darwinists had originally presupposed.)

    Darwinian Materialism vs. Quantum Biology – Part II – video
    https://www.youtube.com/watch?v=oSig2CsjKbg

    Of supplemental note:

    How Quantum Mechanics and Consciousness Correlate – video (how quantum information theory and molecular biology correlate – 27 minute mark)
    https://youtu.be/4f0hL3Nrdas?t=1634

  82. 82
    bornagain77 says:

    Gp in 77 tried to imply he was completely theologically neutral. That is impossible. Besides science itself being impossible without basic Theological presuppositions (about the rational intelligibility of the universe and of our minds to comprehend it), any discussion of origins necessarily entails Theological overtones. It simply can’t be avoided. Gp is trying to play politics instead of being honest. Perhaps next GP will try to claim that he is completely neutral in regards to breathing air. 🙂

  83. 83
    ET says:

    gpuccio:

    Because, of course, the algorithm would be by far more complex than the result. And where is that algorithm? there is absolutely no trace of it.

    Yes, the algorithm would be more complex than the structure. So what? Where is the algorithm? With the Intelligent Designer. A trace of it is in the structure itself.

    The algorithm attempts to answer the question of how ATP synthase was intelligently designed. Of course an omnipotent intelligent designer wouldn’t require that and could just design one from its mind.

  84. 84
    gpuccio says:

    Bornagain77 at #69 and #76 (and to all):

    OK, so some people apparently disagree with me. I will try to survive.

    But I would insist on the “apparently”, because again, IMO, you make some confusion in your quotes and their interpretation.

    Let’s see. At #69, you make 6 quotes (excluding the internal reference to ET):

    1. Shapiro.

    I don’t think I can comment on this one. The quote is too short, and I do not have the book to check the context. However, the reference to “genome change operator” is not very clear. Moreover, the reference to “statistically significant non-random patterns” could simply point to some necessity effect that modifies the probability distribution, as in the case of loaded dice. As explained, that does not make the system “non-random”. And that has nothing to do with guidance, design or creation.

    2. Noble.
    That “genetic change is far from random and often not gradual” is obvious. It is not random because it is designed, and it is well known that it is not gradual. I perfectly agree. That has nothing to do with random mutations, because design is of course not implemented by random mutations. This is simply a criticism of model a.

    Another point is that some epigenetic modification can be inherited. Again, I have nothing against that. But of course I don’t believe that such a mechanism can create complex functional information and body plans. Neither do you, I believe. You say you believe in the “creation of kinds”.

    3. and 4. Sternberg and the PLOS paper.

    These are about transposons. I will address this topic specifically at the end of this post.

    5. The other PLOS paper.

    Here is the abstract:

    Abstract
    Mutations drive evolution and were assumed to occur by chance: constantly, gradually, roughly uniformly in genomes, and without regard to environmental inputs, but this view is being revised by discoveries of molecular mechanisms of mutation in bacteria, now translated across the tree of life. These mechanisms reveal a picture of highly regulated mutagenesis, up-regulated temporally by stress responses and activated when cells/organisms are maladapted to their environments—when stressed—potentially accelerating adaptation. Mutation is also nonrandom in genomic space, with multiple simultaneous mutations falling in local clusters, which may allow concerted evolution—the multiple changes needed to adapt protein functions and protein machines encoded by linked genes. Molecular mechanisms of stress-inducible mutation change ideas about evolution and suggest different ways to model and address cancer development, infectious disease, and evolution generally.

    This is simple. The paper, again, uses the terms “random” and “not random” incorrectly. It is obvious in the first phrase. The authors complain that mutations do not occur “roughly uniformly” in the genome, and that this would make them not random. But, as explained, the uniform distribution is only one of the many probability distributions that describe natural phenomena well. For example, many natural systems are well described, as is well known, by a normal distribution, which has nothing to do with a uniform distribution. That does not mean that they are not random systems.
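    The loaded-dice analogy can be made concrete with a minimal Python simulation (a sketch; the weights below are invented purely for illustration). A biased distribution is still a random process, even though its frequencies show a statistically significant non-uniform pattern:

    ```python
    import random

    # A loaded die: outcomes are biased (non-uniform), yet each roll is still random.
    faces = [1, 2, 3, 4, 5, 6]
    weights = [0.05, 0.05, 0.10, 0.10, 0.20, 0.50]  # hypothetical bias toward 6

    random.seed(42)
    rolls = random.choices(faces, weights=weights, k=100_000)

    # The empirical frequencies track the biased weights, not a uniform 1/6 each;
    # a "non-random pattern" emerges even though every single roll is random.
    for face in faces:
        freq = rolls.count(face) / len(rolls)
        print(face, round(freq, 3))
    ```

    The point of the sketch: detecting a non-uniform pattern tells you the probability distribution is not uniform, not that the underlying process is guided.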

    The criticism of gradualism I have already discussed: I obviously agree, but the only reason for non-gradual variation is design. Indeed, neutral mutations are instead gradual, because they are not designed.

    And what’s the problem with “environmental inputs”? We know very well that environmental inputs change the rate, and often the type, of mutation. Radiation, for example, does that. We have known that for decades. That is no reason to say that mutations are not random. They are random, and environmental inputs do modify the probability distribution. A lot. Are these authors really discovering, in 2019, that a lot of leukemias were caused by the bomb in Hiroshima?

    6. Wells.

    He is discussing the interesting concept of somatic genomic variation.

    Here is the abstract of the paper to which he refers:

    Genetic variation between individuals has been extensively investigated, but differences between tissues within individuals are far less understood. It is commonly assumed that all healthy cells that arise from the same zygote possess the same genomic content, with a few known exceptions in the immune system and germ line. However, a growing body of evidence shows that genomic variation exists between differentiated tissues. We investigated the scope of somatic genomic variation between tissues within humans. Analysis of copy number variation by high-resolution array-comparative genomic hybridization in diverse tissues from six unrelated subjects reveals a significant number of intra-individual genomic changes between tissues. Many (79%) of these events affect genes. Our results have important consequences for understanding normal genetic and phenotypic variation within individuals, and they have significant implications for both the etiology of genetic diseases such as cancer and for immortalized cell lines that might be used in research and therapeutics.

    As you can see (if you can read that abstract impartially), the paper does not mention in any way anything that supports Wells’ final (and rather gratuitous) statement:

    “From what I now know as an embryologist I would say that the truth is the opposite: Tissues and cells, as they differentiate, modify their DNA to suit their needs. It’s the organism controlling the DNA, not the DNA controlling the organism.”

    Indeed, the paper says the opposite: that somatic genomic variations are important to better understand “the etiology of genetic diseases such as cancer”. Why? The reason is simple: because they are random mutations, often deleterious.

    Ah, and by the way: of course somatic mutations cannot be inherited, and therefore have no role in building the functional information in organisms.

    So, as you can see (but will not see) you are making a lot of confusion with your quotations.

    The only interesting topic is transposons. But it’s late, so I will discuss that topic later, in the next post.

  85. 85
    gpuccio says:

    Bornagain77 at #82:

    Gp in 77 tried to imply he was completely theologically neutral. That is impossible.

    Emphasis mine.

    That’s unfair and not true.

    I quote myself at #77:

    “One of my strong choices is that my philosophy of science (and my theology, too) tell me that my scientific reasonings must not (as far as it is humanly possible) be influenced by my theology. In any way. So, I really strive to achieve that (and it’s not easy).”

    No comments.

    You see, the difference between your position and my position is that you are very happy to derive your scientific ideas from your theology. I try as much as possible not to do that.

    As said, both are strong choices. And I respect choices. But that’s probably one of the reasons why we cannot really communicate constructively about scientific things.

  86. 86
    gpuccio says:

    ET at #83:

    “Yes, the algorithm would be more complex than the structure. ”

    OK.

    “So what? Where is the algorithm? With the Intelligent Designer. ”

    ??? What do you mean? I really don’t understand.

    “A trace of it is in the structure itself.”

    The structure allows us to infer design. I don’t see what in the structure points to some specific algorithm. Can you help?

    “The algorithm attempts to answer the question of how ATP synthase was intelligently designed. ”

    OK, I am not saying that the designer did not use any algorithm. Maybe the designer is there in his lab, and has a lot of computers working for him in the process. But:

    a) He probably designed the computers too

    b) His conscious cognition is absolutely necessary to reach the results. Computers do the computations, but it’s consciousness that defines purposes, and finds strategies.

    In any case, design happens when the functional information is inputted into the material object we observe. So, if the designer inputs information after having computed it in his lab, that is not really relevant.

    I thought that your mention of an algorithm meant something different. I thought you meant that the designer designs an algorithm and puts it in some existing organism (or place), and that such an algorithm then computes ATP synthase or whatever else. So, if that is your idea, again I ask: what facts support the existence of such an independent physical algorithm in physical reality?

    The answer is simple enough: none at all.

    ” Of course an omnipotent intelligent designer wouldn’t require that and could just design one from its mind.”

    I have no idea if the biological designer is omnipotent, or if he designs things from his mind alone, or if he uses computers or watches or anything else in the process. I only know that he designs biological things, and must be conscious, intelligent and purposeful.

  87. 87
    bornagain77 says:

    Gp at 77 and 85 disingenuously claims that he is the one being ‘scientific’ while trying, as best he can, to keep God out of his science. Hogwash! His model specifically makes claims as to what he believes the designer, i.e. God, is and is not doing, i.e. Johnny Cash’s ‘One Piece at a Time’.

    Perhaps Gp falsely believes that if he compromises his theology enough that he is somehow being more scientific than I am? Again Hogwash. As I have pointed out many times, assuming Methodological Naturalism as a starting assumption, (as Gp seems bent on doing in his model as far as he can do it without invoking God), results in the catastrophic epistemological failure of science itself. (See bottom of post for refutation of methodological naturalism)

    Bottom line, Gp, instead of being more scientific than I, as he is falsely trying to imply (much like Darwinists constantly try to falsely imply), has instead produced a compromised, bizarre, and convoluted model. A model that IMHO does not stand up to even minimal scrutiny. And a model that no self-respecting Theist or even Darwinist would ever accept as being true. A model that, as far as I can tell, apparently only Gp himself accepts as being undeniably true.

    As I have pointed out several times now, assuming Naturalism instead of Theism as the worldview on which all of science is based leads to the catastrophic epistemological failure of science itself.

    Basically, because of reductive materialism (and/or methodological naturalism), the atheistic materialist is forced to claim that he is merely a ‘neuronal illusion’ (Coyne, Dennett, etc..), who has the illusion of free will (Harris), who has unreliable beliefs about reality (Plantinga), who has illusory perceptions of reality (Hoffman), who, since he has no real time empirical evidence substantiating his grandiose claims, must make up illusory “just so stories” with the illusory, and impotent, ‘designer substitute’ of natural selection (Behe, Gould, Sternberg), so as to ‘explain away’ the appearance (i.e. illusion) of design (Crick, Dawkins), and who must make up illusory meanings and purposes for his life since the reality of the nihilism inherent in his atheistic worldview is too much for him to bear (Weikart), and who must also hold morality to be subjective and illusory since he has rejected God (Craig, Kreeft).
    Bottom line, nothing is real in the atheist’s worldview, least of all, morality, meaning and purposes for life.,,,
    – Darwin’s Theory vs Falsification – video – 39:45 minute mark
    https://youtu.be/8rzw0JkuKuQ?t=2387

    Thus, although the Darwinist may firmly believe he is on the terra firma of science (in his appeal, even demand, for methodological naturalism), the fact of the matter is that, when examining the details of his materialistic/naturalistic worldview, it is found that Darwinists/Atheists are adrift in an ocean of fantasy and imagination with no discernible anchor for reality to grab on to.

    It would be hard to fathom a worldview more antagonistic to modern science than Atheistic materialism and/or methodological naturalism have turned out to be.

    2 Corinthians 10:5
    Casting down imaginations, and every high thing that exalteth itself against the knowledge of God, and bringing into captivity every thought to the obedience of Christ;

  88. 88
    bornagain77 says:

    Gp has, in a couple of instances now, tried to imply that I (and others) do not understand randomness. In regard to Shapiro, Gp states:

    Moreover, the reference to “statistically significant non-random patterns” could simply point to some necessity effect that modifies the probability distribution, like in the case of the loaded dice. As explained, that does not make the system “non-random”. And that has nothing to do with guidance, design or creation.

    Might I suggest that it is Gp himself who does not understand randomness. As far as I can tell, Gp presupposes complete randomness within his model, (completely free from ‘loaded dice’), and that is one of the main reasons that he states that he can think of no “other possible explanation” to explain the sequence data. Yet, if ‘loaded dice’ are producing “statistically significant non-random patterns” within genomes then that, of course, falsifies Gp’s assumption of complete randomness in his model. Like I stated before, ‘directed’ mutations, (and/or ‘loaded dice’ to use Gp’s term), are ‘another possible explanation’ that I can think of.

  89. 89
    gpuccio says:

    Bornagain77:

    OK, I think I will leave it at that with you. Even if you don’t.

  90. 90
    gpuccio says:

    To all:

    Of course, I will make the clarifications about transposons as soon as possible.

  91. 91
    john_a_designer says:

    Once again (along with others) thank you for a very interesting and evocative OP. On the other hand, as a mild criticism, I am just an uneducated layman when it comes to bio-chemistry, so I am continuously trying to get up to speed on the topic. I think I get the gist of what you are saying, but I imagine someone stumbling onto this site for the first time is going to find this topic way over their head. Maybe something of a basic summary which briefly explains transcription, the role of RNA polymerase and the difference between prokaryotic and eukaryotic transcription would be helpful (or a link to such a summary if you’ve done that somewhere else.)

    As for myself I think I get the gist of what you are saying but I am a little confused by differences between prokaryotic and eukaryotic transcription. (Most of my study and research has been centered on the prokaryote. If you can’t explain the natural selection + random variation evolution in prokaryotes it’s game over for Neo-Darwinism. There has to be another explanation.) For example, one question I have is, are there transcription factors for prokaryotes? According to Google, no.

    Eukaryotes have three types of RNA polymerases, I, II, and III, and prokaryotes only have one type. Eukaryotes form an initiation complex with the various transcription factors, which dissociate after initiation is completed. There is no such structure seen in prokaryotes.

    Is that true? What about the Sigma factor which initiates transcription in prokaryotes and the Rho factor which terminates it? Isn’t that essentially what transcription factors which come in two forms, promoters and repressors, do in eukaryotic transcription? Are Sigma factors and Rho factors the same in all prokaryotes or is there a species difference?

    As far as termination in eukaryotes goes, one educational video I ran across recently (it’s dated 2013) said that it is still unclear how termination occurs in eukaryotes. Is that true? In prokaryotes there are two ways transcription is terminated: there is Rho-dependent, where the Rho factor is utilized, and Rho-independent, where it isn’t. Do we know any more six years later?

    Hopefully answering those kinds of questions can help me and others. (Of course, they’re going to have to do some homework on their own.)

  92. 92
    bill cole says:

    Hi gpuccio
    Thanks for the interesting post. From my study, cell control comes from the availability of transcription-acting molecules in the nucleus. They can be either proteins or small molecules that are not transcribed but obtained from other sources, like enzyme chains. Testosterone and estrogen are examples of non-transcribed small molecules. How this is all coordinated so that a living organism can reliably operate is fascinating, and I am thrilled to see you start this discussion. Great to have you back 🙂

  93. 93
    gpuccio says:

    John_a_designer:

    Thank you for your very thoughtful comment.

    Yes, in this OP and in others I have dealt mainly with eukaryotes. But of course you are right, prokaryotes are equally fascinating, maybe only a little bit simpler, and, as you say:

    “If you can’t explain the natural selection + random variation evolution in prokaryotes it’s game over for Neo-Darwinism. There has to be another explanation”.

    And game over it is, because the functional complexity in prokaryotes is already overwhelming, and can never be explained by RV + NS.

    It is no accident that the example I use probably most frequently is ATP synthase. And that is a bacterial protein.

    You describe the transcription system in prokaryotes very correctly. It’s certainly much simpler than in eukaryotes, but still its complexity is mind-boggling.

    I think the system of TFs is essentially eukaryotic, but of course a strict regulation is present in prokaryotes too. You mention sigma factors and rho, of course, and there is the system of activators and repressors. But there are big differences, starting from the very different organization of the bacterial chromosome (histone-independent supercoiling, and so on).

    Sigma factors are in some way the equivalent of generic TFs. According to Wikipedia, sigma factor “is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB”.

    Maybe. I have blasted sigma 70 from E. coli with human TFIIB and found no detectable homology (E value 1.4). So, there seems to be little conservation here.

    I have blasted the same E. coli sigma 70 against all bacteria, excluding proteobacteria (the phylum of E. coli). I would say that there is good conservation in different types of bacteria, such as up to 1251 bits in firmicutes, 786 bits in actinobacteria, 533 bits in cyanobacteria, and so on. So, this molecule seems to be rather conserved in bacteria.
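    Real BLAST bit scores and E-values come from scored, gapped alignments against a database, but the underlying idea of conservation can be sketched with a toy percent-identity function (the peptide strings below are invented purely for illustration, not real sigma-factor sequences):

    ```python
    # Toy illustration of sequence conservation (not BLAST itself): percent
    # identity over an ungapped alignment of two equal-length sequences.
    def percent_identity(a: str, b: str) -> float:
        """Fraction of positions at which two pre-aligned sequences match."""
        if len(a) != len(b):
            raise ValueError("sequences must be aligned to equal length")
        matches = sum(1 for x, y in zip(a, b) if x == y)
        return matches / len(a)

    # Hypothetical aligned fragments: a 'conserved' pair and a 'diverged' pair.
    conserved = percent_identity("MEQNPQSQLK", "MEQNPQSALK")  # 9 of 10 identical
    diverged  = percent_identity("MEQNPQSQLK", "ARTWVGHCDE")  # 0 of 10 identical
    print(conserved, diverged)  # 0.9 0.0
    ```

    High identity (or, in BLAST terms, a high bit score and a tiny E-value) is what justifies calling a molecule "conserved" across taxa; an E-value around 1.4, as in the sigma 70 vs. TFIIB comparison above, means the match is no better than chance.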

    I think that eukaryogenesis is one of the most astounding designed jumps in natural history. I do accept that mitochondria and plastids are derived from bacteria, and that some important eukaryotic features are mainly derived from archaea, but even those partial derivations require tons of designed adjustments. And that is only the tip of the iceberg. Most eukaryotic features (the nuclear membrane and nuclear pore, chromatin organization, the system of TFs, the spliceosome, the ubiquitin system, and so on) are essentially eukaryotic, even if of course some vague precursor can be detected, in many cases, in prokaryotes. And each of these systems is a marvel of original design.

  94. 94
    gpuccio says:

    Bill Cole:

    Great to hear from you! 🙂

    And let’s not forget lncRNAs (see comments #52, #67 and #68 here).

  95. 95
    Silver Asiatic says:

    GP

    design happens when the functional information is inputted into the material object we observe

    I don’t quite follow that. We create software that evaluates data and then produces functional information (visualizations). So, the design of that software happened when the visualization occurred? I think we normally say that the design occurred first in the mind of the software designer, as an idea (Mozart wrote symphonies entirely in his mind before putting them on paper). Then, the designer creates algorithms that produce functional information. But the software is not the designer. It is the output of a designed process.

    what facts support the existence of such an independent physical algorithm in physical reality?

    Again, with Mozart. The orchestra plays the symphony. Does this mean that the symphony could only be created as an independent physical text in physical reality? The facts say no – he had it in his mind.

    I believe you are saying that a Designer enters into the world at various specific points of time, and intervenes in the life of organisms and creates mutations or functions at those moments. What facts support the existence of those interventions in time, versus the idea that the organism was designed with the capability and plan for various changes from the beginning of the universe? What evidence do we have of a designer directly intervening into biology?

    I only know that [the designer] designs biological things, and must be conscious, intelligent and purposeful.

    Well, I think we could try to infer more than that – or not? Is the designer a biological organism? Or did the designer exist before life on earth existed? Is the designer a physical entity? What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth? How complex is the designer? More complex than the algorithms you mentioned? Does the designer move from one cell present in Southern California, USA and then travel to intervene in another cell in Tanzania? Or does the designer do such interventions simultaneously? In either answer, are there some facts that show what the designer does in these cases? If simultaneously, how big is the designer and what mechanisms are used to intervene simultaneously into billions of cells at various points of time? Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned. Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?

  96. 96
    gpuccio says:

    To all:

    OK, now let’s talk briefly of transposons.

    It’s really strange that transposons have been mentioned here as a refutation of my ideas. But life is strange, as we all know.

    The simple fact is: I have been arguing here for years that transposons are probably the most important tool of intelligent design in biology. I remember that an interlocutor, some time ago, even accused me of inventing the “God of transposons”.

    The simple fact is: there are many facts that do suggest that transposon activity is responsible for generating new functional genes, new functional proteins. And I think that the best interpretation is that transposon activity can be intelligently directed, in some cases.

    IOWs, if biological design is, at least in part, implemented by guided mutations, those guided mutations are probably the result of guided transposon activity. We have no certainty of that, but it is a very reasonable scenario, according to known facts.

    OK, but let’s put that into perspective, especially in relation to the confused and confounding statements that have been made or reported here about “random mutations”.

    I will refer to the following interesting article:

    The Impact of Transposable Elements in Genome Evolution and Genetic Instability and Their Implications in Various Diseases

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4196381/

    So, the first question that we need to answer is:

    a) How frequent are transposon-dependent mutations in relation to all other mutations?

    There is an answer to that in the paper:

    Recent studies have revealed the implications of TEs in genomic instability and human genome evolution [44]. Mutations associated with TE insertions are well studied, and approximately 0.3% of all mutations are caused by retrotransposon insertions [27].

    0.3% of all mutations. So, let’s admit for a moment that transposon-derived mutations are not random, as has been suggested in this thread. That would still leave 99.7% of all mutations that could be random. Indeed, they are random.
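    The arithmetic behind that point can be sketched in a few lines (the total mutation count below is hypothetical, chosen only to make the proportions concrete):

    ```python
    # Back-of-the-envelope split for the ~0.3% figure cited from the paper.
    te_fraction = 0.003          # ~0.3% of mutations are retrotransposon insertions
    total_mutations = 1_000_000  # hypothetical sample of observed mutations

    te_derived = int(total_mutations * te_fraction)
    other = total_mutations - te_derived

    # Even granting that every TE insertion were guided, the vast majority
    # of mutations would remain outside the transposon category entirely.
    print(te_derived)  # 3000
    print(other)       # 997000
    ```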

    But let’s go on. I have already stated that I believe that transposons are an important tool of design. Therefore, at least some of transposon activity must be intelligently guided.

    But does that mean that all transposon activity is guided? Of course, absolutely not.

    I do believe that most transposon activity is random, and is not guided. Let’s read again from the paper:

    Such insertions can be deleterious by disrupting the regulatory sequences of a gene. When a TE inserts within an exon, it may change the ORF, such that it codes for an aberrant peptide, or it may even cause missense or nonsense mutations. On the other hand, if it is inserted into an intronic region, it may cause an alternative splicing event by introducing novel splice sites, disrupting the canonical splice site, or introducing a polyadenylation signal [8, 9, 10, 11, 42, 43]. In some instances, TE insertion into intronic regions can cause mRNA destabilization, thereby reducing gene expression [45]. Similarly, some studies have suggested that TE insertion into the 5′ or 3′ region of a gene may alter its expression [46, 47, 48]. Thus, such a change in gene expression may, in turn, change the equilibrium of regulatory networks and result in disease conditions (reviewed in Konkel and Batzer [43]).

    The currently active non-LTR transposons, L1, SVA, and Alu, are reported to be the causative factors of many genetic disorders, such as hemophilia, Apert syndrome, familial hypercholesterolemia, and colon and breast cancer (Table 1) [8, 10, 11, 27]. Among the reported TE-mediated genetic disorders, X-linked diseases are more abundant than autosomal diseases [11, 27, 45], most of which are caused by L1 insertions. However, the phenomenon behind L1 and X-linked genetic disorders has not yet been revealed. The breast cancer 2 (BRCA2) gene, associated with breast and ovarian cancers, has been reported to be disrupted by multiple non-LTR TE insertions [9, 18, 49]. There are some reports that the same location of a gene may undergo multiple insertions (e.g., Alu and L1 insertions in the adenomatous polyposis coli gene) (Table 1).

    And so on.

    Have we any reason to believe that that kind of transposon activity is guided? Not at all. It just behaves like all other random mutations, which are often the cause of genetic diseases.

    Moreover, we know that deleterious mutations are only a fraction of all mutations. Most mutations, indeed, are neutral or quasi-neutral. Therefore, it is absolutely reasonable that most transposon-induced mutations are neutral too.

    And the design?

    The important point, which can be connected to Abel’s important ideas, is that functional design happens when an intelligent agent acts to give a functional (and absolutely unlikely) form to a number of “configurable switches”.

    Now, the key idea here is that the switches must be configurable. IOWs, if they are not set by the designer, their individual configuration is in some measure indifferent, and the global configuration can therefore be described as random.

    The important point here is that functional sequences are more similar to random sequences than to ordered sequences. Ordered sequences cannot convey the functional information for complex function, because they are constrained by their order. Functional sequences, instead, are pseudo-random (not completely, of course: some order can be detected, as we know well). That relative freedom of variation is a very good foundation for using them in a designed way.
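    The contrast between ordered and (pseudo-)random sequences can be illustrated with a rough sketch (my addition, not part of gpuccio’s argument), using compressed length as a crude upper-bound proxy for Kolmogorov complexity: a periodic sequence compresses to almost nothing, while a random sequence of the same length over the same alphabet does not. The sequences and weights below are of course arbitrary toy examples.

    ```python
    import random
    import zlib

    random.seed(0)  # reproducible illustration

    n = 1000
    ordered = "ACGT" * (n // 4)                     # highly ordered, periodic sequence
    uniform = "".join(random.choices("ACGT", k=n))  # uniform random sequence
    # a "pseudo-random" sequence: random, but with a mild, detectable bias
    biased = "".join(random.choices("ACGT", weights=[4, 2, 2, 2], k=n))

    # Compressed length is a crude upper bound on Kolmogorov complexity.
    for name, seq in [("ordered", ordered), ("uniform", uniform), ("biased", biased)]:
        print(name, len(zlib.compress(seq.encode(), level=9)))
    ```

    The ordered sequence compresses to a few dozen bytes, while the random sequences stay close to their entropy limit: order constrains, and that is exactly why a fully ordered sequence cannot carry much functional information.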

    So, the idea is: transposon activity is probably random in most cases. In some cases, it is guided, probably through some quantum interface.

    That’s also the reason why a quantum interface is usually considered (by me too) as the best interface between mind and matter: because quantum phenomena are, at one level, probabilistic, random, and that’s exactly the reason why they can be used to implement free intelligent choices.

    To conclude, I will repeat, for the nth time, that a system is a random system when we cannot describe it deterministically, but we can provide a relatively efficient and useful description of it using a probability distribution.

    There is no such thing as “complete randomness”. If we use a probability distribution to describe a system, we are treating that system as a random system.

    Randomness is not an intrinsic property of events (except maybe at the quantum level). A random system, like the tossing of a coin, is completely deterministic in essence. But we are not able to describe it deterministically.

    In the same way, random systems that do not follow a uniform distribution are random just the same. A loaded die is as random as a fair die. But, if the loading is so extreme that only one event can take place, the system becomes a necessity system, which can very well be described deterministically.
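    A minimal simulation (my addition, in Python; the die weights are arbitrary) of the loaded-die point: a non-uniform distribution is still a random system described by a probability distribution, while an extreme loading collapses into necessity.

    ```python
    import random

    random.seed(42)  # reproducible simulation
    faces = [1, 2, 3, 4, 5, 6]

    def roll(weights, n=10_000):
        """Describe a die by a probability distribution and sample n throws."""
        return random.choices(faces, weights=weights, k=n)

    fair = roll([1, 1, 1, 1, 1, 1])       # uniform distribution: the usual "random" case
    loaded = roll([5, 1, 1, 1, 1, 1])     # non-uniform, but still a random system
    necessity = roll([1, 0, 0, 0, 0, 0])  # loading so extreme that only one event occurs

    print(fair.count(1) / len(fair))      # close to 1/6
    print(loaded.count(1) / len(loaded))  # close to 1/2: a detectable "bias"
    print(set(necessity))                 # only face 1: describable deterministically
    ```

    Neither die’s single throw can be anticipated, yet both are efficiently described by their distribution; only the degenerate case becomes a necessity system.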

    In the same way, there is nothing strange in the fact that some factors, acting as necessity causes, can modify a probability distribution. As a random system is in reality deterministic in essence, if one of the variables acting in it is strong enough to be detected, that variable will modify the probability distribution in a detectable way. There is nothing strange in that. The system is still random (we use a probability distribution to describe it), but we can detect one specific variable that modifies the probability distribution (what has been called here, not so precisely IMO, a bias). That’s the case, for example, of radiation increasing the rate and modifying the type of random mutations, as in the great increase of leukemia cases at Hiroshima after the bomb. That has always been well known, even if some people seem to discover it only now.

    In all those cases, we are still dealing with random systems: systems where each single event cannot be anticipated, but a probability distribution can rather efficiently describe the system. Mutations are a random system, except maybe for the rare cases of guided mutations in the course of biological design.

    Finally, let me say that, of all the things of which I have been accused, “assuming Methodological Naturalism as a starting assumption” is probably the funniest. Next time, they will probably accuse me of being a convinced compatibilist! 🙂

    Life is strange.

  97. 97
    gpuccio says:

    Silver Asiatic:

    “I don’t quite follow that. We create software that evaluates data and then produces functional information (visualizations). So, the design of that software happened when the visualization occurred? I think we normally say that the design occurred first in the mind of the software designer – it’s an idea (Mozart wrote symphonies entirely in his mind before putting on paper). Then, the designer creates algorithms that produce functional information. But the software is not the designer. It is the output of a designed process.”

    I perfectly agree. The designed object here is the software. The design happens when the designer writes the software, from his mind.

    I see your problem. Let’s be clear. The software never designs anything, because it is not conscious. Design, by definition, is the output of form from consciousness to a material object.

    But you seem to believe that the software creates new functional information. Well, it does in a measure, but it is not new complex functional information. This is a point that is often misunderstood.

    Let’s say that the software produces visualizations exactly as it is programmed to do. In that case, it is easy. All the functional information that we get has been designed when the software was designed.

    But maybe the software makes computations whose results were not previously known to the designer. That does not change anything. The computation process has been designed anyway. And computations are algorithmic: they do not increase the Kolmogorov complexity of the system. And that complexity is the functional complexity.

    Finally, maybe the software uses new information from the environment. In that case, there will be some increase in functional information, but it will be very low, if the environment does not contain complex functional information. IOWs, the environment cannot teach a system how to build ATP synthase, except when the sequence of ATP synthase (or, for that matter, of a Shakespeare sonnet in the case of language) is provided externally to the system.

    Now I must go. More in next post.

  98. 98
    Silver Asiatic says:

    GP
    Good answer, thank you.

    But maybe the software makes computations whose results were not previously known to the designer. That does not change anything. The computation process has been designed anyway. And computations are algorithmic: they do not increase the Kolmogorov complexity of the system. And that complexity is the functional complexity.

    Yes, but I think this answers your question about a Designer who created algorithms. Software can be programmed to create information that was not known to the designer. That information actually causes other things to happen. I would think that it meets the definition of complex, specified, functional information. We observe the software creating that information, and rightly infer that the information network (process) was designed. But do we, or can we, know that the designer was unaware of what the software produced?
    I don’t think so. We do not have access to the designer’s mind. We only see the software and what it produces. We know it is the product of design. But we do not know if the functional information was designed for any specific instance, or if it is the output of a previous design farther back, invisible to us.
    This, I think, is the case in biology.
    I believe you are saying that the design occurs at various discrete moments where a designer intervenes, and not that the design occurred at some distant time in the past and is merely being worked out by “software”. What we observe shows functional information, but this information may either be created directly by the designer at the moment, or it may be an output of a designed system.
    I do not see how we could distinguish between the two options.
    With software, we can observe the inputs and calculations and we can determine that the software created something “new”. It is all the output of design, but we can trace what the software is doing and therefore infer where the “design implementation” took place.
    It’s that term that is the issue here, really.
    It is “design implementation”. Where and when was the design (in the mind of the designer) put into biology?
    I do not believe that is a question that ID proposes an answer for, and I also do not believe it is a scientific question.

  99. 99
    bornagain77 says:

    Gp states,

    “That would still leave 99.7% of all mutations that could be random. Indeed, they are random.”

    LOL, just can’t accept the obvious can he? Bigger men than you have gone to their deaths defending their false theories Gp. 🙂

    “It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant non-random patterns”
    James Shapiro – Evolution: A View From The 21st Century – (Page 82)

    To presuppose that the intricate molecular machinery in the cell is just willy nilly moving stuff around on the genome is absurd on its face. And yet that is ultimately what Gp is trying to argue for.

    Of note: It is not on me to prove a system is completely deterministic in order to falsify Gp’s model. I only have to prove that it is not completely random in order to falsify his model. And that threshold has been met.

    Perhaps Gp would also now like to still defend the notion that most (+90%) of the genome is junk?

  100. 100
    gpuccio says:

    Silver Asiatic:

    It’s not really a question of knowing what is in the mind of the designer. The problem is: what is in material objects?

    Let’s go back to ATP synthase. Please, read my comment #74.

    So, I think we can agree that any algorithm that can compute the sequences for ATP synthase would be, by far, more complex than ATP synthase itself.

    So, let’s say, just for a moment, that the designer does not design ATP synthase directly. Let’s say that the designer designs the algorithm. After all, he is clever enough.

    So, he designs the algorithm. But, of course, he must implement it in a material object. A material object that can do the computations and then build the computed outcome (IOWs, ATP synthase).

    OK, so my simple question is: where is, or was, that object? The computing object?

    I am aware of nothing like that in the known universe.

    Maybe it existed 4 billion years ago, and now it is lost?

    Well, everything is possible, but what facts support such an idea?

    None at all. Have we traces of that algorithm, indications of how it worked? Have we any idea of the object where it was implemented? It seems reasonable that it was some biological object, probably an organism. So, what are we hypothesizing? That 4 billion years ago the designer designed and implemented some extremely complex organism capable of computing ATP synthase, only to compute ATP synthase for bacteria, and that such a complex organism then disappeared without leaving any trace of itself?

    What’s the sense of such a scenario? What scientific value does it have? The answer is simple: none.

    Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.

    And there is more: such a complex algorithm, made to compute ATP synthase, could not certainly compute another, completely different, protein system, like for example the spliceosome. Because that’s another function, another plan. A completely different computation would be needed, a different purpose, a different context.

    So, what do we believe? That the designer designed, later, another complex organism with another complex algorithm to compute and realize the spliceosome? And the immune system? And our brain?

    Or that, in the beginning, there was one organism so complex that it could compute the sequences of all future necessary proteins, protein systems, lncRNAs, and so on? A monster of which no trace has remained?

    OK, I hope that’s enough.

  101. 101
    gpuccio says:

    Silver Asiatic:

    You also say:

    “What evidence do we have of a designer directly intervening into biology?”

    That’s rather simple. The many examples, well known, of sudden appearance in natural history of new biological objects full of tons of new complex functional information, information that did not exist at all before.

    For example, I have analyzed quantitatively the transition to vertebrates, which happened more than 400 million years ago, in a time window of probably 20 million years, and which involved the appearance, for the first time in natural history, of about 1.7 million bits of new functional information. Information that, after that time, has been conserved up to now.

    This is the evidence of a design intervention, specifically localized in time.

    Of course, there is instead no evidence at all that the organisms that existed before included any complex algorithm capable of computing those 1.7 million bits of functional information.

    You say:

    “Well, I think we could try to infer more than that – or not? Is the designer a biological organism? Or did the designer exist before life on earth existed? Is the designer a physical entity? What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth? How complex is the designer? More complex than the algorithms you mentioned? Does the designer move from one cell present in Southern California, USA and then travel to intervene in another cell in Tanzania? Or does the designer do such interventions simultaneously? In either answer, are there some facts that show what the designer does in these cases? If simultaneously, how big is the designer and what mechanisms are used to intervene simultaneously into billions of cells at various points of time? Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned. Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?”

    These are good questions. To many of them, we cannot at present give answers. But not all.

    “Is the designer a biological organism? Is the designer a physical entity?”

    I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.

    “Did the designer exist before life on earth existed?”

    This is easy. A designer was certainly responsible for OOL on our planet. OOL is of course one of the events that scream design with the highest strength. So the answer is yes: the designer, or at least the designer who designed life on our planet, certainly existed before.

    “What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth?”

    Well, we humans, as conscious beings, are entities capable of intervening inside the functions of most cells in our brain or nervous system, and many in our bodies. That’s how our consciousness is interfaced to our body.

    Why shouldn’t some other conscious entity be able to do something similar with biological organisms? And again, there is no need for the interface to reach all cells of all organisms. The strict requirement is for those organisms where the design takes place.

    “How complex is the designer?”

    We don’t know. How complex is our consciousness, if separated from our body? We don’t know how complex non physical entities need to be. Maybe the designer is very simple. Or not.

    This answer is valid for many other questions: we don’t understand, at present, how consciousness can work outside of a physical body. Maybe we will understand more in the future.

    “Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned.”

    I don’t know when or how the designer decides things. But I know when he does things. For example, he introduced the functional information for vertebrates, all those 1.7 million bits, in some pre-existing organism (probably the first chordates), approximately in those 20 million years when vertebrates appear on earth.

    “Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?”

    Most likely he uses tools. Of course the designer’s consciousness needs to interface with matter, otherwise no design could be possible. That is exactly what we do when our consciousness interfaces with our brain. So, no big problem here.

    The interface is probably at quantum level, as it is probably in our brains. There are many events in cells that could be more easily tweaked at quantum level in a consciousness related way. Penrose believes that a strict relationship exists in our brain between consciousness and microtubules in neurons. Maybe.

    I think, as I have said many times, that the most likely tool of design that we can identify at present are transposons. The insertions of transposons, usually random (see my previous posts), could be easily tweaked at quantum level by some conscious intervention. And there is some good evidence that transposons are involved in the generation of new functional genes, even in primates.

    That’s the best I can do to answer your questions. Just a note: my answers here are highly tentative, but they are strictly empirical. They do not derive from any theological agenda. These are the ideas that, in my opinion, are more reasonable to explain the facts as we know them. Right or wrong that they may be, this is the spirit in which I express them.

  102. 102
    PeterA says:

    GP,
    The first graphic illustration shows the mechanism of NF-kB action, which you associated with the canonical activation pathway “summarized” in figure 1.
    Figure 1, without breaking it into more details, could qualify as a complex mechanism.
    Is it possible that such an explicit graphic illustration, which includes so many details, is a simplification of the real thing? Aren’t all the control procedures associated with this mechanism shown in the figure? Are any important details missing, or just irrelevant details?
    Well, you answered those questions when you elaborated on those details in the OP.
    In this particular example, we first see the “signals” shown in figure 1 under the OP section “The stimuli”.
    Thus, what in figure 1 appears as a few colored objects and arrows is described in more detail, showing the tremendous complexity of each step of the graphic, especially the receptors in the cell membrane.
    Can the same be said about every step within the figure?

  103. 103

    Upright BiPed says:

    Luckily, some friends are ready to be fiercely antagonistic!

    Yes, I see that.

    Illuminating thread otherwise.

  104. 104
    pw says:

    GP,

    Fascinating topic and interesting discussion, though sometimes unnecessarily personal. Scientific discussions should remain calm, focused on details, unbiased. At the end we want to understand more. Undoubtedly, biology today is not easy to understand well in all details, and it doesn’t look like it could get easier anytime soon.

    Someone asked:

    “What evidence do we have of a designer directly intervening into biology?”

    Could the answer include the following issues?

    OOL, prokaryotes, eukaryotes, and, according to Dr Behe (who said that at one point he would point to the class level, but now would focus on at least the family level), the cases where the Darwinian paradigm lacks explanatory power for the physiological differences between cats and dogs allegedly proceeding from a common ancestor.

    You have pointed to the intentional insertion of transposable elements into the genetic code as another piece of empirical evidence. I think you’ve also mentioned the splicing mechanisms. Perhaps any of the complex functional mechanisms that appeared at some point could be attributed to conscious intentional design?

    Does CD stand for common design or common descent with designed modifications?
    Does “common” relate to the observed similarities?

    For example, in the case of cats and dogs, “common” relates to their observed anatomical and/or physiological similarities, which were mostly designed too?

  105. 105
    gpuccio says:

    Upright BiPed:

    “Illuminating thread otherwise.”

    Thank you! 🙂

  106. 106
    gpuccio says:

    PeterA:

    “Is it possible that such an explicit graphic illustration, which includes so many details, is a simplification of the real thing?”

    Of course it is. A gross simplification. Many important details are missing.

    For example:

    Only two kinds of generic signals and receptors are shown. As we have seen, there are a lot of different specific receptors.

    The pathways that connect each specific type of receptor to IKK are not shown (they are shown as simple arrows). But they are very complex and specific. I have given some limited information in the OP and in the discussion.

    Only the canonical pathway is shown.

    Only the most common type of dimer is shown.

    Coactivators and interactions with other pathways are not shown or barely mentioned.

    Of course, lncRNAs are not shown.

    And so on.

    Of course, the figure is there just to give a first general idea of the system.

  107. 107
    gpuccio says:

    Pw:

    “Could the answer include the following issues?”

    Yes, of course.

    “You have pointed to the intentional insertion of transposable elements into the genetic code asanother empirical evidence. I think you’ve also mentioned the splicing mechanisms. Perhaps any of the complex functional mechanisms that appeared at some points could be attributed to conscious intentional design?”

    All of them, if they are functionally complex. That’s the theory. That’s ID. The procedure, if correctly applied, should have no false positives.

    “Does CD stand for common design or common descent with designed modifications?”

    CD stands just for “common descent”. I suppose that each person can add his personal connotations. Possibly making them explicit in the discussion.

    I have explained that for me common descent just means a physical continuity between organisms, but that all new complex functional information is certainly designed. Without exceptions.

    So, I suppose that “common descent with designed modifications” is a good way to put it.

    Just a note about the universality. Facts are very strong in supporting common descent (in the sense I have specified). It remains open, IMO, whether it is really universal: IOWs, whether all forms of life have some continuity with a single original event of OOL, or more than one event of OOL took place. I think that at present universality seems more likely, but I am not really sure. I think the question remains open. For example, some differences between bacteria and archaea are rather amazing.

    “Does “common” relate to the observed similarities ?”

    Common, in my version of CD, refers to the physical derivation (for existing information) from one common ancestor. So, let’s say that at some time there was in the ocean a common ancestor of vertebrates: maybe some form of chordate. And at some time, vertebrates are already split into cartilaginous fish and bony fish. If both cartilaginous fish and bony fish physically reuse the same old information from a common ancestor, that is common descent, even if, of course, all the new information is added by specific design.

    I really don’t understand how that could be explained without any form of physical descent. Do they really believe that cartilaginous fish were designed from scratch, from inanimate matter, and that bony fish, too, were designed from scratch, from inanimate matter, but separately? And that the supposed ancestor, the first chordates, were also designed from scratch? And the first eukaryotes? And so on?

  108. 108
    gpuccio says:

    PeterA:

    Maybe you can look at this more detailed figure for the different stimuli, receptors and receptor connections to the activation pathway:

    https://rockland-inc.com/nfkb-signaling-pathway.aspx

  109. 109
    ET says:

    gpuccio:

    Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.

    The Designer is never seen.

    The point of the algorithm was to address “how” the Intelligent Designer designed living organisms and their complex parts and systems. The way ATP synthase works, by squeezing the added “P” onto ADP and not by some chemical reaction, is a clue, for me anyway. It just seems like something an algorithm would tease out, and that comes from knowledge of many GAs that have created human inventions.

    That would still leave 99.7% of all mutations that could be random. Indeed, they are random.

    I would love to see how you made that determination, especially in the light of the following:

    He [the Designer] indeed seems to have “carefully crafted” information in His species giving them the ability to respond to environmental stimuli to alter their own genome to adapt to new environments. He then evidently let them wander where they will with the ability to adapt.- Dr. Lee Spetner “the Evolution Revolution” p 108

  110. 110
    Silver Asiatic says:

    Gpuccio
    Thank you for your detailed replies on some complex questions. You explained your thoughts very clearly and well.

    Of course, there is instead no evidence at all that the organisms that existed before included any complex algorithm capable of computing those 1.7 million bits of functional information.

    I think what science can show is that 1.7 million bits of FI appear. What is not shown is how they appeared there. Regarding a complex algorithm, the designing-mind itself had to have immense capabilities. Algorithms are programmatic functions which start in a mind. Organisms could have been programmed to trigger innovations over time.

    While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.

    Here is where it starts to get difficult. On the same basis that we say that there is no evidence of physical designers, we have to say there is no evidence of immaterial designers. Science cannot evaluate immaterial entities. So, our speculations here take us outside of science. I don’t think we can say that we have empirical evidence of immaterial entities or beings. The absence of evidence (in this case of physical designers) does not mean that we have direct evidence of immaterial designers.

    This is easy. A designer was certainly responsible for OOL on our planet. OOL is of course one of the events that scream design with the highest strength. So the answer is yes: the designer, or at least the designer who designed life on our planet, certainly existed before.

    That is good. We do not know if there is one or multiple designers, or if the designer of life is the same as the one who developed and changed life. But some designing intelligence existed before life on earth did. That designer would not be a terrestrial, biological entity.

    Well, we humans, as conscious beings, are entities capable of intervening inside the functions of most cells in our brain or nervous system, and many in our bodies. That’s how our consciousness is interfaced to our body.
    Why shouldn’t some other conscious entity be able to do something similar with biological organisms?

    I don’t think we have any direct, scientific experience with an immaterial, pre-biological conscious entity. Additionally, we do not see that human consciousness can create life, for example, or that it could mutate bacteria to create plants, birds, fish, animals and other humans. We don’t see that human consciousness can intervene and create other consciousnesses. We might say that the entire population of human beings has affected the earth – would this suggest that there is a huge population of designers affecting mutations?

    And again, there is no need for the interface to reach all cells of all organisms. The strict requirement is for those organisms where the design takes place.

    I’d think that the activity of mutations within organisms is such that continual monitoring would be required in order to achieve designed effects, but perhaps not. Even if it is only the cells where there were innovations, that seems to be quite a lot of intervention.

    We don’t know. How complex is our consciousness, if separated from our body? We don’t know how complex non physical entities need to be. Maybe the designer is very simple. Or not.

    I think this cuts against your concern about complex algorithms. The designer may be very complex. Algorithms created by the designer may be complex also. Additionally, I do not think that science has established that human consciousness is a non-physical entity, or that human consciousness can exist separated from a body.

    I don’t know when or how the designer decides things. But I know when he does things. For example, he introduced the functional information for vertebrates, all those 1.7 million bits, in some pre-existing organism (probably the first chordates), approximately in those 20 million years when vertebrates appeared on earth.

    The options I see for this introduction of information are:
    1. Direct creation of vertebrates
    2. Guided or tweaked mutations
    3. Pre-programmed innovations that were triggered by various criteria
    4. Mutation rates are not constant but can be accelerated at times
    5. We don’t know

  111. 111
    Silver Asiatic says:

    GP

    So, I think we can agree that any algorithm that can compute the sequences for ATP synthase would be, by far, more complex than ATP synthase itself.

    I think we have to say that we do not know. As previously, you stated that we do not know how complex the designer is. An algorithm is a method of calculation which would be resident in the mind of the designer. The level of complexity of that algorithm, for a designer capable of creating life on earth, does not seem to be a problem.

    So, he designs the algorithm. But, of course, he must implement it in a material object. A material object that can do the computations and then build the computed outcome (IOWs, ATP synthase).

    The algorithm could be computed by an immaterial entity. The designer, I think you’re saying, created immaterial consciousnesses (human) so could create immaterial algorithms that programmed life from the beginning. So, there would be one single Design act, and then everything after that is an output.

    OK, so my simple question is: where is, or was, that object? The computing object?
    I am aware of nothing like that in the known universe.

    If the computing agent is immaterial then you could have no scientific evidence of it.

    So, what are we hypothesizing, that 4 billion years ago the designer designed and implemented some extremely complex organism capable of computing ATP synthase, only to compute ATP synthase for bacteria, and that such a complex organism then disappeared without leaving any trace of itself?

    I think we are saying that science cannot know this. Additionally, you refer to “the designer” but there could be millions of designers. Again, science cannot make a statement on that.

    What’s the sense of such a scenario? What scientific value has it? The answer is simple: none.

    You propose an immaterial designer — is it subject to conditions of space and time? In any case, that proposal can have no scientific value. Science cannot directly investigate immaterial entities. Science can look at the effects of entities, but cannot evaluate the entities themselves.

    Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.

    I don’t think that conclusion is obvious. Why did the design have to occur when needed, and not before? And again, the algorithm could have been administered by an immaterial agent, which we could never observe scientifically. There’s no way for science to know this.

  112. 112
    gpuccio says:

    ET at #109:

    The Designer is never seen.

    Correct. But, as I have said, the designer need not be physical. I believe that consciousness can exist without being necessarily connected to a physical body. I have explained at #101 (to SilverAsiatic). I quote myself:

    “Is the designer a biological organism? Is the designer a physical entity?”

    I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.

    An algorithm, instead, needs to be physically instantiated. An algorithm is not a conscious agent. It works like a machine. It needs a physical “body” to exist and work.

    The way ATP synthase works, by squeezing the added “P” onto ADP and not by some chemical reaction, is a clue- for me, anyway. It just seems like something an algorithm would tease out- and that comes from knowledge of many GA’s that have created human inventions.

    ATP synthase squeezes the P using mechanical force from a proton gradient. It works like a water mill. Do you really believe that any generic algorithm would design such a thing, if the designer does not code the idea and a lot of details in the algorithm itself?

    Algorithms compute, and do nothing else. They are sophisticated abacuses, nothing more. The amazing things that they do are simply due to the specific configurations designed for them by conscious intelligent beings.

    Maybe the designer needed some algorithm to do the computations, if his computing ability is limited, like ours. Maybe not. But, if he used some algorithm, it seems not to have happened on this planet, or he accurately destroyed any trace of it. Don’t you think that these are just ad hoc reasonings?

    I would love to see how you made that determination, especially in the light of the following:

    I am not aware that what Spetner says is true by default. Again, I don’t know his thought in detail, and I don’t want to judge.

    But there are a lot of facts that tell us that most mutations are random, neutral or deleterious. I have mentioned the many human diseases caused by mutations that follow no specific pattern, both normal mutations and transposon associated. See comments #64 and #96.

    The always precious Behe has clearly shown that differentiation at low level (let’s say inside families) is just a matter of adaptation through loss of information, never a generation of new functional information. To be clear, the loss of information is random, due to deleterious mutations, and the adaptation is favoured by an occasional advantage gained in specific environments, therefore due to NS. This is the level where the neo-darwinian model works. But without generating any new functional information. Just by losing part of it. This is Behe’s model (see polar bears). And it is mine, too.

    For the rest, actual design is always needed.

  113. 113
    ET says:

    gpuccio:

    Do you really believe that any generic algorithm would design such a thing, if the designer does not code the idea and a lot of details in the algorithm itsef?

    I don’t see any issues with it. There is a Scientific American article from over a decade ago titled “Evolving Inventions”. One invention had a transistor in it that did not have its output connected to anything. The point is that the only details required are what is needed to get the job done, i.e. connecting a “P” to ADP.

    But there are a lot of facts that tell us that most mutations are random, neutral or deleterious. I have mentioned the many human diseases caused by mutations that follow no specific pattern, both normal mutations and transposon associated.

    And for every genetic disease there are probably thousands of changes that do not cause one.

  114. 114
    gpuccio says:

    Silver Asiatic at #110:

    I think what science can show is that 1.7 million bits of FI appear. What is not shown is how they appeared there.

    Well, when you have facts, science has to propose hypotheses to explain them. Neo-darwinism is one hypothesis, and it does not explain what it should explain. Design is another hypothesis. You can’t just say: it happened, and not try to explain it. That’s not science.

    Regarding a complex algorithm, the designing-mind itself had to have immense capabilities. Algorithms are programmatic functions which start in a mind. Organisms could have been programmed to trigger innovations over time.

    Everything is possible. But my points are:

    a) There is no trace of those algorithms. They are just figments of the imagination.

    b) There are severe limits to what an algorithm can do. An algorithm cannot find solutions to problems for which it has not been programmed to find solutions. An algorithm just computes. Only consciousness has cognitive representations, understanding and purpose.

    Regarding innovations, I am afraid they are limited to what Behe describes, plus maybe some limited cases of simple computational adaptation. Innovations exist, but they are always simple.

    Here is where it starts to get difficult. On the same basis that we say that there is no evidence of physical designers, we have to say there is no evidence of immaterial designers. Science cannot evaluate immaterial entities. So, our speculations here take us outside of science. I don’t think we can say that we have empirical evidence of immaterial entities or beings. The absence of evidence (in this case of physical designers) does not mean that we have direct evidence of immaterial designers.

    I strongly disagree. Here you are indeed assuming methodological naturalism, something that I consider truly bad philosophy of science (even if I have been recently accused of doing exactly that).

    Science can investigate anything that produces observable facts. In no way is it limited to “matter”. Indeed, many of the most important concepts in science have nothing to do with matter. And science does debate ideas and realities about which we still have no clear understanding, see dark matter and especially dark energy. Why? Because those things, whatever they may be, seem to have detectable effects, to generate facts.

    Moreover, consciousness is in itself a fact. It is subjectively perceived by each of us (you too, I suppose). Therefore it can and must be investigated by science, even if, at present, science has no clear theory about what consciousness is.

    Design is an effect of consciousness. There is no evidence that consciousness needs to be physical. Indeed, there is good evidence of the contrary, but I will not discuss it now.

    However, design, functional information and consciousness are certainly facts that need to be investigated by science. Even if the best explanation, maybe the only one, is the intervention of some non physical conscious agent.

    That is good. We do not know if there is one or multiple designers, or if the designer of life is the same as the one who developed and changed life. But some designing intelligence existed before life on earth did.

    Correct.

    That designer would not be a terrestrial, biological entity.

    Not physical, therefore not biological. Terrestrial? I don’t know. A non physical entity could well, in principle, be specially connected to our planet. Or not, of course. If we don’t know, we don’t know.

    I don’t think we have any direct, scientific experience with an immaterial, pre-biological conscious entity. Additionally, we do not see that human consciousness can create life, for example, or that it could mutate bacteria to create plants, birds, fish, animals and other humans. We don’t see that human consciousness can intervene and create other consciousnesses. We might say that the entire population of human beings has affected the earth – would this suggest that there is a huge population of designers affecting mutations?

    You seem to make some confusion about three different concepts: functional information, life and consciousness.

    ID is about the origin of functional information, in particular the functional information we observe in living organisms. It can say nothing about what life and consciousness are, least of all about how to generate those things.

    Functional information is a configuration of material objects to implement some function in the world we observe. Nothing else. Complex functional information originates only from conscious agents (we know that empirically), but it tells us nothing about what consciousness is or how it is generated. And life itself cannot easily be defined, and it is probably more than the information it needs to exist.

    As humans, we can design functional information. We can also design biological functional information, even rather complex. OK, we are not really very good. We cannot design anything like ATP synthase. But, in time, we can improve.

    Designers can design complex functional information. More or less complex, good or bad. But they can do it. But human designers, at present, cannot generate life. Indeed, we don’t even know what life is. That is even more true of consciousness.

    And again, I don’t think we can say how many designers have contributed to biological design. Period.

    Even if it is only cells where there were innovations that seems to be quite a lot of intervention.

    It is a lot of intervention. And so?

    I think this cuts against your concern about complex algorithms. The designer may be very complex. Algorithms created by the designer may be complex also.

    He could also be very simple.

    I do not think that science has established that human consciousness is a non-physical entity, or that human consciousness can exist separated from a body.

    Science has established practically nothing about the nature of consciousness. But there is time. Certainly, it has not established that consciousness derives from the physical body.

    The options I see for this introduction of information are:
    1. Direct creation of vertebrates
    2. Guided or tweaked mutations
    3. Pre-programmed innovations that were triggered by various criteria
    4. Mutation rates are not constant but can be accelerated at times
    5. We don’t know

    5 is true enough, but after that 2 is the only reasonable hypothesis. Intelligent selection can have a role too, of course, like in human protein engineering. But I think that transposons act as a form of guided mutation.

  115. 115
    bornagain77 says:

    Gp states, ” I think that at present universality seems more likely, but I am not really sure. I think the question remains open.”

    Thank you very much for at least admitting that degree of humility on your part.

  116. 116
    gpuccio says:

    Silver Asiatic at #111

    I think we have to say that we do not know. As previously, you stated that we do not know how complex the designer is. An algorithm is a method of calculation which would be resident in the mind of the designer. The level of complexity of that algorithm, for a designer capable of creating life on earth, does not seem to be a problem.

    I disagree. Algorithms, as I have already explained, are configurations of material objects. We were discussing algorithms on our planet, not imaginary algorithms in the mind of a conscious agent of whom we know almost nothing.

    My statement was about a real algorithm really implemented in material objects. To compute ATP synthase, that algorithm would certainly be much more complex than ATP synthase itself.

    But all these reasonings are silly. We have no example of algorithms in nature, even in the biological world, which do compute new complex functional objects. Must we still waste our time with fairy tales?

    The algorithm could be computed by an immaterial entity. The designer, I think you’re saying, created immaterial consciousnesses (human) so could create immaterial algorithms that programmed life from the beginning. So, there would be one single Design act, and then everything after that is an output.

    OK, I hope it’s clear that this is the theory I am criticizing. Certainly not mine.

    And I have never said, or discussed, that “The designer created immaterial consciousnesses (human)”. As said, ID can say nothing about the nature of consciousness. ID just says that functional information derives from consciousness. And the designer need not have “created” anything. Design is not creation.

    The designer designs biological information. Not human consciousness, or any other consciousness. Not “immaterial algorithms”. Design is the configuration of material objects, starting from conscious representations of the designer. As said so many times.

    If the computing agent is immaterial then you could have no scientific evidence of it.

    Not true, as said. Immaterial realities that cause observable facts can be inferred from those facts.

    Instead, a physical algorithm existing on our planet should leave some trace of its physical existence. This was my simple point.

    You propose an immaterial designer — it is subject to conditions of space and time? In any case, that proposal can have no scientific value. Science cannot directly investigate immaterial entities. Science can look at effects of entities, but cannot evaluate them.

    Not having a physical body does not necessarily mean that an entity is not subject to space and time. The interventions of the designer on matter are certainly subject to those things.

    About science, I have already answered. Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.

  117. 117
    gpuccio says:

    ET at #113:

    I don’t see any issues with it.

    Well, I do. Let’s say that we have different ideas about that.

    And for every genetic disease there are probably thousands of changes that do not cause one.

    Of course. And they are called neutral or quasi-neutral random mutations. When they are present in more than 1% of the whole population, they are called polymorphisms.
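    The 1% convention mentioned above is a simple frequency cutoff. A minimal sketch, assuming the hypothetical helper `classify_variant` and the illustrative carrier counts (neither is from the discussion itself):

    ```python
    # Hedged sketch: a neutral variant is conventionally called a
    # "polymorphism" when its frequency in the whole population exceeds 1%.
    POLYMORPHISM_THRESHOLD = 0.01  # 1% of the population

    def classify_variant(carriers: int, population: int) -> str:
        """Label a variant by its population frequency (illustrative only)."""
        frequency = carriers / population
        return "polymorphism" if frequency > POLYMORPHISM_THRESHOLD else "rare variant"

    print(classify_variant(carriers=5_000, population=100_000))  # 5% -> polymorphism
    print(classify_variant(carriers=50, population=100_000))     # 0.05% -> rare variant
    ```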

  118. 118
    gpuccio says:

    PeterA and all:

    An interesting example of complexity is the CBM signalosome. As said briefly in the OP, it is a protein complex made of three proteins:

    CARD11 (Q9BXL7): 1154 AAs in the human form. Also known as CARMA1.
    BCL10 (O95999): 233 AAs in the human form.
    MALT1 (Q9UDY8): 824 AAs in the human form.

    These three proteins have the central role in transferring the signal from the specific immune receptors in B cells (BCR) and T cells (TCR) to the NF-kB activation system (see Fig. 3 in the OP).

    IOWs, they signal the recognition of an antigen by the specific receptors on B or T cells, and start the adaptive immune response. A very big task.

    The interesting part is that those proteins practically appear in vertebrates, because the adaptive immune system starts in jawed fishes.

    So, I have made the usual analysis for the information jump in vertebrates of these three proteins. Here are the results, which are rather impressive, especially for CARD11:

    CARD11: absolute jump in bits: 1280; in bits per aminoacid (bpa): 1.109185

    BCL10: absolute jump in bits: 165.1; in bits per aminoacid (bpa): 0.7085837

    MALT1: absolute jump in bits: 554; in bits per aminoacid (bpa): 0.6723301
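    The bits-per-aminoacid figures above are simply each absolute jump divided by the protein length given earlier in the comment; a minimal check of that arithmetic:

    ```python
    # bpa = absolute information jump (bits) / protein length (AAs),
    # using the jump values and human protein lengths quoted above.
    proteins = {
        "CARD11": (1280.0, 1154),
        "BCL10": (165.1, 233),
        "MALT1": (554.0, 824),
    }

    for name, (jump_bits, length_aa) in proteins.items():
        bpa = jump_bits / length_aa
        print(f"{name}: {bpa:.7f} bits per aminoacid")
    ```

    This reproduces the quoted values (CARD11 ≈ 1.109185, BCL10 ≈ 0.7085837, MALT1 ≈ 0.6723301).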

    I am adding to the OP a graphic that shows the evolutionary history of those three proteins, in terms of human conserved information.

  119. 119
    EugeneS says:

    GP (101)

    “…we should have some evidence of that. But there is none.”
    This is where you lost me. Isn’t what you so painstakingly analyse here and in other OPs something that constitutes the said evidence? Maybe I am wrong and I have missed out part of the conversation. But it is exactly what we observe that strongly suggests design. It is precisely that. All the rest is immaterial. Consequently, it must be the evidence that you are saying does not exist. I hope I am just misinterpreting what you said there.

  120. 120
    gpuccio says:

    EugeneS:

    The statement was:

    ““Is the designer a biological organism? Is the designer a physical entity?”

    I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.”

    What I mean is that the continuing presence of one or more physical designers, with some physical body, should have left some trace, reasonably. A physical designer has to be physically present at all design interventions. And physical agents, usually, leave some trace of themselves. I mean, beyond the design itself.

    Of course the design itself is evidence of a designer. But in the case of a non physical designer, we don’t expect to find further physical evidence, beyond the design itself. In the case of a physical designer, I would expect something, especially considering the many acts of design in natural history.

    This is what I meant.

  121. 121
    PeterA says:

    GP @108:

    “Maybe you can look at this more detailed figure for the different stimuli, receptors and receptor connections to the activation pathway:
    https://rockland-inc.com/nfkb-signaling-pathway.aspx

    Oh, no! Wow!
    OK, you have persuaded me.
    I’m convinced now.

    Thanks!

  122. 122
    EugeneS says:

    GP

    Yes, of course. I agree. I have missed out ‘physical’.

    Maybe it is a distraction from the thread, but anyway. I recall one conversation with a biologist. I had posted something against Darwin’s explanation of why we can’t see another sort of life emerging. Correct me if I am wrong, but my understanding is that, basically, Darwin claimed that organic compounds that would have easily become life are immediately consumed by the already existing life forms. I was saying that this is a rubbishy argument. But according to my interlocutor, it actually wasn’t. My friend said it was extremely difficult to get rid of life in an experimental setting for abiogenesis. In relation to what we are discussing here, this claim effectively means that the existing life allegedly devours any signs of emerging life as soon as they appear. My answer at the time was, why don’t they put their test tubes in an autoclave? He said that this was not so easy as I thought, as getting rid of existing life also destroys the organic chemicals, and defeats the purpose.

    Today, I still strongly believe it is a bad argument but for a different reason, i.e. due to the impossibility of the translation apparatus that relies on a symbolic memory and semiotic closure self-organizing. There is no empirical warrant to back the claim that such self-organization is possible.

    What do you think about Darwin’s argument and, in particular, about the difficulty of creating the right conditions for a clean abiogenesis experiment?

  123. 123
    gpuccio says:

    EugeneS:

    Of course they would never succeed, in an autoclave or elsewhere.

    I suppose that Darwin’s argument was that, in the absence of existing life, the first organic molecules generated (by magic, probably) would have been more stable than what we can expect today. Indeed, today simple organic molecules have very short life in any environment because of existing forms of life.

    The argument is however irrelevant. The simple truth is that simple organic molecules (Darwin was probably thinking of proteins, today they should be RNA to be fashionable) are completely useless to build life of any form.

    Let’s be serious: even if we take all components, membrane, genome, and so on, for example by disrupting bacteria, and put them together in a test tube, we can never build a living cell.

    This is the classic humpty dumpty argument, made here time ago, if I remember well, by Sal Cordova. It remains a formidable argument.

    All reasonings about OOL from inanimate matter are, really, nothing more than fairy tales. They don’t even reach the status of bad scientific theories.

  124. 124
    Silver Asiatic says:

    GPuccio

    Again, thank you for clarifications and even repeating things you stated before. It has been very helpful.
    I am not fully understanding several of your points which I will illustrate below:

    GP Science can investigate anything that produces observable facts. In no way it is limited to “matter”.

    Do you think that science can investigate God?

    And the designer needs not have “created” anything. Design is not creation.

    I believe that design is the ultimate creative act. Design is an action of creation with and for a purpose. It begins as a creative act in a conscious mind – a thought which did not exist before is created for a purpose. This thought is then implemented through various means. But how can there be design without creation? How can a purposeful act occur without it having been created by a mind?

    Not having a physical body does not necessarily mean that an entity is not subject to space and time.

    How are immaterial objects constrained by space and time? What measurements can be performed on immaterial entities?

    Indeed, ID is not evaluating anything about the designer…

    As I quoted you above ” Science can investigate anything that produces observable facts”, why is not ID evaluating the designer?

    The designer designs biological information. Not human consciousness, or any other consciousness,

    What scientific evidence do you have to show that the designer did not design human consciousness? Where do you think human consciousness comes from?

    Not “immaterial algorithms”. design is the configuration of material objects, starting from cosncious representations of the designer.

    Again, an algorithm is a process or set of rules used for calculation or programmatic purposes. A designer can create an immaterial algorithm in an agent that acts on biological entities. There could be no direct evidence of such a thing, but the effects of it can be seen in the development of biological organisms.

  125. 125
    Silver Asiatic says:

    GP

    design is the configuration of material objects

    I mentioned Mozart’s symphonies which were designed in his conscious mind. They weren’t designed on paper or by musical instruments.

    Also, if an immaterial entity created other immaterial entities, you would say “that is not an act of purposeful design”?

  126. 126
    PeterA says:

    GP @106:
    Regarding Fig. 1 in the OP:
    “the figure is there just to give a first general idea of the system”
    I agree. And it does it very well, especially within the context of the fascinating topic of your OP.
    Even without the missing information that you listed:

    Only two kinds of generic signals and receptors are shown. As we have seen, there are a lot of different specific receptors.
    The pathways that connect each specific type of receptor to IKK are not shown (they are shown as simple arrows). But they are very complex and specific. I have given some limited information in the OP and in the discussion.
    Only the canonical pathway is shown.
    Only the most common type of dimer is shown.
    Coactivators and interactions with other pathways are not shown or barely mentioned.
    Of course, lncRNAs are not shown.

    the figure has many details that give a convincing idea of functional complexity.
    Thus, after carefully studying the figure to understand the flow of functional information, and after you reveal how much is still missing, one can only wonder how anyone could believe that such a system could arise through unguided physico-chemical events.

  127. 127
    EugeneS says:

    GP

    Thanks very much. Could you point to the ‘humpty dumpty’ OP you mentioned?

  128. 128
    gpuccio says:

    Silver Asiatic:

    Do you think that science can investigate God?

    As said many times, I don’t discuss God in a scientific context.

    The correct answer is always the same: science can, and must, investigate everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations.

    I believe that design is the ultimate creative act. Design is an action of creation with and for a purpose. It begins as a creative act in a conscious mind – a thought which did not exist before is created for a purpose. This thought is then implemented through various means. But how can there be design without creation? How can a purposeful act occur without it having been created by a mind?

    You are equivocating on the meaning of “creation”. Of course all acts of design are “creative” in a very general sense. But of course, as everyone can understand, that was not the sense I was using. I was clearly speaking of “creation” in the specific philosophical/religious meaning: generating some reality from nothing. Design is not that. In material objects, design gives specific configurations to existing matter.

    I always speak of design according to that definition, that I have given explicitly here:

    https://uncommondescent.com/intelligent-design/defining-design/

    This definition is the only one that is necessary in ID, because ID infers design from the material object.

    You speak of a “creative act in a conscious mind”. Maybe, maybe not. We have no idea of how thoughts arise in a conscious mind. Moreover, as we are not trying to build a theory of the mind, or of consciousness, we are not interested in that.

    The process of design begins when some form, already existing in the consciousness of the designer as a representation, is outputted to a material object. That is the process of design. That is what we want to infer from the material object. It is not creation, only the input of a functional configuration to an object.

    How are immaterial objects constrained by space and time? What measurements can be performed on immaterial entities?

    Energy is not material, yet it exists in space and time. Dark energy is probably not material: indeed, we don’t know what it is. Can you say that it cannot exist in relation to space and time? Strange, because it apparently accelerates the expansion of the universe, and that seems to be in relation, very strongly, with space and time.

    Whether we can measure something or not has nothing to do with the properties of that something. Things don’t wait for our measures to be what they are. Our ability to measure things evolves with our understanding of what things are.

    You quote me saying: “Indeed, ID is not evaluating anything about the designer…” and then you comment:

    As I quoted you above ” Science can investigate anything that produces observable facts”, why is not ID evaluating the designer?

    This is quote mining of the worst kind. The original statement was:

    ” Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.”

    Shame on you.

    What scientific evidence do you have to show that the designer did not design human consciousness? Where do you think human consciousness comes from?

    Again, misinterpretation, maybe intentional. Of course I am speaking of what we can infer according to ID theory. The designer that we infer in ID is the designer of biological information. We infer nothing about the generation of consciousness (I don’t use the term design, because, as I have explained, I speak of design only for material objects). As said, nobody here is trying to build a theory of consciousness. I have already stated clearly that IMO science has no real understanding of what consciousness is, least of all of how it originates. We can treat consciousness as a fact, because it can be directly observed, but we don’t understand what it is.

    Could the designer of biological objects be also the originator of human consciousness? Maybe. Maybe not. I have nothing from which to infer an answer. Certainly not in ID theory. Which is what we are discussing here. And certainly I have no duty to show that the designer did not originate human consciousness, or that he did, because I have made absolutely no inferences about the origin of human consciousness. I have only said that we infer a designer for biological objects, not for human consciousness.

    Again, an algorithm is a process or set of rules used for calculation or programmatic purposes. A designer can create an immaterial algorithm in an agent that acts on biological entities. There could be no direct evidence of such a thing, but the effects of it can be seen in the development of biological organisms.

    Again, everything is possible. I am not interested in what is possible, but in what is supported by facts.

    You use the word “algorithm” to indicate mental contents. I have nothing against that, but it is not the way I use it, and it is of no interest for ID theory.

    Again, ID theory is about inferring a design origin for some material objects. To do that, we are not interested in what happens in the consciousness of the designer; those are issues for a theory of the mind. We only need to know that the form we observe in the object originated from some conscious, intelligent and purposeful agent who inputted that form to the object starting from some conscious representation. If the configuration comes directly from a conscious being, design is proved.

    All this discussion about algorithms is because some people here believe that the designer does not design biological objects directly, but rather designs some other object, probably biological, which then, after some time, designs the new biological objects by algorithmic computation originally programmed by the designer.

    IOWs, this model assumes that the designer designs, let’s call it so, a “biological computer” which then designs (computes) new biological beings.

    I have said many times that I don’t believe in this strange theory, and I have given my reasons to confute it.

    However, in this theory the algorithm is not a conscious agent who designs: it is a biological machine, IOWs an object. That’s why in this discussion I use algorithm to indicate an object that can compute. Again, the algorithm is designed, because it is a configuration given to a biological machine by the designer, a configuration that can make computations.

    If you want to know whether a mental algorithm in a mind is designed, I cannot answer, because I am not discussing a theory of the mind here. Certainly, it is not designed according to my definition, because it is not a material object.

    ID theory is simple, when people don’t try to pretend that it is complicated. We observe some object. We observe the configuration of the object. We ask ourselves if the object is designed: IOWs, did the configuration we observe originate as a conscious representation in a conscious agent, and was it then inputted purposefully into the object? We define an objective property, functional information, linked to some function that can be implemented using the object, and that can be measured. We measure it. If the complexity of the function that can be implemented by the object is great enough, we infer a design origin for the object.

    That’s all.

  129. 129
    gpuccio says:

    EugeneS:

    I remember the argument mentioned by Sal Cordova, but it seems that the original argument was made by Jonathan Wells (or maybe someone else before him).

    Here is an OP by V. J. Torley (the old VJT 🙂 ), defending the argument. It gives a transcript of the argument by Wells.

    https://uncommondescent.com/intelligent-design/putting-humpty-dumpty-back-together-again-why-is-this-a-bad-argument-for-design/

    IMO, the argument is extremely strong. OOL theories imagine that in some way some of the molecules necessary for life originated, and that some life was produced.

    The simple fact is: we cannot produce life in any way, even using all the available molecules and structures that are associated to life on our whole planet.

    The old fact is still a fact: life comes only from life.

    Even when Venter engineers his modified genomes, he must put them in a living cell to make them part of a living being.

    When scientists clone organisms, they must use living cells.

    You cannot make a living cell from inanimate matter, however biologically structured it is.

    And yet these people really believe that natural events did generate living cells, from completely unstructured inanimate matter!

    It is simply folly. I will tell you this: if it were not for the simple ideological necessity that “it must have happened without design, because ours is the only game in town”, no serious scientist would ever consider for a moment any of the current theories for OOL. As I have said, they are not even bad scientific theories. They are mere imagination.

  130. 130
    gpuccio says:

    Silver Asiatic:

    I mentioned Mozart’s symphonies which were designed in his conscious mind. They weren’t designed on paper or by musical instruments.

    No. According to the definitions I have given, and that I always use when discussing ID, Mozart’s symphonies were designed when he put them on paper. Before that, they were conscious representations, and not designed objects. As said, we are not discussing how conscious representations take form in consciousness. In ID we are interested only in the design of objects.

    Also, if an immaterial entity created other immaterial entities, you would say “that is not an act of purposeful design”?

    Again, that would not be design in the sense I have given. Indeed, that problem has nothing to do with ID theory. Immaterial entities do not have a configuration that can be observed, and therefore no functional information can be measured for them. ID theory is not appropriate for immaterial entities. It is about designed objects.

  131. 131
    gpuccio says:

    For all interested:

    About polar bears, and in support of Behe’s ideas:

    Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears

    https://www.cell.com/cell/fulltext/S0092-8674(14)00488-7

    Genes Associated with White Fur

    A white phenotype is usually selected against in natural environments, but is common in the Arctic (e.g., beluga whale, arctic hare, and arctic fox), where it likely confers a selective advantage. A key question in the evolution of polar bears is which gene(s) cause the white coat color phenotype. The white fur is one of the most distinctive features of the species and is caused by a lack of pigment in the hair. We find evidence of strong positive selection in two candidate genes associated with pigmentation, LYST and AIM1 (Table 1). LYST encodes the lysosomal trafficking regulator Lyst. Melanosomes, where melanin production occurs, are lysosome-related organelles and have been implicated in the progression of disease associated with Lyst mutation in mice (Trantow et al., 2010). The types and positions of mutations identified in LYST vary widely, but Lyst mutant phenotypes in cattle, mice, rats, and mink are characterized by hypopigmentation, a melanosome defect characterized by light coat color (Kunieda et al., 1999, Runkel et al., 2006, Gutiérrez-Gil et al., 2007). LYST contains seven polar bear-specific missense substitutions, in contrast to only one in brown bear. One of these, a glutamine to histidine change within a conserved WD40-repeat containing domain, is predicted to significantly affect protein function (Figure 5B, Table S7). Three polar bear changes in LYST are located in proximity to the N-terminal structural domain and map close to human mutations associated with Chediak-Higashi syndrome, a hair and eyes depigmentation disease (Figure 5C). We predict that all these protein-coding changes, possibly aided by regulatory mutations or interactions with other genes, dramatically suppress melanin production and transport, causing the lack of pigment in polar bear fur. 
Variation in expression of the other color-associated gene, AIM1 (absent in melanoma 1), has been associated with tumor suppression in human melanoma (Trent et al., 1990), a malignant tumor of melanocytes that affects melanin pigment production.

    See also comments #75 and #112.

  132. 132
    ET says:

    Again- polar bears do NOT have white fur. That is elementary school level knowledge in Massachusetts. “Lack of pigmentation”? It’s a translucent hollow tube! Luminescence- when sunlight shines on it there is a reaction we call luminescence (another great word for sobriety check points). The skin is black.

    To claim that differential accumulation of genetic accidents, errors and mistakes just happened upon luminescence for polar bears, is extraordinary and without a means to test it. Count the number of specific changes already discussed and compare that to waiting for TWO mutations. You will see there isn’t enough time in the universe for Darwinian processes to pull it off.

  133. 133
    OLV says:

    GP @131:

    About polar bears, and in support of Behe’s ideas:
    Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears

    Here’s another article also mentioning the cute polar bears:

    Genetic diversity of CHC22 clathrin impacts its function in glucose metabolism
    Matteo Fumagalli, Stephane M Camus, Yoan Diekmann, Alice Burke, Marine D Camus, Paul J Norman, Agnel Joseph, Laurent Abi-Rached, Andrea Benazzo, Rita Rasteiro, Iain Mathieson, Maya Topf, Peter Parham, Mark G Thomas, Frances M Brodsky

    eLife 2019;8:e41517 DOI: 10.7554/eLife.41517

    CHC22 clathrin plays a key role in intracellular membrane traffic of the insulin-responsive glucose transporter GLUT4 in humans. We performed population genetic and phylogenetic analyses of the CHC22-encoding CLTCL1 gene, revealing independent gene loss in at least two vertebrate lineages, after arising from gene duplication. All vertebrates retained the paralogous CLTC gene encoding CHC17 clathrin, which mediates endocytosis. For vertebrates retaining CLTCL1, strong evidence for purifying selection supports CHC22 functionality. All human populations maintained two high frequency CLTCL1 allelic variants, encoding either methionine or valine at position 1316. Functional studies indicated that CHC22-V1316, which is more frequent in farming populations than in hunter-gatherers, has different cellular dynamics than M1316-CHC22 and is less effective at controlling GLUT4 membrane traffic, altering its insulin-regulated response. These analyses suggest that ancestral human dietary change influenced selection of allotypes that affect CHC22’s role in metabolism and have potential to differentially influence the human insulin response.

     It is also possible that some forms of polar bear CHC22 are super-active at GLUT4 sequestration, providing a route to maintain high blood glucose, as occurs through other mutations in the cave fish (Riddle et al., 2018).

    Regulators of fundamental membrane traffic pathways have diversified through gene duplication in many species over the timespan of eukaryotic evolution. Retention and loss can, in some cases, be correlated with special requirements resulting from species differentiation

    The genetic diversity that we report here may reflect evolution towards reversing a human tendency to insulin resistance and have relevance to coping with increased carbohydrate in modern diets.

     
    And here’s another one;

    Environmental contaminants modulate the transcriptional activity of polar bear (Ursus maritimus) and human peroxisome proliferator-activated receptor alpha (PPARA)
    Heli Routti, Mari K. Berg, Roger Lille-Langøy, Lene Øygarden, Mikael Harju, Rune Dietz, Christian Sonne & Anders Goksøyr 

    Scientific Reports   volume 9, Article number: 6918 (2019)

    DOI: 10.1038/s41598-019-43337-w

    Peroxisome proliferator-activated receptor alfa (PPARA/NR1C1) is a ligand activated nuclear receptor that is a key regulator of lipid metabolism in tissues with high fatty acid catabolism such as the liver. Here, we cloned PPARA from polar bear liver tissue and studied in vitro transactivation of polar bear and human PPARA by environmental contaminants using a luciferase reporter assay. Six hinge and ligand-binding domain amino acids have been substituted in polar bear PPARA compared to human PPARA. Perfluorocarboxylic acids (PFCA) and perfluorosulfonic acids induced the transcriptional activity of both human and polar bear PPARA. The most abundant PFCA in polar bear tissue, perfluorononanoate, increased polar bear PPARA-mediated luciferase activity to a level comparable to that of the potent PPARA agonist WY-14643 (~8-fold, 25 µM). Several brominated flame retardants were weak agonists of human and polar bear PPARA. While single exposures to polychlorinated biphenyls did not, or only slightly, increase the transcriptional activity of PPARA, a technical mixture of PCBs (Aroclor 1254) strongly induced the transcriptional activity of human (~8-fold) and polar bear PPARA (~22-fold). Polar bear PPARA was both quantitatively and qualitatively more susceptible than human PPARA to transactivation by less lipophilic compounds.

    it should be kept in mind that polar bear metabolism is highly adapted to cold climate and feeding and fasting cycles, and direct comparison of physiological functions between polar bears and humans is thus challenging.

     
    Here’s an article about the brown bears that mentions the polar bear cousins too:

    Evolutionary history and palaeoecology of brown bear in North-East Siberia re-examined using ancient DNA and stable isotopes from skeletal remains
    Alba Rey-Iglesia, Ana García-Vázquez, Eve C. Treadaway, Johannes van der Plicht, Gennady F. Baryshnikov, Paul Szpak, Hervé Bocherens, Gennady G. Boeskorov & Eline D. Lorenzen 

    Scientific Reports   volume 9, Article number: 4462 (2019)

    DOI: 10.1038/s41598-019-40168-7

    The mtDNA of extant polar bears (Ursus maritimus), clade 2b, is embedded within brown bears and is most closely related to clade 2a, the ABC brown bears18.

     

  134. 134
    jawa says:

    Is it possible that the polar bears were affected by drinking so much Coca-Cola in TV commercials?
    🙂

  135. 135
    PeterA says:

    GP @129:

    Thanks for referencing the discussion about the Humpty Dumpty argument. Very interesting indeed.

  136. 136
    jawa says:

    If all of king’s horses and all of king’s men couldn’t put Humpty together again, who else can do it?
    🙂

  137. 137
    pw says:

    GP,

    I appreciate your answers at 107.
    Please, let me ask you another question:
    Why is there a drop in the black line in the last graphic in your OP? What does that mean? Loss of function?

  138. 138
    Silver Asiatic says:

    Gpuccio

    I responded to your statement:

    Science can investigate anything that produces observable facts.

    You then said:

    The correct answer is always the same: science can, and must, investigate everything that can be observed in reality.

    Those two statements actually conflict with each other. You ask me to assume your meaning of various terms (as if the meaning is obvious) but in this case, I assume that your first statement is incorrect and you corrected it with the second.

    You are equivocating on the meaning of “creation”. Of course all acts of design are “creative” in a very general sense.

    I was using the general and ordinary meaning of the term “design”. Whatever is designed, even if using previously existing material, is an act of creation. If that which at one moment was inanimate matter, suddenly, by an act of an intelligent agent becomes a living organism – that is a creation. The designer created something that did not exist before. You limited the term creation to only those acts which are ex nihilo but that’s an artificial limit.

    ID science is not limited to the study of biology. ID also looks at the origin of the universe. In that case, ID is making a claim about the origin of time, space and matter. It is not limited to reconfigurations of existing matter.

    You quote me saying: “Indeed, ID is not evaluating anything about the designer…” and then you comment:
    As I quoted you above ” Science can investigate anything that produces observable facts”, why is not ID evaluating the designer?
    This is quote mining of the worst kind. The original statement was:
    ” Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.”
    Shame on you.

    You’re trying to blame me for something here, but what you quoted did not answer the question. You avoided answering it when I asked about God also. You say that science can investigate anything that produces observable facts. You explain that by saying science can only make inferences from observable effects. As I said before, those two ideas contradict. In the first (bolded) you say that science can investigate “the producer” of the facts. You then shame me for asking why ID cannot investigate the designer by saying that ID can investigate the observable effects. As I said above, you corrected your first statement with the second – but you should not have blamed me for something that merely pointed to the conflict here.

    I’m not trying to trick or trap you or win anything. You make a statement that contradicts everything I had known about ID, as well as what contradicts science itself (that science can investigate anything that produces observations). I’m not really worried about your personal views on these things, I was just interested in what seemed to be a confused approach to the issue.

    The designer that we infer in ID is the designer of biological information.

    As above, the designer we refer to in ID is the designer of the universe, not merely of biological information. We infer something about the generation of consciousness. In fact, the immaterial quality of consciousness is evidence in support of ID. We look for the origin of that which we can observe.

    We infer nothing about the generation of consciousness (I don’t use the term design, because as I have explained I speak of design only for materila objects). As said, nobody here is trying to build a theory of consciousness.

    Mainstream evolution already assumes that consciousness is an evolutionary development. I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design. Consciousness separates humans from non-human animals. Evolutionary theory offers an explanation, and ID (not your version of ID but others) offers an opposing one.

  139. 139
    OLV says:

    More on the cute polar bears:

    Polar bear evolution is marked by rapid changes in gene copy number in response to dietary shift
    David C. Rinker, Natalya K. Specian, Shu Zhao, and John G. Gibbons

    PNAS July 2, 2019 116 (27) 13446-13451;  
    DOI: 10.1073/pnas.1901093116

     
    Copy number variation describes the degree to which contiguous genomic regions differ in their number of copies among individuals. Copy number variable regions can drive ecological adaptation, particularly when they contain genes. Here, we compare differences in gene copy numbers among 17 polar bear and 9 brown bear individuals to evaluate the impact of copy number variation on polar bear evolution. Polar bears and brown bears are ideal species for such an analysis as they are closely related, yet ecologically distinct. Our analysis identified variation in copy number for genes linked to dietary and ecological requirements of the bear species. These results suggest that genic copy number variation has played an important role in polar bear adaptation to the Arctic.

    Polar bear (Ursus maritimus) and brown bear (Ursus arctos) are recently diverged species that inhabit vastly differing habitats. Thus, analysis of the polar bear and brown bear genomes represents a unique opportunity to investigate the evolutionary mechanisms and genetic underpinnings of rapid ecological adaptation in mammals. Copy number (CN) differences in genomic regions between closely related species can underlie adaptive phenotypes and this form of genetic variation has not been explored in the context of polar bear evolution. Here, we analyzed the CN profiles of 17 polar bears, 9 brown bears, and 2 black bears (Ursus americanus). We identified an average of 318 genes per individual that showed evidence of CN variation (CNV). Nearly 200 genes displayed species-specific CN differences between polar bear and brown bear species. Principal component analysis of gene CN provides strong evidence that CNV evolved rapidly in the polar bear lineage and mainly resulted in CN loss. Olfactory receptors composed 47% of CN differentiated genes, with the majority of these genes being at lower CN in the polar bear. Additionally, we found significantly fewer copies of several genes involved in fatty acid metabolism as well as AMY1B, the salivary amylase-encoding gene in the polar bear. These results suggest that natural selection shaped patterns of CNV in response to the transition from an omnivorous to primarily carnivorous diet during polar bear evolution. Our analyses of CNV shed light on the genomic underpinnings of ecological adaptation during polar bear evolution.

  140. 140
    gpuccio says:

    ET:

    “Again- polar bears do NOT have white fur. That is elementary school level knowledge in Massachusetts.”

    OK, we have no polar bears here in Italy, so I cannot share your expertise! 🙂

    So, I read a little about the issue.

    Polar bear’s fur is hollow and lacks any pigment. Indeed, it is rather transparent. The white color is due to optical effects. And the skin is black, as you say.

    Brown bears have fur that is solid and pigmented.

    OK, what does that mean?

    First of all, let’s say that the fact that the fur is not really white is not important in relation to the supposed selection of white in polar animals: indeed, polar bears appear white, so for the purpose of the supposed positive selection there is no real difference.

    But that is not the real point, I would say.

    The real point is: what is the mechanism of the divergence between brown bears and polar bears? The paper I mentioned puts the split at about 500,000 years ago, which is not much. Some give a few million years. Whatever the case, it is certainly a rather recent event in evolutionary history.

    So, can the divergence be explained by neo-darwinian mechanisms, or is it the result of design? Or of some biological algorithm embedded in the common ancestor?

    The paper I mentioned of course has a neo-darwinian answer, but that could hardly be different.

    Behe thinks that this can be a case of darwinian “devolution”: differentiation through loss of function which gives some environmental advantage.

    You are definitely in favor of design (or an adaptation algorithm, I am not sure).

    Who is right?

    I think this is a case that shows clearly how ID theory is necessary to give good answers to that kind of problems.

    IOWs, we can answer only if we can evaluate the functional complexity of the divergence.

    The problem is that I cannot find any appropriate data, in all the sources that have been mentioned or that I could find in my brief search, to do that. Why? Because nobody seems to know the molecular basis for the difference in fur structure and pigmentation. And it is not completely clear how functionally important the polar bear fur structure is, even if it is generally believed that it is under positive selection, and therefore somehow functional in the appropriate environment.

    If you have some better data, please let me know.

    Of course, fur is not the only difference, but for the moment let’s focus on that.

    So, from an ID point of view, we have different possible scenarios, if we could measure the functional information behind the difference in fur structure and pigmentation.

    To safely infer design according to the classic procedure, we need some function that implies more than 500 bits of functional information.

    However, as we are dealing here with a population (bears) rather limited in number and slow-reproducing, and with a rather short time window, I would be more than happy with 150 bits of functional information to infer design in this case.

    The genomic differences highlighted in the paper I quoted seem to be rather simple. Most of them can be interpreted as single amino acid mutations with loss of function, perfectly in the range of neo-darwinism and of Behe’s model. But I have no idea if those simple genetic differences are enough to explain what we observe. The lack of pigmentation is probably easier to explain. For the hollow structure, I have no idea.

    The problem is: we have to know the molecular basis, otherwise no computation of functional information can be made. Because, as we know, there are sometimes big morphological differences that have a very simple biological explanation, and vice versa. So again, I must ask: have you any data about the molecular foundation of the differences?

    In the meantime, I would say that the scenarios are:

    1) The differences can be explained by one or more independent mutations affecting functions already present. Or, at most, 2 or 3 coordinated mutations where each one affects the same function in a relevant way, so that NS could intervene at each step (IOWs a simple tweaking pathway of the loss of function, as we see for example in antibiotic resistance). These scenarios are in the range of what RV + NS could in principle do, maybe even in a population like bears. In this case, I would accept a neo-darwinian mechanism as a reasonable explanation, until different data are discovered.

    2) The differences imply a gain in functional information of 150+ bits. We can safely infer design. Polar bears were designed, some time about 400000 years ago, or a little more.

    3) The differences imply something between 12 bits (3 AAs) and 150 bits. In this case, it would be wise to remain cautious. It is not the best scenario to infer design, even if it is rather unlikely for a neo-darwinian mechanism in that kind of population. Maybe some simple active adaptation algorithm embedded in brown bears could be considered. But such an algorithm should be in some way detailed and shown to be there, not only imagined.

    IMO, this is how ID theory works. Through facts, and objective measurements of functional information. There is no other way.
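    The bit arithmetic behind the three scenarios above can be sketched in a few lines of Python. This is purely illustrative: it assumes the common simplification that a fully constrained amino acid position contributes log2(20) ≈ 4.3 bits (which is how “3 AAs” maps onto roughly 12 bits), and the 150-bit and 12-bit thresholds are the ones proposed in this comment, not fixed constants of ID theory.

    ```python
    import math

    # Simplifying assumption: a fully constrained position must be one
    # specific residue out of the 20 amino acids, contributing log2(20) bits.
    BITS_PER_AA = math.log2(20)  # ~4.32 bits

    def functional_bits(n_constrained_positions: int) -> float:
        """Approximate functional information for n fully constrained AAs."""
        return n_constrained_positions * BITS_PER_AA

    def classify(bits: float) -> str:
        """Map a bit value onto the three scenarios discussed above."""
        if bits >= 150:   # relaxed threshold for a small, slow-reproducing population
            return "design inference (scenario 2)"
        if bits >= 12:    # roughly 3 coordinated AAs, below the design threshold
            return "uncertain zone (scenario 3)"
        return "within reach of RV + NS (scenario 1)"
    ```

    Under these assumptions, 3 coordinated amino acids give about 13 bits, landing in the cautious middle zone, while anything requiring ~35 or more fully constrained positions would clear the 150-bit threshold.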

    Just a final note about the “waiting for two mutations” paper. That is of course a very interesting article. But it is about two coordinated mutations needed to generate a new function, none of which individually confers any advantage. IOWs, this is more or less the scenario of chloroquine resistance, again linked to Behe.

    I agree that such a scenario, even if possible, is extremely unlikely in a population like bears. But the simple fact is that almost all the variations considered by Behe in his reasonings about devolution are very simple. One mutation is often enough to lose a function. One frameshift mutation can inactivate a whole protein, losing maybe thousands of bits of functional information. And we can have a lot of such individual independent mutations in a population like bears in 400000 years.

    So, unless we have better data on the functional information involved in the transition to polar bears, I suspend any judgement.

  141. 141
    gpuccio says:

    Jawa at #134:

    “Is it possible that the polar bears were affected by drinking so much Coca-Cola in TV commercials?”

    Absolutely!

    Let’s wait: if I develop translucent fur in the next few years, that will be a strong argument in favour of your hypothesis! 🙂

  142. 142
    ET says:

    1- Bears with actual white fur exist

    2- There are grizzly (brown) bears with actual white fur. They are not polar bears.

    3- I am looking at the number of specific mutations it would take to get a polar bear from a common ancestor with brown bears. That would tell me if blind and mindless processes are up to the task. The paper gpuccio provided gives us a hint and it already goes against blind and mindless processes.

  143. 143
    gpuccio says:

    Pw at #137:

    “Why is there a drop in the black line in the last graphic in your OP? What does that mean? Loss of function?”

    You mean the small drop in amphibians in the blue line (BCL10)?

    Yes, that kind of pattern can be observed often enough, usually in one or two classes.

    The strict meaning is that the best homology hit in that class was lower than in the older class.

    Here the effect is small, but sometimes we can see a whole unexpected drop in one class of organisms, while the general pattern is completely consistent in all the other ones.

    Technically, we are speaking of human conserved information. That’s what is measured here.

    Probably, it is a loss of function in relation to that protein in that class. That is perfectly compatible with Behe’s concept of devolution. That form of the protein sometimes seems to be completely lacking in one class.

    In some cases, it could also be a technical error in the databases, or in the BLAST algorithm. We can expect that; it happens. Some of the classes I have considered are more represented in the databases, some less. However, if one protein lacks any relevant homology in one class in my graphic, that means that none of the organisms in that class showed any relevant homology, because I always consider the best hit among all the proteins of all the organisms of that class included in the NCBI databases.
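    The pattern described here — for each taxonomic class, take the best BLAST hit against the human protein, then look for a class whose score falls below what older classes already reached — can be sketched minimally as follows. The bitscore values in the example are hypothetical, chosen only to illustrate the kind of “amphibian drop” mentioned above, not real data from these graphics.

    ```python
    # Flag "drops": classes whose best-hit bitscore against the human protein
    # falls below the best score already seen in an evolutionarily older class.
    def find_drops(scores_by_class):
        """scores_by_class: list of (class_name, best_hit_bitscore) pairs,
        ordered from evolutionarily older to younger classes."""
        drops = []
        running_max = float("-inf")
        for name, score in scores_by_class:
            if score < running_max:
                drops.append(name)
            running_max = max(running_max, score)
        return drops

    # Hypothetical illustration (invented bitscores, not real BLAST output):
    example = [("cartilaginous fish", 300), ("bony fish", 420),
               ("amphibians", 380), ("reptiles", 450), ("mammals", 900)]
    ```

    On this invented series, `find_drops(example)` would flag only the amphibians, since their best hit falls below the level already reached in bony fish while every other class continues the rising pattern.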

  144. 144
    gpuccio says:

    ET at #142:

    Thank you for the further clarifications about bears. You are really an expert! 🙂

    However, it is not really the number of specific mutations that counts. It is the number of coordinated mutations necessary to get a function, none of which has any functional effect alone. There is a big difference. I have tried to explain that at #140.

  145. 145
    ET says:

    Thank you, gpuccio. We have a little impasse as I think it is the number of specific mutations and the functions are all the physiological changes afforded by them.

    In his book “Human Errors”, Nathan Lents tells us that it is highly unlikely that one locus will receive another mutation after already getting mutated. And yet it has the same probability for change as any other site. So it looks like evolutionists are talking about the probability of a specific mutation happening regardless of function.

    As for bears- living in Massachusetts I run into black bears all of the time. They come up on my deck at night. I have photos of them in my yard. And being a dog-person I have a keen interest. That’s all- I think they are really cool animals.

  146. 146
    gpuccio says:

    ET:

    Thanks to you! 🙂

    I suspected you had some special connection with bears! I am more a cat guy, but I do understand love and interest for all animals. 🙂

  147. 147
    jawa says:

    ET,
    The Massachusetts bears may be cool animals, but didn’t get hired for Coca-Cola TV ads like their polar cousins. 🙂

  148. 148
    gpuccio says:

    Silver Asiatic at #138:

    I responded to your statement:

    Science can investigate anything that produces observable facts.

    You then said:

    The correct answer is always the same: science can, and must, investigate everything that can be observed in reality.

    Those two statements actually conflict with each other. You ask me to assume your meaning of various terms (as if the meaning is obvious) but in this case, I assume that your first statement is incorrect and you corrected it with the second.

    Oh, good heavens! That’s what happens when someone (you) discusses not in order to understand and be understood, but just to generate confusion. You are of course equivocating on the word “investigate”.

    Maybe the second form is more precise, but the meaning is the same.

    However, let’s clarify, for those who can be confused by your playing with words.

    Science always starts from facts: what can be observed.

    But science tries to explain facts by building theories (maps of reality). Those theories need not include only what is observable. They just need to explain observed facts. For example, most scientific theories are based on mathematics, which is not something observable.

    Another example. Most theories in empirical science are about possible relationships of cause and effect. But the relationship of cause and effect is not something that can be observed.

    My error was probably to use the word “investigate”, which was ambiguous enough to allow you to play with it.

    OK, let’s say that science can build hypotheses only to explain observed facts, but of course those hypotheses, those maps of reality, can include any cognitive content, if it is appropriate to the explanation.

    The word “evaluate” can refer of course both to the gathering of facts and to the building of theories.

    My original statement was:

    ” Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.”

    Wasn’t it clear enough for you?

    I was using the general and ordinary meaning of the term “design”. Whatever is designed, even if using previously existing material, is an act of creation. If that which at one moment was inanimate matter, suddenly, by an act of an intelligent agent becomes a living organism – that is a creation. The designer created something that did not exist before. You limited the term creation to only those acts which are ex nihilo but that’s an artificial limit.

    The problem here is not the meaning of the word design, but the meaning of the word creation. The word creation here, in this blog and I would say in the whole debate about ID and more, is used in the sense of “creation ex nihilo”, something that only God can do. Why do you think that our adversaries (maybe you too) call us “creationists” and not “designists”?

    It’s strange that someone like you, who has been coming here for some time, is not aware of that, and suddenly interprets “creation” in this debate as a statement about a movie or a book.

    However, the problem is not the meaning of words. For that, it’s enough to clarify what we mean. Clearly, and without word plays.

    More in next post.

  149. 149
    jawa says:

    GP @141:

    But even in the case where you would develop translucent fur, I hope you’ll keep writing OPs for us here, right?

    🙂

  150. 150
    john_a_designer says:

    Gpuccio and Silver Asiatic,

    A few of my thoughts about the relationship between science, philosophy, theology and religion.

    Creationism is based on a religious text– the Jewish-Christian scriptures. ID, on the other hand, is at the very least a philosophical inference from the study of nature itself.

    Even materialists recognize the possibility that nature is designed. Richard Dawkins, for example, has argued that “Biology is the study of complicated things that give the appearance of having been designed for a purpose.”

    He then goes on to argue that it is not designed.

    So what is Dawkins’ argument? Let’s try out his quote as the main premise in a basic logical argument.

    Premise 1: “Biology is the study of complicated things that give the appearance of having been designed for a purpose.”

    Premise 2: Dawkins (a trained zoologist) believes that “design” is only an appearance.

    Conclusion: Therefore, nothing we study in the biosphere is designed.

    The conclusion is based on what? Are Dawkins’ beliefs and opinions self-evidently true? Is the science settled as he suggests? If the answer to those two questions is no (Dawkins’ arguments, BTW, are by no means conclusive), then what is the reason for not looking at living systems that have “the appearance of having been designed for a purpose”? Couldn’t they really have been designed for a purpose? That is a basic justification for ID. It begins from a philosophically neutral position (that some things could really be designed), whereas a committed Darwinian like Dawkins, along with other “committed” materialists, begins with the logically fallacious assumption that design is impossible.

  151. 151
    gpuccio says:

    Silver Asiatic at #138:

    ID science is not limited to the study of biology. ID also looks at the origin of the universe. In that case, ID is making a claim about the origin of time, space and matter. It is not limited to reconfigurations of existing matter.

    That’s correct. The cosmological argument, especially in the form of fine tuning, is certainly part of the ID debate.

    But here I have never discussed the cosmological argument in detail. I think it is a very good argument, but many times I have said that it is different from the biological argument, because it has, inevitably, a more philosophical aspect and implication.

    I have always discussed the biological argument of ID here, and it is also the main object of discussion, I believe, since the ID movement started. Dembski, Behe, Meyer, Abel, Berlinski and others usually refer mainly to the biological argument. So I apologize if that created some confusion: all that I say about ID refers to the biological argument. And biological design always happens in space and time.

    You’re trying to blame me for something here, but what you quoted did not answer the question. You avoided answering it when I asked about God also. You say that science can investigate anything that produces observable facts. You explain that by saying science can only make inferences from observable effects. As I said before, those two ideas contradict. In the first (bolded) you say that science can investigate “the producer” of the facts. You then shame me for asking why ID cannot investigate the designer by saying that ID can investigate the observable effects. As I said above, you corrected your first statement with the second – but you should not have blamed me for something that merely pointed to the conflict here.

    As I have explained, there is no conflict at all. Of course the word “investigate” refers both to the analysis of facts and to the building of hypotheses. Every action of the mind in relation to science is an “investigation” and an “evaluation”, IOWs a cognitive activity in search of some truth about reality.

    I think I have been clear enough at #128:

    “The correct answer is always the same: science can, and must, investigate everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations.”

    That should be clear, even to you. There are no limitations. If a concept of god were necessary to build a better scientific model of reality that explains observed things, there is no problem: god can be included in that model.

    But I refuse, and always will refuse, in a scientific discussion, to start from some philosophical or religious idea of God and allow, without any conscious resistance on my part, that idea to influence my scientific reasoning. Science should work, or try to work, independently of any pre-conceived worldview. If scientific reasoning leads to the inclusion, or the exclusion, of God in a good map of reality, scientific reasoning should follow that line of thought and impartially test it. The opposite is not good, IMO.

    I hope that’s clear enough.

    I’m not trying to trick or trap you or win anything. You make a statement that contradicts everything I had known about ID, as well as what contradicts science itself (that science can investigate anything that produces observations). I’m not really worried about your personal views on these things, I was just interested in what seemed to be a confused approach to the issue.

    Neither am I. I am trying to clarify. When I don’t understand well what my interlocutor is saying, I ask. When they ask me, I answer. That’s the way.

    It’s strange that my statements contradict everything you have known of ID. My application of the ID procedure for design inference is very standard, maybe with some more explicit definition. About God, an issue that I never discuss here for the reasons I have given, it is rather clear that all the official ID movement unanimously states that the design inference from biology tells nothing about God. Indeed, ID defenders are usually reluctant to tell anything about the biological designer.

    I want to clarify well my position about that, even if I have been explicit many times here.

    1) I absolutely agree with the idea that there is no need to say anything about the designer to make a valid design inference. This is a pillar of ID thought, and it is perfectly correct. I often say that the designer can only be described as some conscious, intelligent and purposeful agent. But that is implicit in the definition of design; it is not in any way something we infer about any specific designer.

    2) That said, I have always been available here, maybe more than other ID defenders, to make reasonable hypotheses about the biological designer in the measure that those hypotheses can reasonably be derived from known facts. That’s what I have done at #100 and #101, trying to answer a number of questions that you had asked. I know very well that trying to reason scientifically about those issues is always a sensitive matter, both for those in my field and for those in the other. Or maybe just in-between. But I do believe that science must pursue all possible avenues of thought, provided that we always start from observable facts and are honest in building our theories.

    Knowing that, I have also added, at the end of post #101:

    “That’s the best I can do to answer your questions. Just a note: my answers here are highly tentative, but they are strictly empirical. They do not derive from any theological agenda. These are the ideas that, in my opinion, are more reasonable to explain the facts as we know them. Right or wrong that they may be, this is the spirit in which I express them.”

    I can only repeat my statement: That’s the best I can do to answer your questions.

    More in next post.

  152. 152
    Silver Asiatic says:

    GP

    My error was probably to use the word “investigate”, which was ambiguous enough to allow you to play with it.

    I wasn’t “playing” with it. I was helping you clarify your statement. I’m not trying to say gotcha. I sincerely thought you believed that science could investigate (directly evaluate, measure, analyze) anything (like God) that produces observable facts.
    I kept in mind that you said that science is not limited by matter. I’d conclude from that a belief that science can investigate (evaluate, analyze, measure, observe, describe) immaterial entities. You cited a philosophy of science to support that view. How am I supposed to know what you are thinking of? I asked you if science could “investigate” God, but you didn’t want to answer that.

    Again, normally IDists would not say that science can Directly investigate, evaluate, analyze, measure or describe immaterial entities. You seem to disagree with that.

    Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.

    Evaluation is not the gathering of facts. Collecting facts comes from observation, measurement, or investigation. Evaluation can create some facts (such as logical conclusions) but in science it all must start with observation. After that, we can evaluate. To infer is to draw a logical conclusion from observations and evaluation.

    As I have heard other ID theorists state, ID cannot observe anything about an immaterial designer or designers. I think you disagree with this. The only thing ID attempts to do is show that there is evidence of Intelligence at work. The effects that we observe in nature could have been produced by millions of designers, each one of which has less intelligence than a human being, but collectively create design in nature. If you are speaking about a designer that exists outside of space and time, then we do not have any experience with that.
    We can observe various effects, but not the entity itself.
    It seemed that you disagree with this and believe instead that science can directly observe an immaterial designer (or any immaterial entity) that produces effects in reality.

  153. 153
    gpuccio says:

    Silver Asiatic at #138:

    Let’s see your last statements.

    As above, the designer we refer to in ID is the designer of the universe, not merely of biological information.

    That’s not correct. As said, the inference of a designer for the universe, and the inference of a biological designer are both part of ID, but they are different and use completely different observed facts. Therefore, even if both are correct (which I do believe), there is no need that the designer of the universe is the same designer as the designer of biological information. I don’t follow your logic.

    We infer something about the generation of consciousness.

    ??? Again, I can’t follow you. Who is “we”? I am not aware that ID, especially in its biological form, but probably also in the cosmological form, is inferring anything about “the generation of consciousness”. Why do you say that?

    In fact, the immaterial quality of consciousness is evidence in support of ID.

    No. Big epistemological errors here. Consciousness is a fact, because we can directly observe it. Being a fact, anyone can use its existence as evidence for what one likes.

    But “the immaterial quality of consciousness” is a theory, not a fact. It’s a theory that I accept in my worldview and philosophy, but I would not say that we have incontrovertible scientific evidence for it. Maybe strong scientific evidence, at best. But the important point is: a theory is not a fact. It is never evidence of anything. A theory, however good, needs the support of facts as evidence. It is not evidence for other theories. At most, it is more or less compatible with them.

    We look for the origin of that which we can observe.

    Correct, and as consciousness can be observed, it is perfectly reasonable to look for some scientific theory that explains its origin. But that theory is not ID. As I have said, ID is not a theory about the origin of consciousness. It is a theory that says that conscious agents are the origin of designed objects. I believe that you can see the difference.

    Mainstream evolution already assumes that consciousness is an evolutionary development.

    Mainstream evolution assumes a lot of things. Most of them are wrong. And so?

    I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design.

    Who? Where? As far as I know, complex specified information (or complex functional information) in objects has always been considered the mark of design. Dembski, Behe, Abel, Meyer, Berlinski, and so on.

    Consciousness separates humans from non-human animals.

    ??? Why do you say that? I believe that a cat or a dog are conscious. And I think that most ID thinkers would agree.

    Ask ET about bears! 🙂

    Evolutionary theory offers an explanation, and ID (not your version of ID but others) offers an opposing one.

    An explanation for what? For the origin of consciousness? But what ID sources have you been perusing?

    One of the most famous ID icons is the bacterial flagellum, since Behe used it to explain the concept of irreducible complexity (a concept linked to functional complexity). Is that an explanation of human consciousness? I can’t see how.

    Meyer has written a whole book about OOL and a whole book about the Cambrian explosion. Are those theories about the origin of human consciousness?

    Of course ID thinkers certainly believe that some special human functions, like reason, are linked to the specific design of humans. But it is equally true that the special functions of bacteria (like the CRISPR system) are certainly linked to the specific design of bacteria. The design inference is perfectly valid in both cases.

    But consciousness is not “a function”. It is much more. It is a component of reality that we cannot in any way explain by objective configurations of external things. ID is not a theory of consciousness.

  154. 154
    gpuccio says:

    Jawa at #149:

    Maybe translucent OPs. 🙂

  155. 155
    Silver Asiatic says:

    JAD

    ID, on the other hand, is at the very least a philosophical inference from the study of nature itself.

    It’s a complicated issue and I can see where you are going with this. At the same time, I think many prominent IDists will say that ID is not a philosophical inference. It’s a scientific inference from what science already knows about the power of intelligence. So, something is observed that appears to be the product of intelligent design, then science evaluates the probability that it came from natural causes. If that probability is too remote, intelligent design becomes the best answer since we know that intelligence can design things like that which has been observed.

    On the other hand, with your view, there are different philosophical starting points for both ID and Dawkins. So, depending on what we mean it may be correct to say that ID is really a philosophical inference. It’s a different philosophy of science than that of Dawkins. I think Dembski and Meyer would disagree with this. They have attempted to show that ID uses exactly the same science as Dawkins does.

  156. 156
    gpuccio says:

    John_a_designer at #150:

    I agree with what you say. I just want to clarify that:

    1) IMO Dawkins’ biological arguments are very bad, but at least they are a good incarnation of true neo-darwinism, therefore easy to confute. In that sense, he is better than many post-post-neo-darwinists, whose thoughts are so ethereal that you cannot even catch them! 🙂

    2) On the contrary, Dawkins’ philosophical arguments are arrogant, superficial and ignorant. Unbearable. He should stick to being a bad thinker about biology.

    3) To be fair to Dawkins, I don’t think that he assumes that “design is impossible”. On the contrary, he is one of the few who admit that design could be a scientific explanation. He just does not accept it as a valid scientific explanation. That is epistemologically correct, even if of course completely wrong in the essence.

  157. 157
    Silver Asiatic says:

    GP

    ??? Again, I can’t follow you. Who is “we”? I am not aware that ID, especially in its biological form, but probably also in the cosmological form, is inferring anything about “the generation of consciousness”. Why do you say that?

    Your use of multiple question-marks and the personal digs (“even you can understand”) indicate to me that this conversation is getting too heated. You apologized previously, so thank you. I’ll also apologize for the tone of my remarks.

    You asked about ID and consciousness:

    Yet the adequacy of matter to generate agency (or apparent agency) is fundamental to both the problem of consciousness and the problem of the origins of biological complexity. If immaterial explanations are necessary to explain the agency inherent to the mind, then the view that immaterial explanations are necessary to explain the agency apparent in living things gains considerable traction.
    https://evolutionnews.org/2008/12/consciousness_and_intelligent/

    Michael Egnor writes about consciousness as evidence supporting ID. I think here, BornAgain77 often posts resources that support this concept. I understand that your interest is in biological ID, and therefore limited to biological designer or designers.

    You answered my questions adequately. Again, I appreciate your comments and I apologize for any misunderstandings that may have arisen in this conversation.

  158. 158
    jawa says:

    Richard Dawkins’ books should be in the “cheap philosophy” section of bookstores. But instead they have them in the Science section.
    Especially after Professor Denis Noble has discredited them. Bizarre.

  159. 159
    gpuccio says:

    Silver Asiatic at #152:

    I wasn’t “playing” with it. I was helping you clarify your statement.

    Well, I hope I have clarified it. Thank you for the help.

    Again, normally IDists would not say that science can Directly investigate, evaluate, analyze, measure or describe immaterial entities. You seem to disagree with that.

    Well, it seems that I have not clarified enough. Please, read again what I have written. Here are some more clues:

    1) “investigate, evaluate, analyze, measure or describe” are probably too many different words. I quote myself:

    “But science tries to explain facts by building theories (maps of reality). Those theories need not include only what is observable. They just need to explain observed facts. For example, most scientific theories are based on mathematics, which is not something observable.

    Another example. Most theories in empirical science are about possible relationships of cause and effect. But the relationship of cause and effect is not something that can be observed.

    My error was probably to use the word “investigate”, which was ambiguous enough to allow you to play with it.”

    So, again. Science starts with facts: what can be observed. “Measures” are only made on what can be observed. I suppose that all your fancy words can apply to our interaction with facts:

    – When we gather facts and observe their properties, it can be said, I suppose, that we are “investigating” facts, and “analyzing” them. And “evaluating” them, or “describing” them. And of course taking measures is part of observing facts.

    – When we build theories to explain observed facts, not all those terms apply. For example, let’s say that we hypothesize a cause and effect relationship. That is part of our theory, but we don’t take measures of the cause-effect relationship. At most, we infer it from the measures we have taken of facts. But in a wide sense building a theory can be considered an evaluation, certainly it is a form of investigation.

    I have said clearly that we can use any possible concept in our theories, provided that the purpose is to explain facts. We use the cause-effect relationship, we use complex numbers in quantum mechanics, we can in principle use the concept of God, if useful. Or of immaterial entities. That does not mean that we can measure those things, or have further information about them except for what can be reasonably inferred from facts.

    That should be clear, but somehow I will not be surprised if, again, you don’t understand.

    Evaluation is not the gathering of facts. Collecting facts comes from observation, measurement, or investigation. Evaluation can create some facts (such as logical conclusions) but in science it all must start with observation. After that, we can evaluate. To infer is to draw a logical conclusion from observations and evaluation.

    As you like. As I said, it’s not a problem about words. If you want to limit “evaluation” in some way that is not very clear to me, be my guest. I will simply avoid the word with you.

    But please, note that logical conclusions are not facts. If you insist on that kind of epistemology, we cannot really communicate.

    As I have heard other ID theorists state, ID cannot observe anything about an immaterial designer or designers. I think you disagree with this.

    No. Why should I? Of course if a thing is immaterial it cannot be “observed”. The only exception is our personal consciousness, that each of us observes directly, intuitively.

    I have only said that we can use the concept of immaterial entities in our theories, and that we can make inferences about the designer from observed facts, be he material or immaterial.

    The only thing ID attempts to do is show that there is evidence of Intelligence at work.

    Of intelligent designers.

    The effects that we observe in nature could have been produced by millions of designers, each one of which has less intelligence than a human being, but collectively create design in nature.

    I absolutely disagree. ATP synthase could never have been designed by a crowd of stupid designers. It’s the first time I hear such a silly idea.

    If you are speaking about a designer that exists outside of space and time, then we do not have any experience with that.

    I have never said that. I have said many times that the designer acts in space and time. Where he exists, I really don’t know. Have you some information about that?

    We can observe various effects, but not the entity itself.

    That’s right. Like dark energy or dark matter. As for that, we cannot even observe conscious representations in anyone else except ourselves, but still we very much base our science and map of reality on their effects and the inference that they exist.

    It seemed that you disagree with this and believe instead that science can directly observe an immaterial designer (or any immaterial entity) that produces effects in reality.

    This is only your unwarranted misinterpretation. I have said many times that science can directly observe some effects and infer a designer, maybe immaterial. It’s exactly the other way round.

  160. 160
    gpuccio says:

    Silver Asiatic at #157:

    OK, I apologize too. Multiple question marks are not intended as an offense, only as an expression of true amazement. Some other statements may have been a little more “heated”, as you say. Let’s try to be more detached. 🙂

    I have just finished commenting on your statements. Please, forgive any possible question marks or tones. My purpose is always, however, to clarify.

    I am afraid that Egnor and BA are not exactly my main reference for ID theory. I always quote my main references:

    Dembski (with whom, however, I have sometimes a few problems, but whose genius and importance for ID theory cannot be overestimated)

    Behe, with whom I agree (almost) always.

    Abel, who has given a few precious intuitions, at least to me.

    Berlinski, who has entertained me a lot with creative and funny thoughts.

    Meyer, who has done very good work about OOL and the Cambrian explosion.

    And, of course, others. Including many friends here. Let me quote at least KF and UB for their many precious contributions, but of course there are a lot more, and I hope nobody feels excluded: it would be a big job to give a complete list.

  161. 161
    john_a_designer says:

    SA,

    Science itself rests on a number of empirically unprovable or metaphysical (philosophical) assumptions. For example:

    That we exist in a real spatio-temporal world– that the world (the cosmos) is not an illusion and we are not “brains in a vat” in some kind of Matrix-like virtual reality.

    That the laws of nature are universal throughout time and space.

    Or that there really are causal connections between things, and between people and things. David Hume famously argued that that wasn’t self-evidently true. Indeed, in some cases it isn’t. Sometimes there is correlation without causation, or “just coincidence.”

    Again, notice the logic Dawkins wants us to accept. He wants us to implicitly accept his premise that that living things only have the appearance of being designed. But how do we know that premise is true? Is it self-evidently true? I think not. Why can’t it be true that living things appear to be designed for a purpose because they really have been designed for a purpose? Is that logically impossible? Metaphysically impossible? Scientifically impossible? If one cannot answer those questions then design cannot be eliminated from consideration or the discussion. Therefore, it is a legitimate inference from the empirical (scientific) evidence.

    I have said this here before: the burden of proof is on those who believe that some mindless, purposeless process can “create” a planned and purposeful (teleological) self-replicating system capable of evolving further through purposeless, mindless processes (at least until it “creates” something purposeful, because, according to Dawkins, living things appear to be purposeful). Frankly, this is something our regular interlocutors consistently and persistently fail to do.

    As a theist I do not claim I can prove (at least in an absolute sense) that my world view is true. Can naturalists/materialists prove that their world view is true? Personally I believe that all worldviews rest on unprovable assumptions. No one can prove that their world view is true. Is that true of naturalism/materialism? If it can be proven, someone with that world view needs to step forward and provide the proof.

    As to whether or not ID is science: I am skeptical of the claim that Darwinism in the macro-evolutionary sense is science, or that SETI is science (what empirical evidence is there that ETIs exist?). How does NS + RV cause macro-evolutionary change? Science needs to answer the question of how. Just saying “oh, somehow it could” with an airy wave of the hand is not a sufficient explanation. But that applies to people on both sides of the debate.

  162. 162
    Silver Asiatic says:

    SA: I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design.
    GP: Who? Where? As far as I know, complex specified information (or complex functional information) in objects has always been considered the mark of design. Dembski, Behe, Abel, Meyer, Berlinski, and so on.

    ID and Neuroscience
    https://uncommondescent.com/intelligent-design/id-and-neuroscience/
    My good friend and colleague Jeffrey Schwartz (along with Mario Beauregard and Henry Stapp) has just published a paper in the Philosophical Transactions of the Royal Society that challenges the materialism endemic to so much of contemporary neuroscience. By contrast, it argues for the irreducibility of mind (and therefore intelligence) to material mechanisms.
    William Dembski

  163. 163
    Silver Asiatic says:

    “CSI is a reliable indicator of design” — William Dembski
    “it is CSI on which David Chalmers hopes to base a comprehensive theory of human consciousness.” — William Dembski

    https://www.asa3.org/ASA/PSCF/1997/PSCF9-97Dembski.html

  164. 164
    Silver Asiatic says:

    JAD

    Science itself rests on a number of empirically unprovable or metaphysical (philosophical) assumptions.

    Agreed. Science does not stand alone as a self-evident process. It is dependent upon philosophical assumptions. Dawkins has his own assumptions. If he said, for example, that science can only accept material causes for all of reality, that is just his philosophical view. If ID says that science can accept immaterial causes, then it is different science.
    A person might also say that science must accept that God exists. That’s a philosophical starting point.
    In the end, people who do science are carrying out a philosophical project.
    If a person is willing to do enough philosophy to carry out the project of science, I believe they have the responsibility to carry the philosophy farther than science. The philosophical questions go beyond simply what causes we can accept.
    But people like Dawkins and others do not accept this. They think that science simply has one set of rules, and they claim to be the ones following the true scientific rules, as if those rules always existed.
    Some IDists have tried to convince the world that ID is just following the normal, accepted rules of science and that people do not need to accept a new kind of science in order to accept ID conclusions.
    Others will say that mainstream science itself is incorrect and that people need a different kind of science in order to understand ID.
    I think ID will even work with Dawkins’ version of science. He may say that “only material causes” can be considered. So, we observe intelligence and so some material cause created the intelligent output? The question for Dawkins would be what material cause creates intelligent outputs?

  165.
    gpuccio says:

    Silver Asiatic:

    Theory of consciousness is a fascinating issue. A philosophical issue which, like all philosophical issues, can certainly use some scientific findings. I have my ideas about theory of consciousness, and sometimes I have discussed some of them here. But ID is not a theory of consciousness.

    But it is true that ID is the first scientific way to detect something that only consciousness can do: generate complex functional information. In this sense, the results of ID are certainly important to any theory of consciousness. The simple fact that there is something that only consciousness can do, and that there is a scientific way to detect it, is certainly important. It also tells us that consciousness can do things that no non-conscious algorithm, however intelligent or complex, can do.

    I usually say that some properties of conscious experiences, like the experience of understanding meaning and of feeling purposes, are the best rationale to explain why conscious agents can generate complex functional information while non-conscious systems cannot. But again, ID is not a theory of consciousness.

    All spheres of human cognition are interrelated: religion, philosophy, science, art, everything. But each of those things has its own specificity.

    ID theory will probably be, in the future, part of a theory of consciousness, if and when we can develop a scientific approach to it. But at present it is only a theory about how to detect a specific product of consciousness, complex functional information, in material objects.

    Jeffrey Schwartz and Mario Beauregard are neuroscientists who have dealt brilliantly with the problem of consciousness. The Spiritual Brain is a very good book. Chalmers is a philosopher who has given us a precious intuition with his concept of the hard problem of consciousness.

    None of those approaches, however, comes anywhere near understanding the “origin” of consciousness. Least of all ID.

    I am absolutely certain that consciousness is in essence immaterial. But that is my philosophical conviction. The best scientific evidence that I can imagine about that are NDEs, and they are not related to ID theory.

  166.
    john_a_designer says:

    Gp @ #156,

    To be fair to Dawkins, I don’t think that he assumes that “design is impossible”. On the contrary, he is one of the few who admit that design could be a scientific explanation. He just does not accept it as a valid scientific explanation. That is epistemologically correct, even if of course completely wrong in essence.

    Indeed, here is another stunning admission by Richard Dawkins:

    https://www.youtube.com/watch?v=BoncJBrrdQ8

    Dawkins concedes that (because nobody knows) first life on earth could have been intelligently designed– as long as it was an ET intelligence not an eternally existing transcendent Mind (God.)

    Of course other atheists have admitted the same thing. See the following article which refers to a paper written by Francis Crick and British chemist Leslie Orgel.

    https://blogs.scientificamerican.com/guest-blog/the-origins-of-directed-panspermia/

    I believe it was Crick and Orgel who coined the term directed panspermia.

    To be fair I think Dawkins later tried to walk back his position. Maybe Crick and Orgel did as well. But the point remains, until you prove how life first originated by mindless, purposeless “natural causes” intelligent design is a logical possibility– a very viable possibility.

    Ironically, in the Ben Stein interview Dawkins said that if life were intelligently designed (by space aliens) the scientific research may be able to discover their signature. Didn’t someone write a book about the origin of life with the word signature in the title? Who was that? I wonder if he picked up the idea from Dawkins. Does anyone know?

    Bonus question: Ben Stein was made famous by one word. Does anyone know what that one word was? Anyone?

  167.
    Silver Asiatic says:

    GP

    But it is true that ID is the first scientifc way to detect something that only conciousness can do: generate complex functional information.

    What I have been doing is questioning what ID can or cannot do and even questioning scientific assumptions along the lines of the ideas you’ve posted. You have explained your views on design and how consciousness is involved and even on whether the actions of conscious mind can be considered “creative acts”, as well as how we evaluate immaterial entities.
    I have always argued that ID is a scientific project but I could reconsider that. ID does not need to be scientific to have value. I’ll respond to JAD in the next post with some thoughts that I question myself on and just respond to his feedback, but your definitions of science and ID will also be included in my considerations.

  168.
    Silver Asiatic says:

    JAD

    Bonus question: Ben Stein was made famous by one word. Does anyone know what that one word was? Anyone?

    The kid in the movie – can’t remember his name. Travis?

    As to whether or not ID is science: I am skeptical of the claim that Darwinism in the macro-evolutionary sense is science, or that SETI is science (what empirical evidence is there that ETIs exist?). How does NS + RV cause macro-evolutionary change? Science needs to answer the question of how. Just saying “oh, somehow it could” with an airy wave of the hand is not a sufficient explanation. But that applies to people on both sides of the debate.

    It’s a great point.
    I have argued for many years that ID is science. By that, I mean “the same science as Dawkins uses”. It is my belief that 90% of the scientists agree with Dawkins’ view of science – it’s the mainstream view.
    I also believed that ID was a subterfuge – an apologetic for the existence of God. I don’t see anything wrong with that.
    ID was going to use the exact same science that Dawkins uses, and then show that there is evidence of intelligent design. The method for doing that is to show that proposed natural mechanisms (RM + NS) cannot produce the observed effects. Intelligence can produce them, so Intelligence is the best, most probable inference.

    However, what I learned from many IDists over the years (GP pointed it out to me just previously) is that to accept ID, one needs a different science than what Dawkins uses. I find that to be a big problem. If, in order to accept ID, a person first needs “a different kind of science” than the normal, mainstream science of Dawkins, then there’s no reason to start talking about ID first. Instead, one should start to convince everyone that a different kind of science should be used throughout the world.

    Because for me, Dawkins’ version of science is fine. He just does what mainstream science does. They look at observations, collect data, propose causes. The first problem is that Dawkins’ mechanisms cannot produce the observed effects. So, even on his own terms, the science fails.

    However, when Dawkins says that science can only accept material causes, that doesn’t make a lot of sense – as you have pointed out. Additionally, he’s talking about a philosophical view.

    In that case, it is one philosophy versus another. The philosophy of ID vs Dawkins’ philosophical view. We can’t speak about science at that point.

    So, I hate to admit it because so many of my opponents over the years said this and I disagreed, but I do now accept that ID has always been a game to introduce God into the closed world of materialistic science. The difference in my view now is that I don’t see anything wrong with that game. Why not try to put God in science? What’s wrong with that? If the only way to do this is to trick materialist scientists using their own words, concepts and reasoning, again – what’s wrong with that? Dishonest? I don’t think so. The motive for using a certain methodology (ID in this case) has no bearing on what the methodology shows. In the same way, it doesn’t matter what belief an evolutionist has, they have to show that the observations can be explained from their theory.

    If, however, ID requires an entirely different science and philosophical view (that is possible also), then I don’t really see much need for the discussion on whether ID is a science or not. Why not just start with the idea that God exists, and then use ID observations to support that view? I don’t see why that is a problem. If IDists are saying “we don’t accept mainstream science”, then why appeal to mainstream science for credibility? Just create your own ID-science. But for me, I’m a religious believer with philosophical reasons for believing in God (as the best inference from facts and far more rational than atheism) so instead of trying to prove to everyone that we need a new science, I’d just start with God and then do science from that basis.

    That’s the way it would be if ID is not science.
    If, however, ID is science, for me that means “ID is the same science that Dawkins and all mainstream scientists use”. The inferences from ID can be shown using exactly the same data and observations that Dawkins uses.
    For me, that would give ID a lot more value.

  169.
    john_a_designer says:

    SA,

    [The following is something I posted on UD before which defines my position about I.D. Please note, however, I see it nothing more than just a personal opinion and I am not stating it in an attempt to change anyone’s mind. Indeed it remains tentative and subject to change but over the years I have seen no reason to change it.]

    Even though I think I.D. provokes some interesting questions I am actually not an I.D. proponent in the same sense that several other commenters here are. I don’t think I.D. is “science” (the empirical study of the natural world) any more than naturalism/materialism is science. So questions from materialists, like “who designed the designer,” are not scientific questions; they are philosophical and/or theological questions. However, many of the questions have philosophical/theological answers. For example, the theist would answer the question, “who designed the designer,” by arguing that the designer (God) always existed. The materialist can’t honestly reject that explanation because historically materialism has believed that the universe has always existed. Presently they are trying to shoehorn the multiverse into the discussion to get around the problem of the Big-Bang. Of course, this is a problem because there is absolutely no scientific evidence for the existence of a multiverse. In other words, it is just an arbitrary ad hoc explanation used in an attempt to try to wiggle out of a legitimate philosophical question.

    However, this is not to say that science can’t provoke some important philosophical and theological questions– questions which at present can’t be answered scientifically.

    For example:

    Scientifically it appears the universe is about 13.8 billion years old. Who or what caused the universe to come into existence? If it was “a what”– just natural causes– how do we know that?

    Why does the universe appear to exhibit teleology, or design and purpose? In other words, what is the explanation for the universe’s so-called fine-tuning?

    How did chemistry create the code in DNA or RNA?

    How does mindless matter “create” consciousness and mind? If consciousness and mind are “just an appearance” how do we know that?

    These are questions that arise out of science which are philosophical and/or theological questions. Is it possible that they could have scientific explanations? Possibly. But even if someday some of them could be answered scientifically that doesn’t make them at present illegitimate philosophical/theological questions, because we don’t know if they have, or ever could have, scientific answers.

    As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is sufficient as a world view.

    Naturalism (or materialism) cannot provide:

    *1. An ultimate explanation for existence. Why does anything at all exist?

    *2. An explanation for the nature of existence. Why does the universe appear to exhibit teleology, or Design and Purpose?

    *3. A sufficient foundation for truth, knowledge and meaning.

    *4. A sufficient foundation for moral values and obligations.

    *5. An explanation for what Aristotle called form and what we call information. Specifically how did chemistry create the code in DNA or RNA?

    *6. An explanation for mind and consciousness. How does mindless matter “create” consciousness and mind? If consciousness and mind are just an appearance how do we know that?

    *7. An explanation for the apparently innate belief in the spiritual– a belief in God or gods, and the desire for immortality and transcendence.

    Of course the atheistic naturalist will dismiss numbers 6 and 7 as illusions and make up a just-so story to explain them away. But how do they know they are illusions? The truth is they really don’t know and they certainly cannot prove that they are. They just believe. How ironic: to be an atheist/naturalist/materialist you must believe a lot– well, actually everything– on the basis of faith.

  170.
    PeterA says:

    JAD @169:

    “As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is insufficient as a world view”

    “do not think” “is insufficient”

    Is that the combination you wanted to express?

    I’m not sure if I understood it.

  171.
    gpuccio says:

    John_a_designer at #166:

    I agree with what you say about Dawkins. He is probably honest enough, even if completely wrong, but he is really obsessed by his antireligious crusade.

    The book you mention is “Signature in the Cell” by Stephen Meyer.

  172.
    gpuccio says:

    John_a_designer at #169:

    I agree with almost everything that you say, except of course that ID is not science. For me, it is science without any doubt. It has, of course, important philosophical implications, like many other important scientific theories (Big Bang, Quantum mechanics, Relativity, Dark energy, and so on).

  173.
    john_a_designer says:

    Peter A

    Final edit:

    “As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is sufficient as a world view.”

    That is what I meant to say and luckily corrected before the edit function timed out. Hopefully that makes sense now.

  174.
    john_a_designer says:

    Just to clarify, it’s not my view that ID doesn’t raise some very legitimate scientific questions. Behe’s discovery of irreducible complexity (IC) raises some important questions.

    For example, in his book Darwin’s Black Box, Michael Behe asks,

    “Might there be an as yet undiscovered natural process that would explain biochemical complexity? No one would be foolish enough to categorically deny the possibility. Nonetheless we can say that if there is such a process, no one has a clue how it would work. Further it would go against all human experience, like postulating that a natural process might explain computers… In the face of the massive evidence we do have for biochemical design, ignoring the evidence in the name of a phantom process would be to play the role of the detective who ignores the elephant.” (p. 203-204)

    Basically Behe is asking, if biochemical complexity (irreducible complexity) evolved by some natural process x, how did it evolve? That is a perfectly legitimate scientific question. Notice that even though in DBB Behe was criticizing Neo-Darwinism he is not ruling out a priori some other mindless natural evolutionary process, “x”, might be able to explain IC.

    Behe is simply claiming that at present there is no known natural process that can explain how irreducibly complex mechanisms and processes originated. If he and other ID’ists are categorically wrong then our critics need to provide the step-by-step-by-step empirical explanation of how they originated, not just speculation and wishful thinking. Unfortunately our regular interlocutors seem to only be able to provide the latter not the former.

    Behe made another point which is worth keeping in mind.

    “In the abstract, it might be tempting to imagine that irreducible complexity simply requires multiple simultaneous mutations – that evolution might be far chancier than we thought, but still possible. Such an appeal to brute luck can never be refuted… Luck is metaphysical speculation; scientific explanations invoke causes.”

    In other words, a strongly held metaphysical belief is not a scientific explanation.

    So why does Neo-Darwinism persist? I believe it is because of its a priori ideological or philosophical fit with naturalistic or materialistic world views. Human beings are hard-wired to believe in something– anything to explain or make some sense of our existence. Unfortunately we also have a strong tendency to believe in a lot of untrue things.

    On the other hand, if IC is the result of design, ID has to answer the question of how the design was instantiated. If ID wants to have a place at the table it has to find a way to answer questions like that. Once again, one of the primary things science is about is answering the “how” questions.

    Or as another example, ID’ists argue that the so-called Cambrian explosion can be better explained by an infusion of design. Okay, that is possible. (Of course, I wholeheartedly agree because I am very sympathetic to the concept of ID.) But how was the design infused to cause a sudden diversification of body plans? Did the “designer” tinker with the genomes of simpler life forms or were they specially created as some creationists would argue? (The so-called interventionist view.) Or were the new body plans somehow pre-programmed into their progenitors’ genomes (so-called front-loading)? How do you begin to answer such questions about things that happened in the distant past? At least the Neo-Darwinists have the pretense of an explanation. Can we get them to abandon their theory by declaring it impossible? Isn’t it at least possible, as Behe acknowledges, that there could be some other unknown natural explanation “x”?

    Is saying something is metaphysically possible a scientific explanation? The goal of science is to find some kind of provisional proof or compelling evidence. Why, for example, was the Large Hadron Collider built at a cost of billions of dollars (how much was it in euros?) Obviously it was because in science mere possibility is not the end of the line. The ultimate quest of science is truth and knowledge. Of course, we need to concede that science will never be able to explain everything.

  175.
    PeterA says:

    JAD @173,

    Yes, that makes much sense.

  176.
    PavelU says:

    OLV @139:

    The paper you cited doesn’t seem to support Behe’s polar bear argument.

  177.
    john_a_designer says:

    A few years ago here at UD one of our regular interlocutors who was arguing with me about the ID explanation for the origin of life pointed out:

    the inference from that evidence to intelligence being involved is really indirect. You don’t have any other evidence for the existence of an intelligence during the times it would need to be around.

    I responded,

    “We have absolutely no evidence as to how the first self-replicating living cell originated abiogenetically (from non-life). So following your arbitrarily made-up standard that’s not a logical possibility, so we shouldn’t even consider it… As the saying goes, ‘sauce for the goose is sauce for the gander.’”

    When you argue that life originated by some “mindless natural process,” that is not an explanation of how. Life is not presently coming into existence abiogenetically, so if such a process existed in the past it no longer exists in the present. Therefore you are committing the same error which you accuse ID’ists of committing. That’s a double standard, is it not?

    This kind of reasoning on the part of materialists also reveals that they don’t really have any strong arguments based on reason, logic and the evidence. If they do, why are they holding back?

  178.
    gpuccio says:

    John_a_designer at #177:

    Exactly!

    That’s why I say that ID is fully scientific.

    Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview.

    That reality must behave according to our religious convictions is an a priori worldview. That’s why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasoning.

    That reality must behave according to our atheistic or materialistic convictions is an a priori worldview. That’s why our kind interlocutors should strive a lot to avoid, as much as humanly possible, any influence of their philosophy or atheology on their scientific reasoning.

    The simple fact is that ID theory, reasoning from facts in a perfectly scientific way, infers a process of design for the origin of biological objects.

    Now, our interlocutors can debate if our arguments are right or wrong from a scientific point of view. That’s part of the scientific debate.

    But the simple idea that we have no other evidence of the existence of a conscious agent, for example, at the time of OOL is not enough. Because we have no evidence of the contrary, either.

    The simple idea that non-physical conscious agents cannot exist is not enough, because it is only a specific philosophical conviction. Of course non-physical conscious agents can exist. We don’t even know what consciousness is, least of all how it works and what is necessary for its existence.

    My point is: the design inference is real and perfectly scientific. All arguments about things that we don’t know are no reason to ignore that scientific inference. They are certainly valid reasons to pursue any further scientific investigation to increase our knowledge about those things. That’s perfectly legitimate.

    For example, I am convinced that our rapidly growing understanding of biology will certainly help to understand how the design was implemented at various times.

    And, even if ID is not a theory of consciousness, there is no doubt that future theories of consciousness can integrate ID and its results. For example, much can be done to understand better if a quantum interface between conscious representations and physical events is working in us humans, as many have proposed and as I believe. That same model could be applied to biological design in natural history.

    And of course, philosophy, physics, biophysics and what else can certainly contribute to a better understanding of consciousness, and of its role in reality.

    A better study of common events like NDEs can certainly contribute to understand what consciousness is.

    I would like to repeat here a statement that I have made in the discussion with Silver Asiatic, that sums up well my position about science:

    Science can, and must, investigate everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations.

  179.
    Sven Mil says:

    Interesting conversation here,

    ‘sigma factor “is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB”.’

    “I have blasted sigma 70 from E. coli with human TFIIB and found no detectable homology (E value 1.4). So, there seems to be little conservation here.”

    Is there an explanation for this disagreement?

  180.
    gpuccio says:

    Sven Mil:

    “Is there an explanation for this disagreement?”

    Thank you for the comment and welcome to the discussion.

    Thank you also for addressing an interesting and specific technical point.

    It is not really a disagreement, probably only a different perspective.

    Researchers interested in possible homologies (IOWs, in finding orthologs or paralogs for some gene) often use very sensitive algorithms. They find homologies that are often very weak, or maybe not real. Or they may look at structural homologies, which are not evident at the sequence level.

    My point of view is different. In order to debate ID in biology, I am only interested in definite homologies, possibly very high homologies conserved for a long evolutionary time. My aim is specificity, not sensitivity. Moreover, as I accept CD (as discussed in detail in this thread) I have no interest in denying possible weak homologies. I just ignore them, because they are not relevant to my argument.

    That’s why I always measure homology differences, not absolute homologies. I want to find information jumps at definite evolutionary times.

    Another possibility for the different result is that I have not blasted the right protein form. For brevity (it was not really an important aspect of my discussion) I have not blasted all possible forms of sigma factors against eukaryotic factor TFIIB. I have just blasted sigma 70 from E. coli. Maybe a more complete search could detect some higher homology.

    OK, as you have raised the question, I have just checked the literature reference in the Wikipedia page:

    The sigma enigma: Bacterial sigma factors, archaeal TFB and eukaryotic TFIIB are homologs

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4581349/

    Abstract
    Structural comparisons of initiating RNA polymerase complexes and structure-based amino acid sequence alignments of general transcription initiation factors (eukaryotic TFIIB, archaeal TFB and bacterial sigma factors) show that these proteins are homologs. TFIIB and TFB each have two five-helix cyclin-like repeats (CLRs) that include a C-terminal helix-turn-helix (HTH) motif (CLR/HTH domains). Four homologous HTH motifs are present in bacterial sigma factors that are relics of CLR/HTH domains. Sequence similarities clarify models for sigma factor and TFB/TFIIB evolution and function and suggest models for promoter evolution. Commitment to alternate modes for transcription initiation appears to be a major driver of the divergence of bacteria and archaea.

    As you can see from the abstract, they took into consideration structure similarities, not only sequence alignments.

    Maybe you can have a look at the whole article. Now I don’t think I have the time.
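    The specificity-versus-sensitivity point above can be illustrated with a toy alignment scorer. The sketch below is purely didactic: it implements a minimal Needleman-Wunsch global alignment with hypothetical scoring values and made-up eight-residue "protein" fragments, not real sigma-70 or TFIIB sequences, and it is not meant to reproduce BLAST or its E-value statistics.

```python
# Toy global alignment (Needleman-Wunsch) sketch, for illustration only.
# Sequences and scoring parameters below are hypothetical, not real data.

def global_align_score(a, b, match=1, mismatch=-1, gap=-2):
    """Return the optimal global alignment score of strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, cols):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[rows - 1][cols - 1]

# Two made-up fragments sharing only scattered residues: the score is low,
# mirroring how distant homologs can fall below a significance threshold
# (a high E-value in a real BLAST search) even when structural homology exists.
print(global_align_score("MKVLADQE", "MRVIGDKE"))  # prints 0
```

A sensitive search (structure-based, or with permissive scoring) would still flag such borderline pairs, while a specificity-oriented approach like the one described above simply ignores them as non-significant.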

  181.
    gpuccio says:

    To all (specially UB):

    One interesting aspect of the NF-kB system discussed here is that, IMO, it can be seen as a polymorphic semiotic system.

    Let’s consider the core of the system: the NF-kB dimers in the cytoplasm, their inhibition by IkB proteins, and their activation by either the canonical or non canonical pathway, with the cooperation of the ubiquitin system. IOWs the central part of the system.

    This part is certainly not simple, and has its articulations, for example the different kinds of dimers that can be activated. However, when looking at the whole system, this part is relatively simple, and it uses a limited number of proteins. In a sense, we can say that there is a basic mechanism that works here, with some important variations.

    Well, like in all the many pathways that carry a signal from the membrane to the nucleus, even in this case we can consider the intermediate pathway (the central core just described) as a semiotic structure: indeed, it connects symbolically a signal to a response. The signal and the response have no direct biochemical association: they are separated, they do not interact directly, there is no direct biochemical law that derives the response from the signal.

    It’s the specific configuration of the central core of the pathway that translates the signal, semiotically coupling it to the response. So, that core can be considered as a semiotic operator that given the operand (the signal) produces the result (the response at nuclear level).

    But in this specific case there is something more: the operator is able to connect multiple operands to multiple specific results, using a single essential set of tools. IOWs, the NF-kB system behaves as a multiple semiotic operator, or if we want as a polymorphic semiotic operator.

    Now, that is not an exclusive property of this system. Many membrane-nucleus pathways behave, in some measure, in the same way. Biological signals and their associations are never simple and clear-cut.

    But I would say that in the NF-kB system this polymorphic attitude reaches its apotheosis.

    There are many reasons for that:

    a) The system is practically universal: it works in almost all types of cells in the organism.

    b) There is a real multitude of signals and receptors, of very different types. Suffice it to mention cytokine stimuli (TNF, IL1), bacterial or viral components (LPS), specific antigen recognition (BCR, TCR). Moreover, each of these stimuli is connected to the central core by a specific, often very complex, pathway (see the CBM signalosome, for example).

    c) There is a real multitude of responses, in different cells and in the same cell type in different contexts. Even if most of them are in some way related to inflammation, innate immune response or adaptive immune response, there are also responses about cell differentiation (neurons). In B and T cells, for example, the system is involved both in the differentiation of B and T cells and in the immune response of mature B and T cells after antigen recognition.

    This is a really amazing flexibility and polymorphism. A complex semiotic system that implements, with remarkable efficiency, a lot of different functions. This is engineering and programming of the highest quality.
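    The "polymorphic semiotic operator" idea (one core, many signal-to-response couplings fixed only by configuration) can be caricatured in a few lines of code. Everything in this sketch is a hypothetical illustration: the signal names, contexts, and response strings are simplifications, and nothing here models real NF-kB biochemistry. The point is only that the mapping between key and value is arbitrary (semiotic), not derivable from the inputs themselves.

```python
# Toy model of a polymorphic semiotic operator: the "core" couples
# (signal, cell context) pairs to responses purely by configuration.
# All names below are illustrative, not real biology.

CORE_CONFIG = {
    ("TNF", "fibroblast"): "inflammatory gene program",
    ("LPS", "macrophage"): "innate immune gene program",
    ("antigen", "B cell"): "B cell activation program",
    ("antigen", "immature B cell"): "B cell differentiation program",
}

def nfkb_core(signal, cell_context):
    """Translate a (signal, context) pair into a response via the stored
    configuration; no intrinsic relation links the inputs to the output."""
    return CORE_CONFIG.get((signal, cell_context), "no response")

# The same signal produces context-dependent outcomes:
print(nfkb_core("antigen", "B cell"))           # prints: B cell activation program
print(nfkb_core("antigen", "immature B cell"))  # prints: B cell differentiation program
```

The dispatch-table structure is the crude analog of the point made above: the response is read off a configured association, and changing the configuration (not the chemistry of the signal) changes the outcome.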

  182.
    Silver Asiatic says:

    GP

    Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview.

    As I was discussing with JAD, I have always argued that ID is a scientific project. But I am tending now to see it as a philosophical proposition. Your statement above is a philosophical view. You are giving a framework for what you think science should be.
    But science cannot define itself or create its own limits. Science cannot even tell us what “matter” is or what it means for something to be “immaterial”. Those are philosophical concepts. Science also cannot tell us what causes are acceptable. Science cannot tell us that it should not have a commitment to a worldview.
    So, for example, if I wanted to do “my own science”, I could establish whatever rules I want. Nobody can stop me from doing that.
    I could have a rule: “For any observation that cannot be explained by known natural causes, we must conclude that God directly created what we observed”.
    There is nothing wrong with that if that is “my science”. Of course, if I want to communicate I would have to convince people to believe in my philosophy of science. But that would have nothing to do with science itself, but rather my efforts to convince people of my philosophical view.
    Now, we could have what we call “Dawkins Science”. I believe that’s what a majority of biologists accept today. Again, it is perfectly legitimate. Dawkins and all others like him will claim “science can only accept natural causes, or material causes”.
    So, they establish rules. Science cannot tell us if those rules are correct or not. It is only philosophy that says it.
    Then ID comes along, and IDists will say “ID is science”.
    Here is where I disagree.
    Whenever we make a sweeping statement about “science” we are talking about “the consensus”.
    If Dawkins is the consensus, then to claim “ID is science” means that it is perfectly compatible with Dawkins’ science.
    If, however, the claim “ID is science” means “you have to accept our version of science to accept ID”, then that’s a mistake.
    Again, to claim something “is science” usually means it is the consensus definition of science.
    To redefine science in any way one wants to, is not a scientific project. It is a philosophical project.
    If ID requires a specific kind of science that allows for non-natural causes, for example, then I would not call ID a scientific project.

    With that, even if science accepted non-natural causes, I would still consider ID to be philosophical. ID uses scientific data, but the conclusions drawn are non-scientific. Only if ID stopped at stating “this is evidence of intelligence” – that would be science. But once the conversation moves to the idea that “where there is intelligence, there must be an intelligent designer” – that is philosophical. Science cannot even define what intelligence is. Those definitions are part of the rules of science that come from a philosophical view.
    For example, there could be a pantheistic view that believes that all intelligence emerges from a universal mind which is present in all of reality. So, evidence of intelligence would not mean that there is an Intelligent Designer. It would only mean that the intelligence came from the spirit of the universe which is an impersonal spiritual force and is not a “designer” in that sense.

  183. 183
    Silver Asiatic says:

    GP & JAD
    Here is JAD’s comment on the topic of ID as science:

    Science itself rests on a number of empirically unprovable or metaphysical (philosophical) assumptions. For example:
    That we exist in a real spatio-temporal world – that the world (the cosmos) is not an illusion and we are not “brains in a vat” in some kind of Matrix-like virtual reality.

    That is right. All science requires an a priori metaphysical commitment. “Mainstream science” has accepted one particular view. But nobody can say that that view, or any view, is “true science”. It comes down to the philosophical question: “what is reality?” Are there real distinctions between things or are those distinctions arbitrary? Western philosophy tells us one thing, but there are other philosophical views.

    Again, if ID is saying that “Dawkins is using the wrong kind of science”, then that’s a philosophical debate about what science should be.

    For me if ID can be fully compatible with the science that Dawkins uses, then that’s powerful. In that case, I think it would be more reasonable to say that “ID is science” since it is using the exact same understanding of science that people like Dawkins use.

  184. 184
    gpuccio says:

    Silver Asiatic:

    OK, I disagree with you about many things. Not all.

    Let’s see if I can explain my position.

    You quote my statement:

    “Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview.”

    And then you say that this is a philosophical view. And I absolutely agree.

    That was clearly a statement of my position about philosophy of science. Philosophy of science is philosophy.

    I usually don’t discuss my philosophy here, except of course my philosophy of science, which is absolutely pertinent to any scientific discussion. So yes, when I say that science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview, I am making a statement about philosophy of science.

    I also absolutely agree that “science cannot define itself or create its own limits”. It’s philosophy of science that must do that.

    Where I absolutely disagree with you is in the apparent idea that philosophy of science is a completely subjective thing, and that everyone can “make his own rules”. That is completely untrue. Philosophy is not subjective, as science is not objective. They are different, but both are rather objective, with many subjective aspects.

    There is good philosophy and bad philosophy, as there is good science and bad science. And, of course, there is bad philosophy of science.

    You say: “So, for example, if I wanted to do “my own science”, I could establish rules that I want. Nobody can stop me from that.”

    It is true that nobody can stop you, but it is equally true that it can be bad science, and everyone has a right to judge for himself if it is good science or bad science.

    The same is true for philosophy of science.

    The really unbearable part of your discourse is when you equate science with consensus. This is a good example of bad philosophy of science. For me, of course. And for all those who want to agree. There is no need for us to be the majority. There is no need for consensus.

    Good science and good philosophy of science must be judged in the sacred intimacy of our own consciousness. We are fully responsible for our judgement, and we have the privilege and the duty to defend that judgement and share it with others, whether they agree or not, whether there is consensus about it or it is shared only by a minority.

    Because in the end truth is the measure of good science and of good philosophy. Nothing else.

    Consensus is only a historical accident. Sometimes there is consensus for good things, many times for bad things. Consensus for bad science does not make it good science. Ever.

    Then you insist:

    “If ID requires a specific kind of science that allows for non-natural causes, for example, then I would not call ID a scientific project.”

    ID requires nothing like that. ID infers a process of design. A process of design requires a designer. There is nothing non natural in that. Therefore ID is science.

    Moreover, I could show, as I have done many times, that the word “natural” is wholly misleading. In the end, it just means “what we accept according to our present worldview”. In that sense, any form of naturalism is the end of true science. Naturalism is a good example of bad philosophy of science.

    And I know, that is not the consensus. I know that very well. But it is not “my own rule”. It is a strong philosophical belief, that I am ready to defend and share with anyone, and to which I will remain loyal to the end of my days, unless of course I some day find some principle that is even better.

    Just a final note. You say: “Science cannot even tell us what “matter” is or what it means for something to be “immaterial”. Those are philosophical concepts.”

    Correct. And I don’t think that even philosophy has good answers, at present, about those things. Indeed, I think that “matter” and “immaterial” are vague concepts.

    But science can be more precise. For example, science can define if something has mass or not. Some entities in reality have mass, others don’t. This is a scientific statement.

    In our discussion, I did not use the word “immaterial”. That word was introduced by you. I just stated, answering your question, that it seemed reasonable that the biological designer(s) did not have a physical body like ours, because otherwise there should be some observable trace of that fact. This implies no sophisticated philosophical theory about what matter is. I suggested that, as we know that consciousness exists but we don’t know what it is, it is not unreasonable to think that it can exist without a physical body like ours. Not only is it not unreasonable, but indeed most people have believed exactly that for millennia, and even today, probably, most people believe that.

    I could add that observable facts like the reports of NDEs strongly suggest that hypothesis.

    True or false as it may be, the hypothesis that consciousness is not always linked to a physical body like ours is a reasonable idea. There is no reason at all to consider that idea “not natural” or to ban it a priori from any scientific theory or scenario. To do that is to do bad science and bad philosophy of science, driven by a personal philosophical commitment that has no right to be imposed on others.

  185. 185
    gpuccio says:

    Silver Asiatic:

    “For me if ID can be fully compatible with the science that Dawkins uses, then that’s powerful.”

    But ID is fully compatible with the science that Dawkins uses. It’s Dawkins who uses that science badly and defends wrong theories. It’s Dawkins who rejects the good theories of ID because of ideological prejudices. We can do nothing about that. It’s his personal choice, and he is a free individual. But there is no reason at all to be influenced or conditioned by his bad scientific and philosophical behaviour.

  186. 186
    Silver Asiatic says:

    GP

    I think you’re being inconsistent. That’s one thing I’m trying to point out. You agree that your statement about science (and therefore your foundation for ID) is a philosophical position. However, you often state something like this:

    That’s why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasonings.

    But it is simply not possible to avoid your philosophical view since that view is the basis of all your understanding of science and your scientific reasoning. In fact, I would say it’s unreasonable to insist that you’re trying to avoid your philosophical view. Why would you do that? Your philosophy is the most important aspect of your science. Why conceal it as if you could do science without a philosophical starting point?

    At the risk of irritating you, I feel the need to repeat something continually through my response – and that is, almost everything you said was a philosophical discourse. I have been discussing on one of KF’s threads the objective foundation of philosophy, but after that (which is minimal) philosophy is almost entirely subjective. We can freely choose among options.

    Philosophy is not subjective, as science is not objective. They are different, but both are rather objective, with many subjective aspects.

    I disagree here and I offered a long explanation in debating with atheists on KF’s most recent thread. The only objective thing about philosophy is the starting point – that truth has a greater value than falsehood. We cannot affirm a value for falsehood. But after that, even the first principles of reason are not entirely objective. They must be chosen, for a reason. A person must decide to think rationally. For reasons of virtue which are inherent in the understanding of truth, we have an obligation to use reason. But this obligation is a matter of choice.

    It is true that nobody can stop you, but it is equally true that it can be bad science, and everyone has a right to judge for himself if it is good science or bad science.

    My repeated phrase here: That’s a philosophical view. Secondly, you are appealing to consensus “everyone can judge”. There are some cultures that forbid a Western approach to science. Their consensus will say that “mainstream science” is bad science. They have different goals and purposes in life. I think of indigenous cultures, for example, or some religions where they approach science differently.

    Good science and good philosophy of science must be judged in the sacred intimacy of our own consciousness. We are fully responsible for our judgement, and we have the privilege and the duty to defend that judgement and share it with others, whether they agree or not, whether there is consensus about it or it is shared only by a minority.
    Because in the end truth is the measure of good science and of good philosophy. Nothing else.

    In this case, truth follows from first principles. Science is not an arbiter of truth, it is only a method that follows from philosophy in order to gain understanding, for a reason. If a science follows logically from its first principles, then it is good science. I gave an example of a different kind of science where I could say that God is a cause. Or we could talk about Creation Science where the Bible establishes rules for science. Those are different first principles – different philosophical starting points. Creationism is perfectly legitimate philosophy and if science follows from it logically, then the science is “good science”. We may have a reason to reject Creationist philosophy but that cannot be done on an entirely objective basis. We decide based on the priority we give to certain values. We want something, so we want a science that supports what we want. But people can want different things.

    ID requires nothing like that. ID infers a process of design. A process of design requires a designer. There is nothing non natural in that. Therefore ID is science.

    Again, you offer your philosophical view. In your view, a process of design requires a designer. That is philosophy. If a person accepts your philosophy, then they can accept your ID science. I think the more usual statement of ID is that “we can observe evidence of intelligence” in various things. What I have not seen is that “all intelligent outputs require a designer”. That is a philosophical statement, not a scientific one. Science cannot establish that all intelligence necessarily comes from “a designer” or even what the term “a designer” means in this context. All science can do is say that something “looks like it came from a source that we have already classified as ‘intelligence'”. If that source is “a designer”, we do not know.

    Consensus is only a historical accident. Sometimes there is consensus for good things, many times for bad things. Consensus for bad science does not make it good science. Ever.

    Again, these are philosophical concepts. Even to judge good science versus bad science requires a correlation with philosophical starting points. Again, there is no such thing as “good science” as if “science” exists as an independent agent. Science is a function of philosophical principles. If the science aligns with the principles, then it is coherent and rational (but even that is not required). But it is impossible to judge if science is good or bad without first accepting a philosophical basis.
    The idea that only material causes can be accepted in science is a perfectly valid limitation. To disagree with it and prefer another definition is a philosophical debate, and it will come down to “what do we want to achieve with science?” There is nothing objective about that. Science is a tool used for a purpose, and there is nothing that says “science must only have this purpose and no other”. People choose one philosophy of science or another. There is no good or bad.
    There can be contradictory or irrational application of science — where science conflicts with the stated philosophy. For example, if Dawkins said “science can only accept material causes” and then said later that “science has indicated that a multiverse exists outside of time, space and matter” – that would be contradictory. We could call that “bad science” because it is irrational. But even there, a person is not required, necessarily, to be entirely rational in all aspects of life. We are required to be honest and to tell the truth. But if Dawkins said that he makes an exception for a multiverse, his science remains just as “good” as any. Science is not absolute truth. It’s a collection of rules used for measurement, classification and experiment, to arrive at understanding within a certain context.

    Moreover, I could show, as I have done many times, that the word “natural” is wholly misleading. In the end, it just means “what we accept according to our present worldview”. In that sense, any form of naturalism is the end of true science. Naturalism is a good example of bad philosophy of science.

    Again, this is entirely a philosophical view. There is nothing wrong with a science that says “we only accept what accords with our worldview”. That’s a philosophical starting point. People may have a very good reason for believing that. Or not. So, all of their science will be “natural” in that sense. Again, there is no such thing as “true science”. You are not the arbiter of such a thing. Even to say that “all science must follow strictly logical processes” is a philosophical bias. There can be scientific philosophies that accept non-logical conclusions and various paradoxical understandings.

    And I know, that is not the consensus. I know that very well. But it is not “my own rule”. It is a strong philosophical belief, that I am ready to defend and share with anyone, and to which I will remain loyal to the end of my days, unless of course I some day find some principle that is even better.

    When I say that it is “your own rule” I mean it is a rule that you have chosen to accept. You could have chosen another, like the consensus view. That is what I would prefer for ID, that it accept the consensus view on what “natural” means and basically all the consensus rules of science. I would not like to have to say that “ID requires a different understanding of terms and of science than the consensus does”. But even if not, ID researchers are free to have their own philosophical starting points and defend them, as you would do. But as I said, I think the only aspect of philosophy that we are compelled to accept is the proto-first principles. Even there, a person must accept that thinking rationally is a duty. As I said, there can be philosophical systems that do not hold logic, analysis, and rational thought as the highest virtue. There can be other values more important to human life which would leave rational thought as a secondary value, and therefore not absolutely required in all cases. So, a contradictory scientific result would not be a problem in that philosophical view.

    True or false as it may be, the hypothesis that consciousness is not always linked to a physical body like ours is a reasonable idea.

    Yes, exactly. Science can tell us nothing about this. Your view would be reasonable as matched against your philosophy. Again, it depends if a person has a philosophical view that could accept such a notion. If the belief is that everything that exists is physical, then your point here would not be rational. The science would have nothing to do with it except to be consistent with one view or another.

    For example, science can define if something has mass or not. Some entities in reality have mass, others don’t. This is a scientific statement.

    I wouldn’t call that a “definition”. It is more like a classification. Science cannot define what “mass” is. There is no observation in nature that we can make to tell us that “this is the correct definition of mass”. In fact, there could be a philosophical view that does not recognize mass as an independent thing that could be classified. But there is a consensus view that has defined mass as a characteristic. Then science observes things and classifies them to see if they share what that thing (mass) is or not.

  187. 187
    gpuccio says:

    Silver Asiatic:

    I don’t have time now to answer everything, but I want to clarify one point that is important, and that was probably not clear because of my imprecision.

    When I say:

    “That’s why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasonings.”

    I am not including in that statement philosophy of science. My mistake, I apologize, I should have specified it, but you cannot think of everything.

    Of course I believe that our philosophy of science can and must guide the way we do science. Probably, it seemed so obvious to me that I did not think of specifying it.

    What I meant was that our philosophy about everything else must not influence, as far as that is possible, our scientific reasoning.

    As I have said, there is good science and bad science, good philosophy of science and bad philosophy of science. One is responsible both for his science and for his philosophy of science. But of course we have a duty to do science according to our philosophy of science. What else should we do?

    However, even if of course there can be very different philosophies of science, some basic points should be very clear. I think that almost all who do good science would agree about the basic importance of facts in scientific reasoning. So, any philosophy of science, and related science, that does not put facts at the very center of scientific reasoning is a bad philosophy of science. For me (because I assume full responsibility for that statement), but not in the sense that I consider that a subjective aspect. For me, that is an objective requirement of a good philosophy of science.

    OK, more later.

  188. 188
    gpuccio says:

    Silver Asiatic:

    At the risk of irritating you, I feel the need to repeat something continually through my response – and that is, almost everything you said was a philosophical discourse. I have been discussing on one of KF’s threads the objective foundation of philosophy, but after that (which is minimal) philosophy is almost entirely subjective. We can freely choose among options.

    I disagree. My discourses here are rarely philosophical. Well, sometimes. But all my reasonings about ID detection, functional information, biology, functional information in biology, homologies, common descent, and so on – in practice most of what I discuss here – are perfectly scientific, and in no way philosophical.

    Of course, as said, my science is always guided by my philosophy of science. I take full responsibility for both.

    And I fully disagree that “philosophy is almost entirely subjective”. That’s not true. There is much subjectivity in all human activities, including philosophy, science, art, and so on. But there is also a lot of objectivity in all those things.

    One thing is certainly true: “We can freely choose among options.” Of course. In everything.

    We can freely pursue what is true or what is wrong. What is good or what is bad. We can freely lie, or fight for what we believe to be true. We can freely love or hate. And so on. I think that gives the idea.

    Does that mean that truth, good, lies, love, are in no way objective?

    I don’t believe that. But of course you can freely choose what to believe.

    And yes, this is a philosophical statement.

  189. 189
    Silver Asiatic says:

    GP

    For me (because I assume full responsibility for that statement), but not in the sense that I consider that a subjective aspect. For me, that is an objective requirement of a good philosophy of science.

    Right. Based on your philosophy and worldview it is objective. That is consistent and makes sense. Philosophically, you call some things “facts” and then you use those in your scientific reasoning. You have an overall understanding of reality. I’ll suggest that you cannot really separate “everything else” of your philosophy from your scientific view. As I see it, they’re all connected. This is especially true when you seek to talk about a designer, or things like randomness, or immaterial and natural entities — all of these things.

    This is where I agree that “ID is science” as long as “ID lines up with my philosophy of science”. To me, that is consistent and reasonable (although whether the philosophy and definitions should be aligned could be debated).

    Someone like Dawkins will say “ID is not science” because he thinks that ID does not line up with his philosophy of science. He has just defined ID out of the question. Dawkins will fail if he says “my philosophy is consistent and rational and my science follows this”, but then later indicates that he will not accept conclusions that his own scientific philosophy will support. Then he’s got a problem.

    I always thought that’s what ID was trying to do. Use Dawkins’ own worldview and his own claims – all the things he already accepts — and show that ID is the most reasonable conclusion. It would all be based on his (or mainstream) science.

    I know some creationists who say ID is “dishonest” because the worldview is concealed, but I think ID is just trying to play by the rules of the game (consensus view) and show that there is evidence for Design even using mainstream evolutionary views.

  190. 190
    Silver Asiatic says:

    GP

    We can freely pursue what is true or what is wrong. What is good or what is bad. We can freely lie, or fight for what we believe to be true. We can freely love or hate. And so on. I think that gives the idea.

    I realize that this may seem irritating, but I even caught myself with that. There are people, perhaps, who think that all of our actions are determined by some cause. It’s the whole question of free-will.
    My point here is that I think a coherent philosophy, beginning with first principles, has to be in place. After that, the people that we talk with have to either understand, or better, accept our philosophy.
    If they have a bad philosophy, then I think the problem is to help them fix that. I think that has to happen before we can even get into the science.

    My philosophy is rooted in classical Western theism and is linked to my theological views. I am leaning more and more to the idea that it is not worth the effort to adopt “Dawkins philosophy/science” for the sake of trying to convince people, and that it may be more effective to start with the clash of philosophies and world-views rather than start with science (ID). Not sure, but I am leaning that way. Putting philosophy and theology first, and then using ID inferences to support that might work better.

  191. 191
    gpuccio says:

    Silver Asiatic:

    I always thought that’s what ID was trying to do. Use Dawkins’ own worldview and his own claims – all the things he already accepts — and show that ID is the most reasonable conclusion. It would all be based on his (or mainstream) science.

    That’s definitely what ID is trying to do. That’s certainly what I am trying to do.

    I am leaning more and more to the idea that it is not worth the effort to adopt “Dawkins philosophy/science” for the sake of trying to convince people, and that it may be more effective to start with the clash of philosophies and world-views rather than start with science (ID). Not sure, but I am leaning that way. Putting philosophy and theology first, and then using ID inferences to support that might work better.

    Maybe. But I think the two things can and should work in parallel. There is no conflict at all, as long as each activity is guided by its good and pertinent philosophy! 🙂

    And, at least for me, the purpose is not to convince anyone, but to offer good ideas to those who may be interested in them. In the end, I very much believe in free will, and free will is central not only in the moral sphere, but also in the cognitive sphere.

  192. 192
    gpuccio says:

    To all:

    Again about crosstalk.

    It seems that our NF-kB system is continuously involved in crosstalk of all types.

    This is about crosstalk with the system of nucleoli:

    Crosstalk between NF-kB and Nucleoli in the Regulation of Cellular Homeostasis

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6210184/

    Abstract
    Nucleoli are emerging as key sensors of cellular stress and regulators of the downstream consequences on proliferation, metabolism, senescence, and apoptosis. NF-kB signalling is activated in response to a similar plethora of stresses, which leads to modulation of cell growth and death programs. While nucleolar and NF-kB pathways are distinct, it is increasingly apparent that they converge at multiple levels. Exposure of cells to certain insults causes a specific type of nucleolar stress that is characterised by degradation of the PolI complex component, TIF-IA, and increased nucleolar size. Recent studies have shown that this atypical nucleolar stress lies upstream of cytosolic IκB degradation and NF-kB nuclear translocation. Under these stress conditions, the RelA component of NF-kB accumulates within functionally altered nucleoli to trigger a nucleophosmin dependent, apoptotic pathway. In this review, we will discuss these points of crosstalk and their relevance to the anti-tumour mechanism of aspirin and small molecule CDK4 inhibitors. We will also briefly discuss how crosstalk between nucleoli and NF-kB signalling may be more broadly relevant to the regulation of cellular homeostasis and how it may be exploited for therapeutic purpose.

    Emphasis mine.

    And this is about crosstalk with Endoplasmic Reticulum:

    The Crosstalk of Endoplasmic Reticulum (ER) Stress Pathways with NF-kB: Complex Mechanisms Relevant for Cancer, Inflammation and Infection.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6027367/

    Abstract
    Stressful conditions occurring during cancer, inflammation or infection activate adaptive responses that are controlled by the unfolded protein response (UPR) and the nuclear factor of kappa light polypeptide gene enhancer in B-cells (NF-kB) signaling pathway. These systems can be triggered by chemical compounds but also by cytokines, toll-like receptor ligands, nucleic acids, lipids, bacteria and viruses. Despite representing unique signaling cascades, new data indicate that the UPR and NF-kB pathways converge within the nucleus through ten major transcription factors (TFs), namely activating transcription factor (ATF)4, ATF3, CCAAT/enhancer-binding protein (CEBP) homologous protein (CHOP), X-box-binding protein (XBP)1, ATF6α and the five NF-kB subunits. The combinatorial occupancy of numerous genomic regions (enhancers and promoters) coordinates the transcriptional activation or repression of hundreds of genes that collectively determine the balance between metabolic and inflammatory phenotypes and the extent of apoptosis and autophagy or repair of cell damage and survival. Here, we also discuss results from genetic experiments and chemical activators of endoplasmic reticulum (ER) stress that suggest a link to the cytosolic inhibitor of NF-kB (IκB)α degradation pathway. These data show that the UPR affects this major control point of NF-kB activation through several mechanisms. Taken together, available evidence indicates that the UPR and NF-kB interact at multiple levels. This crosstalk provides ample opportunities to fine-tune cellular stress responses and could also be exploited therapeutically in the future.

    Emphasis mine.

    Another word that seems to recur often is “combinatorial”.

    And have you read? These two signaling pathways “converge within the nucleus through ten major transcription factors (TFs)”. Wow! 🙂

  193. 193
    OLV says:

    GP,

    the topic you chose for this OP is fascinating indeed.

    Here’s a related paper:

    Transcription factor NF-kB in a basal metazoan, the sponge, has conserved and unique sequences, activities, and regulation
    Leah M. Williams, Melissa M. Inge, Katelyn M. Mansfield, Anna Rasmussen, Jamie Afghani, Mikhail Agrba, Colleen Albert, Cecilia Andersson, Milad Babaei, Mohammad Babaei, Abigail Bagdasaryants, Arianna Bonilla, Amanda Browne, Sheldon Carpenter, Tiffany Chen, Blake Christie, Andrew Cyr, Katie Dam, Nicholas Dulock, Galbadrakh Erdene, Lindsie Esau, Stephanie Esonwune, Anvita Hanchate, Xinli Huang, Timothy Jennings, Aarti Kasabwala, Leanne Kehoe, Ryan Kobayashi, Migi Lee, Andre LeVan, Yuekun Liu, Emily Murphy, Avanti Nambiar, Meagan Olive, Devansh Patel, Flaminio Pavesi, Christopher A. Petty, Yelena Samofalova, Selma Sanchez, Camilla Stejskal, Yinian Tang, Alia Yapo, John P. Cleary, Sarah A. Yunes, Trevor Siggers, Thomas D. Gilmore

    doi: 10.1101/691097

    Biological and biochemical functions of immunity transcription factor NF-κB in basal metazoans are largely unknown. Herein, we characterize transcription factor NF-κB from the demosponge Amphimedon queenslandica (Aq), in the phylum Porifera. Structurally and phylogenetically, the Aq-NF-κB protein is most similar to NF-κB p100 and p105 among vertebrate proteins, with an N-terminal DNA-binding/dimerization domain, a C-terminal Ankyrin (ANK) repeat domain, and a DNA binding-site profile more similar to human NF-κB proteins than Rel proteins. Aq-NF-κB also resembles the mammalian NF-κB protein p100 in that C-terminal truncation results in translocation of Aq-NF-κB to the nucleus and increases its transcriptional activation activity. Overexpression of a human or sea anemone IκB kinase (IKK) can induce C-terminal processing of Aq-NF-κB in vivo, and this processing requires C-terminal serine residues in Aq-NF-κB. Unlike human NF-κB p100, however, the C-terminal sequences of Aq-NF-κB do not effectively inhibit its DNA-binding activity when expressed in human cells. Tissue of another demosponge, a black encrusting sponge, contains NF-κB site DNA-binding activity and an NF-κB protein that appears mostly processed and in the nucleus of cells. NF-κB DNA-binding activity and processing is increased by treatment of sponge tissue with LPS. By transcriptomic analysis of A. queenslandica we identified likely homologs to many upstream NF-κB pathway components. These results present a functional characterization of the most ancient metazoan NF-κB protein to date, and show that many characteristics of mammalian NF-κB are conserved in sponge NF-κB, but the mechanism by which NF-κB functions and is regulated in the sponge may be somewhat different.

  194. 194
    gpuccio says:

    To all:

    OK, this very recent paper (published online 2019 Jan 8) seems to be exactly about what I discuss in the OP:

    On chaotic dynamics in transcription factors and the associated effects in differential gene regulation

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6325146/

    The abstract:

    Abstract

    The control of proteins by a transcription factor with periodically varying concentration exhibits intriguing dynamical behaviour. Even though it is accepted that transcription factors vary their dynamics in response to different situations, insight into how this affects downstream genes is lacking. Here, we investigate how oscillations and chaotic dynamics in the transcription factor NF-kB can affect downstream protein production. We describe how it is possible to control the effective dynamics of the transcription factor by stimulating it with an oscillating ligand. We find that chaotic dynamics modulates gene expression and up-regulates certain families of low-affinity genes, even in the presence of extrinsic and intrinsic noise. Furthermore, this leads to an increase in the production of protein complexes and the efficiency of their assembly. Finally, we show how chaotic dynamics creates a heterogeneous population of cell states, and describe how this can be beneficial in multi-toxic environments.

    I think I will read it carefully and come back about it later. 🙂

  195. 195
    gpuccio says:

    To all:

    The paper linked at #194 is really fascinating. I have given it a first look, but I will certainly go back to digest some aspects better (probably not the differential equations! 🙂 ).

    Two of the authors are from the Niels Bohr Institute in Copenhagen, a really interesting institution. The third author is from Bangalore, India.

    For the moment, let’s start with the final conclusion (I have never been a tidy person! 🙂 ):

    Chaotic dynamics has thus far been underestimated as a means for controlling genes, perhaps because of its unpredictability. Our work shows that deterministic chaos potentially expands the toolbox available for single cells to control gene expression dynamically and specifically. We hope this will inspire theoretical and experimental exploration of the presence and utility of chaos in living cells.

    The emphasis on “toolbox” is mine, and the reason I have added it should be rather self-evident. 🙂

    Let’s think about that.

  196. 196
    gpuccio says:

    To all:

    Indeed, I have not been really precise at #194, I realize. I said:

    “OK, this very recent paper (published online 2019 Jan 8) seems to be exactly about what I discuss in the OP:”

    But that is not really true. This paper actually adds a new concept to what I have discussed in the OP. In fact the paper, while also briefly discussing random noise, is mainly about the effects of a chaotic system, something that I had not considered in any detail in my OP. My focus there was on random noise and far from equilibrium dynamics. Chaotic systems certainly add a lot of interesting perspective to our scenario.

  197. 197
    gpuccio says:

    OLV at #193:

    Interesting paper.

    Indeed I blasted the human p100 protein against sponges, and there is good homology (total bitscore 523 bits).

    So yes, the system is rather old in metazoa.

    Consider that the same protein, blasted against single celled eukaryotes, gives only a low homology (about 100 bits), limited to the central ANK repeats. No trace of the DNA binding domain.

    So, the system seems really to arise in Metazoa, and very early.

  198. 198
    EugeneS says:

    GP #129,

    Thanks very much. I will give it a read.

    Life comes from life, once it has been started, that is for sure. However, it does not apply equally either to creation (design, for the purposes of this discussion) or to imagined abiogenesis. It is clear that the vitalistic rule is violated in the case of abiogenesis, but it is also violated in the case of design because the relation between the designer and the designed is not that of birth/descent. It can be more likened to the relation between the painter and the painting. Fundamentally, the painting is of a different nature from the painter, whereas descent implies the same nature between the ancestor and the progeny.

    As an aside, a grumpy remark, I do not like the new GUI on this blog 😉 The old one was way better. This one feels like one of .gov British sites for the plain English campaign. It is less convenient when accessed with a mobile phone. But it does not matter…

  199. 199
    Silver Asiatic says:

    EugeneS

    However, it does not apply equally either to creation (design, for the purposes of this discussion) or to imagined abiogenesis. It is clear that the vitalistic rule is violated in the case of abiogenesis, but it is also violated in the case of design because the relation between the designer and the designed is not that of birth/descent. It can be more likened to the relation between the painter and the painting. Fundamentally, the painting is of a different nature from the painter, whereas descent implies the same nature between the ancestor and the progeny.

    That is a great point and analogy. Yes, I think where there is design then there is a purposeful, creative act and what follows from that cannot be considered descent for the reason you give.

  200. 200
    gpuccio says:

    EugeneS:

    That is an important point.

    The question is: can life be reduced to the designed information that sustains it?

    If that is the case, then design explains everything, both at OOL and later.

    If the answer is no, all is different.

    As we still don’t understand what life is, from a scientific point of view, we have no final scientific answer. My personal opinion is that the second option is true, and that would explain why in our experience life comes only from life.

    If life cannot be reduced to the designed information that sustains it, then certainly OOL is a case where both a lot of designed functional information appears and life is started, whatever that implies.

    For what happens after OOL, all depends on the model one accepts. I don’t know if you have followed the discussion here between BA and me. In particular, the three possible models I have discussed at #43.

    In my model (model b in that post) after OOL things happen by descent with added design. So, in that model, it is true after OOL that life always comes from life (if the descent is universal), and only OOL would be a special event in that sense. The new functional information, in all cases, is the product of design interventions.

    In model c, instead, each new “kind” (to use BA’s term) is designed from scratch at some time. So, the appearance of each new kind has the same status as an OOL event.

    Model a is just the neo-darwinian model, where everything, at all times, happens by RV + NS, and no design takes place, least of all a special, information independent start of life.

  201. 201
    john_a_designer says:

    Gp,

    I am still trying to define precisely what a transcription factor is. Earlier @ 91, I asked “are there transcription factors for prokaryotes?” According to Google, no.

    Eukaryotes have three types of RNA polymerases, I, II, and III, and prokaryotes only have one type. Eukaryotes form an initiation complex with the various transcription factors that dissociate after initiation is completed. There is no such structure seen in prokaryotes.

    https://uncommondescent.com/intelligent-design/controlling-the-waves-of-dynamic-far-from-equilibrium-states-the-nf-kb-system-of-transcription-regulation/#comment-680819

    (But maybe what I am not understanding is the result of a difference of semantics, context or nuance.)

    Recently, I ran across another source which seemed to suggest that prokaryotes do have TFs.

    What has to happen for a gene to be transcribed? The enzyme RNA polymerase, which makes a new RNA molecule from a DNA template, must attach to the DNA of the gene. It attaches at a spot called the promoter.

    In bacteria, RNA polymerase attaches right to the DNA of the promoter. You can see how this process works, and how it can be regulated by transcription factors, in the lac operon and trp operon videos.

    In humans and other eukaryotes, there is an extra step. RNA polymerase can attach to the promoter only with the help of proteins called basal (general) transcription factors. They are part of the cell’s core transcription toolkit, needed for the transcription of any gene.

    https://www.khanacademy.org/science/biology/gene-regulation/gene-regulation-in-eukaryotes/a/eukaryotic-transcription-factors

    This article seems to suggest that the lac operon is a transcription factor but then in the next paragraph it states: “In humans and other eukaryotes, there is an extra step. RNA polymerase can attach to the promoter only with the help of proteins called basal (general) transcription factors.”

    So is the lac operon a transcription factor? Is the term operon synonymous with transcription factor, or is there a difference? In other words, do “operons” have the same role in transcription as TFs?

    Is there a strong homology between the lac operon which turns on the gene for lactose metabolism in e coli and the TF/lactose metabolism gene in eukaryotes, including humans? Does this have anything to do with lactose intolerance?

  202. 202
    gpuccio says:

    John_a_designer:

    OK, that’s how I see it.

    In eukaryotes we must distinguish between general TFs, which act in much the same way on all genes and are required to initiate transcription by helping recruit RNA polymerase at the promoter site, and specific TFs, which bind at enhancer sites and activate or repress transcription of specific genes. The NF-kB system described in the OP is a system of specific TFs.

    Now, in eukaryotes there are six general TFs. Archaea have three. In bacteria, sigma factors play the role of general TFs. Sigma factors, archaeal general TFs and eukaryotic general TFs seem to share some homology. I think that the archaeal system, however, is much more similar to the eukaryotic system, and that includes RNA polymerases.

    Then bacteria have a rather simple system of repressors or activators, specific for particular genes, or rather operons. Those repressors and activators bind DNA near the promoter of the specific operon. They are in some way the equivalent of eukaryotic specific TFs, but the system is by far simpler.

    You can find some good information about bacteria here:

    https://bio.libretexts.org/Bookshelves/Cell_and_Molecular_Biology/Book%3A_Cells_-_Molecules_and_Mechanisms_(Wong)/9%3A_Gene_Regulation/9.1%3A_Prokaryotic_Transcriptional_Regulation

    The operon is simply a collection of genes that are physically near, are transcribed together from one single promoter, and are functionally connected.

    So, the lac operon is formed by three genes, lacZ, lacY, lacA, sharing one promoter. A sigma factor binds at the promoter, together with RNA polymerase. A repressor and an activator may bind DNA near the promoter to regulate operon transcription.

    While archaea are more similar to eukaryotes in the system of general TFs, the regulation of transcription by one or two repressors or activators seems to be similar to what is described for bacteria.

    Finally, there is another important aspect where archaea are more similar to eukarya. Their chromatin structure is based on histones and nucleosomes, like in eukaryotes, although the system is rather different from the corresponding eukaryotic system.

    Instead, bacteria have their form of DNA compression, but it is not based on histones and nucleosomes.

    This, as far as I can understand.

  203. 203
    john_a_designer says:

    Thank you Gp,

    The link you provided cleared up some misunderstanding on my part (operons are not TFs, but groupings of genes that TFs help activate) and clarified a number of other things.

  204. 204
    gpuccio says:

    To all:

    From the paper above mentioned, a paragraph about the difference between random noise and chaos.

    What is chaos?
    When we speak of chaos, we refer to deterministic chaos. Deterministic means that if one knows the initial state of the system exactly, then the dynamical trajectory will be the same every time it is initiated in that state. However, any two initial conditions infinitesimally apart will have exponentially diverging trajectories as time proceeds making it practically impossible to predict the future dynamics—hence chaos28–31. It is important to note that the unpredictability of chaos does not arise from stochasticity—the latter refers to a non-deterministic system with noise. Noise is observed in most real-world systems and can often result in very different dynamics than the deterministic version of the same system. For example, noise can cause transitions between different states which would never occur if the system were deterministic. Thus, both deterministically chaotic and noisy systems exhibit unpredictability of their future trajectories, but for very different underlying reasons.

  205. 205
    gpuccio says:

    To all:

    The paper is about a simplified model of the interaction between two different oscillating systems, NF-kB and TNF. The interaction between the two can generate, in some circumstances, a chaotic system.

    Our investigation starts with a model of the transcription factor NF-kB that is known to exhibit oscillatory dynamics3,9,22. A schematic version of this is found in Fig. 1a and a full description is presented in the Supplementary Note 1. In this deliberately simplified model, the oscillations arise from a single negative feedback loop between NF-kB and its inhibitor IkBα, and can be triggered by TNF via the activation of the IkB kinase (IKK). We then allow TNF to oscillate.

    Indeed, the main cause of the oscillations in the NF-kB system seems to be the alternation between degradation of IkB alpha (the inhibitor), IOWs the activation of the dimer, and the re-synthesis of IkB alpha, a form of negative feedback.
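    As a toy illustration of that negative feedback, here is a sketch of a two-variable delayed negative feedback loop of the same general shape (active factor induces its own inhibitor, with a lag for re-synthesis). This is NOT the paper's model, which is larger and includes IKK and TNF input: the equations, the delay, and every parameter value here are my own assumptions, chosen only to show that such a loop can sustain oscillations.

```python
# Toy delayed negative feedback loop (hypothetical parameters):
#   dN/dt = kp / (1 + I^h) - N     N: active factor, inhibited by I
#   dI/dt = ks * N(t - tau) - I    I: inhibitor, re-synthesized in
#                                     response to N, with delay tau
# Integrated with a plain Euler scheme and a history buffer for the delay.

def simulate(t_end=60.0, dt=0.01, tau=2.0):
    """Return the time series of N under delayed negative feedback."""
    kp, ks, h = 2.0, 2.0, 4          # illustrative values only
    steps = int(t_end / dt)
    delay_steps = int(tau / dt)
    n_hist = [0.1] * (delay_steps + 1)   # constant initial history for N
    N, I = 0.1, 0.0
    Ns = []
    for _ in range(steps):
        n_delayed = n_hist[0]            # N(t - tau)
        dN = kp / (1.0 + I ** h) - N
        dI = ks * n_delayed - I
        N += dt * dN
        I += dt * dI
        n_hist.pop(0)
        n_hist.append(N)
        Ns.append(N)
    return Ns

Ns = simulate()
# After the transient, N keeps crossing its mean back and forth:
# the loop settles into sustained oscillations, not a fixed point.
tail = Ns[len(Ns) // 2:]
mean = sum(tail) / len(tail)
crossings = sum(1 for x, y in zip(tail, tail[1:]) if (x - mean) * (y - mean) < 0)
```

    With the delay removed (tau near zero) the same loop relaxes to a steady state, which is the intuitive reason the degradation/re-synthesis lag matters for the oscillations discussed above.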

  206. 206
    pw says:

    GP,

    at what point is it believed that the oscillations in the NF-kB system appeared for the first time in biological history?

Leave a Reply